The trashified internet meets AI-generated content
And then China's censorship mechanism enters the chat
Image courtesy of the Cyberspace Administration of China
Search engines and platforms know you intimately. You’ve probably noticed them advertising to you with uncanny precision, and maybe you’ve adopted protocols such as never saying aloud anything you don’t want to see converted into an Instagram ad.
That version of you – the consumer of goods, services, and unverified content – is the version of you AI-generated content wants to bombard with individualized marketing emails and articles.
They’re making progress reaching you. We are now in a puberty-like phase where most people still use search engines, but AI is advanced enough to flood them with garbage content. As we all plug AI into our lives to varying degrees in the ways we can control, the “regular” internet we’re used to has already pressed Go on AI-generated content (AIGC), resulting in a barrage of AI-generated spam, “lifted” articles and text, and at-best-bad videos and images. (That vertical integration makes sense, since Google and Meta have heavily invested in their own models.)
Nilay Patel, editor-in-chief of The Verge, described the amount of AIGC currently online to Ezra Klein in a podcast interview on the subject as “more than you think and about to go to infinity.” Google admitted to 404 Media that it is not particularly interested in deciphering whether a news story posted to Google News is human-written or AI-generated. The journalist Alexandra Levine posted earlier this week about finding an AI ripoff of her work for the first time, noting the story took many interviews and months to report.
In China, regulators look at AIGC and see the potential for instantaneously duplicable and dangerous messages relating to politics or society. But that doesn’t mean their goal is simply to eradicate AIGC and the economic growth that comes with it. Instead, China’s leaders see an opportunity to be a global leader on crafting standards for a digital sphere transformed by AI; that’s why Xi Jinping announced the Global AI Governance Initiative in October.
China has also explicitly linked AIGC to national security; in a two-part video marking National Security Day last week, Yu Nenghai, executive dean at the University of Science and Technology of China, explained the risks of AIGC. Spliced between abruptly cut images that ironically called to mind what you’d get if you told an AI image generator to depict “technology,” an apparently not-media-trained Yu read his speech off a tablet.
He cautioned that if not properly managed and moderated (where the former refers to forms of regulation and the latter to censorship), AI-generated content could pose threats to China’s “cultural, technological, and economic security.” He also warned of the “Western values and ideologies” embedded in some models, which providers should not fall victim to.
Clear? Bright?
The Cyberspace Administration of China’s yearslong 清朗 “clear and bright” campaign has a new target: AIGC. The 2024 Special Action lists among its priorities standardizing synthetic content labeling and deleting AIGC that is “prone to causing confusion and misrecognition to the public.”
Effective labeling could be good for users, regulators, and companies everywhere. “Confusion-prone” content might include synthetic information created with the intent of spreading false and alarming claims: a year ago, for example, a man in Gansu province was arrested for using ChatGPT to fabricate a story about a deadly train crash that never occurred. It was the first arrest anywhere in the world for using a chatbot to generate harmful information.
The “clear and bright” special action also targets marketing – which may be lower stakes but also occurs much more frequently than most recipients would like. “Dispose of non-compliant accounts that use generation and synthesis technology to create rumors and marketing hype,” it says.
As I’ve written before, the CAC has an unsettling relationship with getting rid of “rumors.” (Hint: rumor refutation is used as grounds for censoring political and historical subjects.) But still, product- and marketing-related hype certainly exists – and many of us wish it didn’t.
Synthetic marketing, which is on track to become more prominent, will make our daily lives worse and more confusing. Yet it ranks lower than the action-movie tales Silicon Valley executives tell about “severe AI risk.” It’s not that the systems they’re creating won’t ever spawn murderous robots; it’s that right now, false (and irritating) information in all forms is already being distributed en masse.
The now-term issues, like regulating synthetic content or IP concerns surrounding copyrighted training data, are glossed over or sent to the courts. Facilitating a future utopia – or preventing dystopia – is too tantalizing.
In that context, China’s censorship apparatus may make it oddly well-suited to curb the proliferation of garbage AIGC. In doing so, it would be responding to a global now-term issue (limiting the spread of false and troubling AIGC) that many democratic countries aren’t willing or able to prioritize, precisely because of its unique now-and-always priority of controlling the information ecosystem.
It’s debatable whether a censored but less AIGC-trashified internet would actually be “clear and bright.” But it would mark one good use of an otherwise highly problematic speech control system.
You know what you eat
People might still be hemming and hawing over whether they want to AI-ify the content they consume, but the (very few) people who rule the internet have decided for us: we’re all in. Recognizing and protecting ourselves from the scams or falsehoods embedded in AIGC is our own responsibility.
In China, authorities’ mandate to control content might result in AIGC reaching users through more transparent channels, like a griefbot service that allows customers to simulate the experience of speaking to a deceased loved one. Unlike the recipients of a flurry of authorless emails trying to sell flights that may or may not be real, customers who sign up for AIGC-based services like that at least know what they are getting.
But either way, humans are the consumers – distinct from being the beneficiaries. AI firms, whose interests have fused directly with those of search engines and social media platforms, want their systems consuming a steady diet of human-generated content. Whether the diet humans consume leans human-generated or synthetic is not their concern. It should be yours.
What else I’m writing
I’ve been covering the demise of TikTok in the US for Forbes. So far I’ve covered why and how Congress’ China obsession obscures other dangers of social media as well as the significance of linking foreign aid to the far less urgent (and indeed, arguably unnecessary) TikTok ban. I will have more to say.
A report I worked on for the United Nations on the state of sustainable development financing is out. Read it and, understandably, weep: things are bleak. But as we reiterate, the time to rescue the sustainable development goals is quite evidently right now — and still possible.