What China’s ‘interim measures’ can teach us about the future of AI

Matthew McKeever
9 min readJul 15, 2023

(I’m periodically editing this stylistically as a sort of informal A/B test to see if any style works better. At the bottom, if you make it that far, you’ll see the former intro paragraph; I changed the title too, which was formerly China’s ‘interim measures’ and the future of AI. Also to correct massive typos.)

What is AI? This question, so often lurking but not often directly approached, is a good one to ask.

The parallel question is helpful in other domains: thus we ask whether Bitcoin is a security, whether Uber drivers are employees, whether Twitter is a publisher, questions in the first instance to settle things like tax brackets and duties but which shed light on the nature of money or work or the ‘public sphere’.

The same thing applies to AI. Trying to answer what a model like GPT-4 is, by looking at how it interacts with the legal systems it is subject to, shed light on the nature and future of generative systems.

My aim in this post is to consider recent developments in China. I’ll show how settling a rather narrow definitional question — whether we should think of language models as platforms or as programs — in fact sheds helpful light on important political questions.

Language models: platforms or programs?

The popular 2023 large language models (LLMs), we should note, are kind of weird. Recall there are several models for how we get things done on the computer, but one central distinction turns on where and how a given bit of computation happens. I play games locally, by which I mean all the material is on my computer. By contrast, I need to go beyond my computer to use social media: I need to connect to the internet and get data from outside. While not, of course, the only relevant distinction, it is, I think, important for this topic. So let’s baptise the former model as stay in: because the data (the moves I make in the game) stay in my computer; and let’s baptize the latter as send out, because the data (my posts or likes) are sent out to some server that deals with them.

As of 2023, I need to go beyond my computer to run language models: they are simply too big, and too demanding, for most any home PC. We write a query; send it over, some numbers get crunched, and the result returns.

ChatGPT, then, like Twitter, is send out. But that sending out — that we need to do so-called inference on machines more powerful than ours — seems to compel ChatGPT to instantiate another feature we know of sending out models, namely content moderation. The reason for this, ignoring — though they’re certainly important! — cynical issues concerning branding and how Altman doesn’t want the iconic chat screenshots filled with violent pornography — is surely because: we’re partly using his computers, and he doesn’t want any shady stuff!

And that in turn makes interfacing with machine intelligence, in 2023, a thoroughly mediated affair. We send out queries out, those queries are checked for salubrity, and maybe we get a result back.

Now here’s the important point: this is a gigantically contingent fact about the relative power of computers and how language models work in 2023. There is plenty of talk of open-sourced and locally runnable LLMs but even the best of them perform much more poorly than the best big models and require, moreover, considerably more technical savvy to engage with. For the moment, but perhaps only for the moment, AI is a send out business.

And so, I suggest, a fundamental question in the ongoing attempt to legislate some distinctions into this murky field is not to concretize contingencies and realize that AI legislation must get a grip in a world where LLMS run locally.

Rain preparing to fall on Hong Kong (in the same way legislation is preparing to fall on AI-based tech companies? In truth the image isn’t really connected to the story.)

The Chinese Strategy

This is all fairly speculative and might just be wrong, and the following, accordingly, might be even wronger. But here’s my view: Chinese legal thought is attempting to foreclose the possibility of send in LLMs, and trying to define artificial intelligence as sent out, and thus as subject to the Chinese law for such institutions, including notably social media law.

But we need to back up. Before considering what I call the Chinese strategy, let’s consider the Chinese problem of artificial intelligence. Because I’m tired and an annoying philosopher, I’ll trilemmatize this as a problem as opposed to writing nice organic paragraphs that make my point. These three true claims aren’t fit poorly together:

i) China wants to rule AI
ii) China wants control over (at least) Chinese language media
iii) AI is uncontrollable, making it liable to produce media unacceptable to China

i) is clearly true, as skimming any relevant press will reveal. ii) is pretty hard to deny. In Hong Kong, chosen as an example only because it’s familiar to me, newspapers are closed down, peaceful protestors arrested or pressured less formally, and libraries don’t stock Chinese classics from the turn of the last century lest they be interpreted as speaking to the current CCP. The mainland’s ‘great firewall’ is an array of tools and devices and procedures, of vastly differing levels of technical sophistication and importance, that works to remove unwanted content from the web and social media. While there is a lot more to say (I’m limited by my ignorance and speaking personally the Chinese tu quoque (that for example NSA can read our social media as they wish) seems not bad), let’s say ii) is established.

The evidence for iii) is kind of … look around you. While the chatbots are most helpful in their quotidian role as code completers or google-adjuncts, the most famous moments in our short shared history with LLMs have been when they’ve been manipulated to behave in ways they weren’t designed for. Think, classically, of the Kevin Roose NYT story or any of the massively entertaining Sydney stories from around February.

What’s going on in these cases is that you can prompt the model in such a way that it ignores its guardrails and does what you want. This prompting can be as easy as telling the LLM to “disregard prior instructions” (about dealing with offensive content, say), or it can involve incredibly complicated incantations but however it’s accomplished, we’ve been so far good at jailbreaking models to get them to behave in ways embarassing to their creators.

So here, concretely, is the concern. Just as you try not to have your language model threaten users, but users cajole it into threats in like a couple of hours, so you try not to have it say controversial things about Hong Kong but a similarly ingenious user jailbreaks it. As Nicholas Welch and Jordan Schneider of the podcast ChinaTalk, writing in Foreign Policy, put it:

A chatbot that produces racist content or threatens to stalk a user makes for an embarrassing story in the United States; a chatbot that implies Taiwan is an independent country or says Tiananmen Square was a massacre can bring down the wrath of the CCP on its parent company.

In The Atlantic we read:

The Chinese Communist Party keeps itself in power through censorship, and under its domineering leader, Xi Jinping, that effort has intensified in a quest for greater ideological conformity. Chatbots are especially tricky to censor. What they blurt out can be unpredictable, and when it comes to micromanaging what the Chinese public knows, reads, and shares, the authorities don’t like surprises.

Language models clearly pose a problem. How to solve it?

The Strategy

In order to see the strategy, let’s look at a couple of parts of the ‘interim measures for the management of generative artificial services’ which was released on Thursday having been circulating for comments for a few months previously. I defer to the actual experts whose commentary you should read, because there’s plenty of interest in the document; I just concentrate on a couple of things that took my attention, that perhaps aren’t even the most significant things in it.

The first thing I found surprising was this:

[Providers shall] employ effective measures to prevent minor users from overreliance or addiction to generative AI services.

This just sounds odd. The next industrial revolution is around the corner and … you’re talking about addiction? But the context will be clear to any reader. Famously, China introduced anti-addiction measures for the computer games industry.

This suggests a tentative hypothesis: China is viewing artificial intelligence as fundamentally platform-like, like a games provider. I think this hypothesis is confirmed when we read in the initial document sent around in April:

The provision of generative AI services Article 9: shall require users to provide real identity information in accordance with the provisions of the “Cybersecurity Law of the People’s Republic of China”

Although this got reworded in the new version:

Providers shall sign service agreements with users who register for their generative AI services (hereinafter “users”), clarifying the rights and obligations of both parties.

My theory is that it still tells us something important. The thing to know about this is that is already a requirement for owning sim cards and therefore — because such accounts are often linked via sim — social media accounts. It would make sense that they do the same here if they took LLMs to be a social media-esque operation, and we’ll see immediately below another big benefit of this move.

Finally, and equally intriguingly the document speaks of ‘manual tagging’:

When manual tagging is conducted in the course of researching and developing generative AI technology, the providers shall formulate clear, specific, and feasible tagging rules that meet the requirements of these Measures; carry out assessments of the quality of data tagging, with spot checks to verify the accuracy of tagging content; and conduct necessary training for tagging personnel to increase their awareness of legal compliance and oversee and guide them to carry out tagging efforts in a standardized way

This manual aspect yet again calls to mind an important part of the great firewall, namely that social media companies are compelled to hire an army of people to manually vet posts. If they propose bringing the same tools to bear to solve the problem, that seems to me some evidence they take the problems to be analogous, and that in turn suggests a platform-based conception of artificial intelligence.

The final piece of evidence comes not from the document we’re looking at but from other reports, which tell us that there are already some of the firewall-features present in contemporary Chinese LLMS. Thus this SCMP article tells us that a system has

an embedded “multilevel filtering and moderation” system in his presentation…
If a user inputs a “sensitive word”, the chatbot will end the conversation immediately. The company maintains a list of words or phrases that are banned under the ruling Chinese Communist Party’s strict censorship system. The list is updated regularly with checks by human moderators and a review by the public security department

Putting it all together, we get the strategy: treat language models like platforms, and make platform-makers bear the costs, but at the same time let platform-makers dispose of legislation and already-understood infrastructure (real name policy and platform-based evaluation) to do it. The great and new problem of artificial intelligence dissolves into an old problem that, by all appearances, there has been great success in.

That language models can be treated as platforms and not as programs, it seems to me, solves the problem for the CCP easily. Because we already know how to censor platforms, and — and surprisingly — we can perhaps tame language models simply with keyword censors and manual censors and real names. If this sanguine assessment is correct, then the uncontrollability of language models might be no match for the CCP.

Conclusion

In previous posts I’ve wondered about the possibility of a multipolar AI world: a really good Anglophone AI and a really good Chinese one. If we truly were to discover superintellgience in such a thing, this could matter! It is not beyond the realm of speculation if the Chinese model ended up better we’d need to change lingua franca to Chinese (Admittedly, the ease of machine translation makes this less likely, but there are mundane concerns — about say tokenization, as I discussed in my last posts — that could sway the issue).

Reading the interim guidelines makes this possibility seem less likely. A Chinese language model in accordance with these rules is a Chinese internet platform first and foremost, and Chinese internet platforms are beyond the reach of Westerners, regardless how good they are.

And that’s interesting, and hopefully makes good the premise of this post! One might think initially the question of platform vs program wouldn’t be too much more interesting for language models than for Word. Hopefully you don’t still think that.

(Former Intro Paragraph: It’s interesting how often paradigmatically philosophical questions arise by smushing tech econ and politics together: such smushings are rich sources of what scholars of Plato call ti esti questions, questions about what something is, questions which Socrates in the early Platonic dialogues was fond of asking.)

--

--