
Will AI Create Permanent Dictatorships?

An Interview with Molly Roberts and Jennifer Pan

In his recent essay, The Adolescence of Technology, Anthropic’s CEO, Dario Amodei, discussed how AI might cement the status of dictatorships. He writes:

There is the possibility that authoritarian governments might use powerful AI to surveil or repress their citizens in ways that would be extremely difficult to reform or overthrow. Current autocracies are limited in how repressive they can be by the need to have humans carry out their orders, and humans often have limits in how inhumane they are willing to be. But AI-enabled autocracies would not have such limits.

Others are more optimistic about the possibility that AI and tools like Claude and ChatGPT might undermine dictatorship and empower citizens because they are difficult to censor. The Wall Street Journal recently ran an editorial titled “AI Is Bound to Subvert Communism.”

Will AI empower dictatorships to exert Orwellian-level control over their citizens? Or will these tools sow the seeds of their undoing?

This is in some sense the most significant question we can be asking about the intersection of AI, democracy, and technology.

To answer it, I interviewed two of my oldest friends in the China field. Molly Roberts is a professor at UCSD and the author of Censored, a book on how China’s censorship and propaganda apparatus works. Jennifer Pan, her coauthor and collaborator, is a professor at Stanford and is considered the leading scholar of political communication in China. Both are tech-savvy, big-data-style researchers who understand both how this technology is developing and how it’s actively being deployed in China and places like it.

In our interview, we covered:

  • How AI will change the balance of power between citizens and regimes

  • Whether Chinese models suffer a cost for being trained on censored data

  • The cat-and-mouse game between dictators and dissidents

  • The hidden ways AI may empower citizens to push back

You can watch the full episode in the embedded player above, or anywhere you get your podcasts. If you are able to rate, share, and subscribe to the pod, that is always appreciated, as it helps me grow my audience.

A lightly edited transcript is below. Thanks for supporting my work.

Rory


Transcript

AI and the Balance of Power Question

Rory Truex: So let’s just start with the biggest picture question possible, which is how do you see AI affecting the balance of power between these authoritarian governments and their populations?

Jennifer Pan: My view is that any new technology, including AI, is not a priori privileged toward either ordinary citizens or governments and other entities with more power and resources. We always expect that it will be, and then what happens typically isn’t what we expected in the first place.

If we look back at social media: when it first emerged, it was talked about as liberation technology. You have the Arab Spring, which seemed to fulfill the promise that social media would allow anyone to produce information, allowing voices that were previously silenced to be expressed, making people more informed, potentially leading to revolution and democratization. Now, more than a decade later, we know that social media does have coordinating effects, but it didn’t lead to democratization and those downstream political changes. At the same time, the technology did fundamentally change how information is produced: it decreased the cost of producing and disseminating information and led to an explosion of information and audience fragmentation. And that change has more fundamental implications for the power of authoritarian regimes and of citizens vis-à-vis them.

So if we look at AI, I think it’s now going to reduce the cost of information production to essentially zero. So we have even more information. But I don’t know that we know the structural changes that will result, changes that could alter the incentives for both authoritarian regimes and individual producers, or how that will play out. I don’t think it a priori favors one side or the other. It depends on how these technologies fundamentally change the structure of how information is produced and the incentives they create. And right now, even if LLMs create tons of information, the constraint is still attention. People’s attention is limited. So even if there’s exponentially more information, you still have to get people to pay attention to it.

Rory Truex: Molly, what’s your take on the balance of power question?

Molly Roberts: Yeah, I mean, I agree with Jen. Reading the Wall Street Journal opinion piece by Berg, I think we heard that a lot when social media came along — that it would be democratizing, that this would only be good for democracy. And what we learned, I think, was that it was both. At times it was used very much to organize, and at times it was used very much to repress. Which way it went depended on when it was used, how it was used, and who was using it better. And I think AI will be very similar.

So what is AI? AI is intelligence that’s automated, quick, and outside of humans. And so this does allow more authoritarian technology in some cases. The government can collect tons of information, and instead of having humans go through all of it, they can automate it and figure out what’s going on quicker. Perhaps surveillance can be better. That might be a bit more efficient. It might reduce what we call the principal-agent problem in surveillance, where the government doesn’t have to rely on humans who might have different incentives; it can rely on a machine to produce that information. So there are ways in which I can see it making autocrats stronger, but is that the full story? Definitely not. There are also lots of ways that civil society, individuals, and activists can use that same technology to fight back. And we’ve seen this with every new technology, from the printing press to social media to AI.

People figure out how the government’s using AI and how it’s surveilling, and then we think about how people adapt to that, saying: okay, I’m not going to do this thing because I know it will get sucked up and used in surveillance; I’m going to do this other thing instead. It’s really a cat-and-mouse game — we always say that about censorship online: the government does one thing, people react, and vice versa. And so I think it’s going to be a very similar phenomenon as AI plays out.

Jennifer Pan: I totally agree with everything Molly said, especially the cat-and-mouse dynamic. But I think there’s still so much we don’t know about how the fundamental information dynamics will change. Let’s say authoritarian regimes do use these models to process information. Will that increase the efficiency of those efforts? I think that’s unknown, because these same models are now generating so much more information. On net, is it less costly to sort through the information, or, because there’s more of it, actually more difficult? I just don’t think we know at this point.

Rory Truex: Well, thank you both for those good, academically nuanced answers. It also cheered me up a little, because I’ve been a bit doom-and-gloom about this myself. I had been concerned that the Chinese government in particular had basically invented its way out of the problem of revolution. If the level of surveillance and information control reaches that Orwellian panopticon level, will they eventually no longer have to worry about the challenge of revolution? And I appreciate what you said about these things being inherently unpredictable. There’s a lot of uncertainty, and there’s adaptability — even with surveillance, we see citizens using very low-tech tools like a face mask or makeup to subvert it. It isn’t all or nothing.

I thought it might be useful to break this conversation down into the key pieces of how authoritarian regimes think about information and control it. So on the one hand, we have censorship, sort of the blocking of sensitive information. We have the dissemination of propaganda — information being put into the system by these governments. And then we have the monitoring and surveillance piece. And so I thought we could go through each one and think about how that’s changing in China and elsewhere and how that might look moving forward in five, ten, fifteen years.

Censorship

Rory Truex: Molly, why don’t we start with you on the censorship question because you quite literally wrote the book Censored. These LLMs, these chatbots — are these inherently more difficult to censor or is it just a matter of restricting the training data? Is it easy for the Chinese government to rein these sorts of tools in?

Molly Roberts: Yeah, I think that’s a really good question and one that we see playing out right now. I think that Jen brought up a really important point earlier, which was how does AI change the way that the information environment is structured? So right now, I think one thing that we’re seeing, although I don’t know how long it’s going to last, is we’ve had a sort of re-centralization of information. The best models — there are few of them, they’re centralized in certain companies, and everyone seems to be going to those models to get answers. And when did we see that last? We saw that last with mass media, where we had a certain number of channels before cable that you could tune into, and they would give you all of your information. And there was a lot of criticism at the time that these mass media organizations had too much power, that they had too much influence, and they were easily co-opted.

I think we see that a little bit with AI where China can block ChatGPT, it can block out these other models and then it can have control over the models that are in China. And in that way control what most people are using. Of course, you can have other tools that you can get access to, especially if you’re tech savvy, but for most people, they’re going to go to DeepSeek and use DeepSeek. And in that way, that centralization makes it easier to do censorship.

On the other hand, you might argue that in this new age of LLMs you have more information, you have better information. You can figure out how to verify and validate your own information. And if information holds governments accountable, then that might kind of work against those governments. But I think right now, when it comes to censorship, the centralization of AI is playing into the ability to censor and control it for the most part. And we’ll see how that plays out — it might be we have more competition between companies in the future. But centralization, I think, allows for more control.

Rory Truex: And Jen, you’ve done some work directly on these models with my own colleague, Xu Xu, where you’ve kind of looked under the hood a little bit and gotten a sense of how they react, for example, to sensitive requests. So can you tell us a little bit about how Western AI models perform versus models coming out of the Chinese system in terms of political sensitivity?

Jennifer Pan: Yeah, definitely. And I think it echoes what Molly is saying, in that the Chinese government has passed regulation requiring all models used within China’s domestic market to censor content. And the companies creating these models have done so. We see a very clear change after the policy came out: these models now will not respond to questions that deal with topics, events, and people that are censored within China. This sort of delegated censorship model is something the Chinese government has used with digital media more broadly, and we see it with large language models and AI companies as well. The government sets the standard for what should not be produced or disseminated, and these companies have to comply. And we see regular reports from the Cyberspace Administration of China on corrections it has issued to Chinese tech companies.

Rory Truex: Do you think, Jen, that there’s a cost to these companies, in terms of model quality, from the censorship environment? Whether because they’re trained on less than the full range of human knowledge, or because they have to bake these censorship directives into the model — will that erode the performance of the model itself? Is there a cost for these companies in doing that?

Jennifer Pan: I would say that in some ways, Chinese-language digital information in simplified Chinese is already pre-censored and contains propaganda, which Molly’s research shows very clearly. So that’s already baked into these models. But we also definitely observe additional layers of censorship at the user interface level.

For example, when you’re interacting with DeepSeek, after a certain point in a conversation about some topic, the whole chat thread is deleted. It doesn’t even say, you know, “let’s talk about something else” — the whole record is gone. So that’s happening at the application layer. The censorship is being implemented at various levels.

Rory Truex: But do you think that will affect the ability of the model to perform a non-sensitive task?

Jennifer Pan: I think in many domains, these models perform really well. We see that from the evals folks have done with models coming out of China — DeepSeek and Qwen — and most of the time these models are doing tasks unrelated to politics and the things that are censored. The thing we don’t know, though, is what exactly the downstream consequences might be, because you could imagine something political popping up in a non-political domain.

For example, let’s say you’re going to China and trying to figure out which section of the Great Wall to visit, and in one section there is something related to Mao Zedong. I’ve actually had this experience trying to help my relatives visit China. I was talking to DeepSeek and it erased all my information about visiting the Mutianyu section of the Great Wall, because the conversation got to Mao Zedong and it deleted the whole chat. I was like, but I wanted to know the tickets and times! So that’s an example of something political popping up and disrupting non-political engagement with these models. We just have no idea right now how prevalent that is, both in a user setting and when lots of applications are built on these open-weight Chinese models.

Rory Truex: Molly, do you want to weigh in on this?

Molly Roberts: Yeah, I think it’s a really interesting question. I think that they perform very, very well. If you think about scientific questions or other things that many of these models are being used for, I think we’ve seen them perform extremely well. I’ve also had the experience where using DeepSeek to analyze legal documents from China — it won’t answer, gets it wrong for certain things.

I think what’s interesting about that is something that Eddie Yang, who’s at Purdue, has written about, which is: say AI is being used for something like surveillance or censorship, and there’s censorship baked into that AI. For example, say you’re going to use AI to decide whether a blog post is sensitive or not, but you don’t have enough training data about what is sensitive — either because people are self-censoring or because you’ve censored the model — then you’re not going to be as good at doing censorship, because of censorship. It undermines the model. And what Eddie argues is that in crisis situations this can be really bad: people are self-censoring, they’re not saying the sensitive things, so there’s no training data to train on. Then in a crisis, that sensitive content suddenly appears, and the model can’t censor it and is undermined. So I think that’s a pretty interesting point — if the government is using its own AI, but that AI doesn’t do politics very well, it might undermine the system as well.

Rory Truex: That’s quite fascinating. It’s almost like there’s a feedback loop that creates a blind spot — and ditto maybe even for surveillance. If you’re trying to train facial recognition to catch people who might be engaging in subversive activity, there aren’t enough dissidents in China at this point to reliably train on. It’s odd that because the system is so good at what it does already, that might inhibit its ability to respond. That’s fascinating.
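As a toy sketch of that blind-spot dynamic (my own construction, not Eddie Yang’s actual setup): a censorship classifier trained on pre-censored data never sees the most sensitive vocabulary, so it can miss exactly that content when it surfaces in a crisis.

```python
# Toy illustration of a censorship classifier trained on pre-censored data.
# The truly sensitive topic never appears in training (it was self-censored
# away), so the model learns only milder signals.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "great weather at the park today",
    "new phone launch looks exciting",
    "traffic was terrible this morning",
    "the local official gave a boring speech",
    "mild complaint about food prices",    # labeled sensitive
    "grumbling about slow train service",  # labeled sensitive
]
train_labels = [0, 0, 0, 0, 1, 1]

clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(train_posts, train_labels)

# Crisis-time posts use vocabulary the model has never seen labeled
# sensitive, so it plausibly waves them through.
crisis_posts = [
    "everyone gather at the square for the protest tonight",
    "join the demonstration against the lockdown",
]
print(clf.predict(crisis_posts))  # likely [0, 0]: the blind spot
```

The specific words and labels here are invented; the point is only the mechanism, that censorship of the training data degrades the censorship tool precisely on the content it most needs to catch.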

Propaganda & Flooding

Rory Truex: Molly, I want to talk about one of the key ideas in your original work and in your book, which is this idea of flooding — that the Chinese government will sometimes disseminate large quantities of information to flood the zone, to flood the system with distracting content, especially in times of crisis. As Jen already said, the cost of producing information is basically down to zero. It’s increasingly easy to produce fake information, deepfakes, things that look real but aren’t. So how do you see this flooding strategy changing, and, projecting forward, how might it evolve?

Molly Roberts: Yeah, I think that’s an important question. We’ve certainly seen quite a bit of evidence that not just China but other governments are trying to use AI for information operations, and I expect that to continue. In some sense, there are people all over the world using AI to generate information — it’s now cheaper and quicker to do. And that goes back to Jen’s point about the information environment itself: AI is being centralized, but the environment is producing more and more information because it’s easier and easier to produce. We also have a lot of evidence that AI is very persuasive — that AI writes as persuasively as humans, maybe even more so.

And I think that is quite interesting, and a little bit worrisome, when you think about information operations and these types of models being used. I wonder if people, as consumers of information, will be able to update and filter out that persuasion effect as they get more used to AI in the information environment. But I do think it opens up lots of possibilities for governments, companies, and other types of institutions to generate lots of information and make good information harder to find. And maybe that will make people more likely to rely on a model itself, an LLM, to give them information, rather than social media or something else.

Jennifer Pan: On this point of persuasion — I’m actually not as worried about it as I think a lot of other people have been, because I’m not sure those experimental persuasion effects will pan out in the wild, where people are getting so many sources of information. During an election season, some of these experiments show that these models can persuade voters to vote in a particular way. But those are experimental settings. In an actual election context, lots of political candidates and parties are going to be using these tools to compete for our attention. And in that context, I’m not sure we’ll see real-world effects on turnout, vote choice, et cetera. So I think ultimately the constraint is still people’s attention.

But I do think a point that Molly talked about earlier — the potential re-centralization — is super important. Because if, let’s say, in the future, people use a small number of models to produce information, and we turn to those same models to mediate our knowledge of the world, then any actor that controls those models has a lot of sway. So that potential centralization dynamic I think is important to pay attention to. But in a world where everyone is using models and maybe a diverse set of models to compete for persuasion, I’m not sure if the dynamics will fundamentally change.

Rory Truex: What about customization? And I’m going to sound like I’ve been watching too many sci-fi movies, but this idea that a government could learn a lot about me and could figure out what propaganda I need at the right time to take my temperature down and dissuade me from engaging in collective action protests. It’s like the algorithm — we already have our algorithms, which are customized to us, but you can imagine those will be supercharged and the ability to demobilize people using these tools could be augmented.

Jennifer Pan: I think it’s a possibility, and I think it’s coming. What you’re describing is targeting at specific moments to prevent action — let’s say people’s attention is on something: will demobilization be easier? I’m not sure I have a good answer to that. Although I wonder if it just changes the tactics of demobilization to be even less visible than they are now. I think they’re already increasingly invisible, because we already have technologies that target repression much more precisely than in the past.

Molly Roberts: Yeah, I think it’s possible. The work we’ve seen on this targeting so far hasn’t shown huge effects. But is that because we don’t have a big enough sample size? Maybe targeting doesn’t have an effect on average, but for a few people who are really important, it could. So it could matter, but I haven’t seen data suggesting it’s really going to change a broad swath of the population’s ideas.

I also wonder — say a government or another kind of bureaucracy gets very good at targeting a particular message in a news article or social media post. Are people going to be reading social media anymore, or are they going to be going to AI to get their answers? Then it would be the AI doing the targeting, which — going back to Jen’s point — would again need to be co-opted. I think it’s going to be very interesting to see how people’s information consumption changes as we move forward, and whether it shifts toward social media, other types of media, or just straight to AI.

Rory Truex: Do you think there’s going to be certain tools in the toolkit that just become obsolete? You both have written about the so-called 50 cent army, which are these posters — often government officials — who are posting on Chinese social media to distract or disengage. It’s very human centered. Is that stuff even still happening or is it just all AI bots at this point? And if 40% of social media is just bots and you don’t even know if engagement is real anymore, will people just get off these platforms completely?

Molly Roberts: I think they will matter in the sense that they will feed the training data that will produce the next model. Even sometimes writing academic papers now, I feel like I’m feeding a model rather than humans.

I think it may matter because these models will suck up that production. We’ve seen this happening already, strategically, by certain governments — Russia putting a whole bunch of websites online to get them picked up in Common Crawl or whatever. Even just government websites are such a huge portion of the internet. I think we will see that really mattering. And maybe it’s not manual anymore — I agree, I think we move away from it being manual. That’s already happened to a large extent.

Jennifer Pan: On the public side — Molly mentioned before that there’s a question of whether people will discount what they consume. I think people have already been discounting, even prior to AI. And increasingly, after you see something on social media, lots of comments are like, ‘Is this AI? Definitely AI.’

There’s an interesting paper by two economists in Econometrica — I think it’s called something like ‘Competitive Capture of Media.’ It’s about traditional media and political interests increasingly funding media platforms. It’s a theoretical model, and the prediction is that even a fully rational person would simply learn less, because we’re now unsure what we can learn and what we can’t. That’s a very different prediction from the idea that we’re going to completely trust these models, form parasocial relationships with them, and do whatever they tell us. On one hand, total trust; on the other, we’re not going to believe anything anymore and can’t learn anything new. I don’t think it’s going to be either extreme, but right now reasonable people can reason their way to either of those outcomes. It just shows the large degree of ambiguity over what exactly the downstream consequences of these models will be.

Rory Truex: And Jen, to your point, I think people are often more savvy than we initially give them credit for. I did a paper that never got published — as is the case sometimes — about the People’s Daily and how people in China think about it. We often assume it’s just a propaganda outlet. My sense is that Chinese citizens understand it’s a pro-government propaganda outlet, but they can back out the bias: they understand the slant and can glean information from it regardless.

Surveillance

Rory Truex: I wanted to move to the third bucket, which is surveillance. Of all the things that make AI a little bit scary, I think it’s this idea of mass surveillance. Andy Hall has this new paper called the Dictatorship Eval, where he asks all these models to try to do authoritarian things, and he finds that eventually you can coax them into doing this stuff. But when we look at the Chinese system, this is the huge change — we saw some of these massive facial recognition systems unveiled in Xinjiang, and now they’re making their way into most urban areas. There are 400 million closed-circuit television cameras. The feeling that you’re being watched at all times, or might be being watched at all times, has become common in China. So on that piece of the puzzle — mass surveillance, the AI-enabled surveillance state — is that the big one?

Molly Roberts: I think it is something we have to worry about. Before AI, governments were limited by their inability to recruit, train, and monitor the humans needed to do the work of surveillance. Think about humans reporting on their friends or other people in very totalitarian environments. And then on social media — certainly when Jen and I started studying censorship, this was mostly a human task, with lots and lots of different people reading posts and deciding whether they were objectionable. So I do think it’s something to be very concerned about.

I think we’ve also seen in China a shift toward more preemptive repression, which is very important to think about — collecting data, especially on people with a history of dissent, following a lot of their moves, and then trying to predict and prevent action when the government thinks they’re going to act. That’s a really important distinction, and a little bit scary. The government in China frames these actions as crimes, but if you think about this in a broader setting where you’re trying to predict crime, it’s something you could see governments around the world — not just authoritarian governments — doing. And I think that’s worrisome as well.

People update, though. Dissidents, especially people who want to take action, know to some extent how the government is using technology, and they figure out ways to get around it. Because of that, the incentive to surveil gets stronger — once you surveil a little bit, it gets evaded, then you need to surveil more, then that gets evaded, and so on. And at some point, people don’t like being surveilled either, and there’s backlash.

So even though I agree for the most part that AI is very scary for surveillance, we have to think about how the population fights back, and also about when the population supports it — because we do see a lot of support for facial recognition, for example, because of crime in China. How do we weaken the surveillance capability of AI? I think that’s a hard question. I really liked Andy’s Substack — a first step at figuring out how to audit AI for its ability to answer authoritarian types of questions. I worry, though, that a lot of those same questions could also be asked in a policing situation for real crime. Can you develop a system to monitor crime that is then easy to extend to other types of activity? And it seems to me that some of the more traditional institutions that preserve democracy — elections, the rule of law, separation of powers — have no substitutes in AI.

Jennifer Pan: On the surveillance point, I just want to say that China had mass surveillance pre-generative AI. China already had pervasive cameras before large language models. And also, China was already doing preventative repression, including using analog methods to prevent people from engaging in what it would call criminal activity. So it’s not like everything is suddenly changing with generative AI. But then the question is, does generative AI make all those efforts more efficient somehow?

First: does it make surveillance more efficient for countries that were already able to do it? Second: does it allow more authoritarian regimes, or other political actors, to do the same thing? On the first point, maybe it does make it more efficient. But prior to generative AI — in a less efficient world where governments were already doing mass surveillance and preventative repression — at least in the China context, I’ve argued in my book that the incentives of government officials are such that they’re willing to tolerate a lot of false positives in order to avoid any false negatives. That means they’re going to do mass internment, and the cost of that is tolerated. Now, if you have more efficient generative AI that’s better at surveillance, does that mean fewer false positives? And what does that mean?

On the other side, does this allow more countries to do mass surveillance? One thing Molly said before is that we now see more actors than just well-resourced governments doing information operations — LLMs are enabling information operations by more actors, where previously only the very well-resourced governments could do it. Surveillance might be similar: it might enable more actors — more political actors, more governments — to do mass surveillance at lower cost. But I think the fundamental question is ultimately about who controls the models and who gets access to them. Let’s say a bunch of authoritarian regimes start using Chinese models to do mass surveillance. If at any point the Chinese government decides it wants to restrict this technology to domestic use, all of those other governments lose access. Similarly, if they try to use US models, the companies running those models could cut off that access.

And just a random point — last year when I was in China, everyone was wearing these full-on sun masks. I’m pretty sure for most people it’s vanity, but you see 90% of women walking around with only their eyes and mouth revealed. But those also prevent surveillance because you have no facial features that are revealed. I don’t think most people are doing that for purposes of avoiding surveillance, but it definitely has that consequence as well.

Rory Truex: Even your example about going to the Great Wall — that minor inconvenience of having your chat deleted and losing all the ticket information — people don’t like being surveilled. And even with the fall of zero-COVID, we saw that explosion of anger from the Chinese population, in part because of this false-positive issue you identified, Jen. In the Chinese system, say one out of every hundred people is an actual dissident. The model will have some error rate, and it will misidentify another five or ten out of that hundred as dissidents and repress them. So the surveillance inherently creates grievance.

There’s a great paper on this, which I just saw today, called ‘The Limits of Authoritarian AI.’ It’s in the Journal of Democracy, by Jason Anastasopoulos and Jie Lian. It’s about exactly this issue: the panopticon inherently creates grievance, and there is no right threshold. If the system flags too leniently, you miss the dissidents; if it flags too aggressively, you sweep up the wrong people. And as you said, the Chinese system tolerates those false positives, but that can also create the explosiveness we’ve seen in recent Chinese history.
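To make that base-rate arithmetic concrete, here is a minimal sketch in Python. All numbers are hypothetical, chosen only to mirror the one-in-a-hundred example above:

```python
# A minimal sketch of the base-rate problem in dissident detection.
# All numbers are hypothetical, chosen to mirror the example above.

population = 1_000_000
base_rate = 0.01            # 1 in 100 people is an actual dissident
sensitivity = 0.95          # the system flags 95% of true dissidents
false_positive_rate = 0.05  # and wrongly flags 5% of everyone else

dissidents = population * base_rate
non_dissidents = population - dissidents

true_positives = dissidents * sensitivity
false_positives = non_dissidents * false_positive_rate

# Precision: of everyone flagged, what share is actually a dissident?
precision = true_positives / (true_positives + false_positives)

print(f"flagged in total: {true_positives + false_positives:,.0f}")
print(f"wrongly flagged:  {false_positives:,.0f}")
print(f"precision:        {precision:.1%}")
# With these numbers: 59,000 flagged, 49,500 of them innocent,
# precision about 16%. Most people the system represses are not
# dissidents, which is exactly the grievance machine described above.
```

Raising the bar for flagging reduces those false positives but starts missing real dissidents, which is the no-right-threshold point the paper makes.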

Molly Roberts: Yeah, I think that’s a really important point — in some sense, if it gets more accurate, it may create less of that backlash, which could strengthen the surveillance system. I do think, though, that in some cases there is quite a bit of support for surveillance when it comes to crime, and then that support is simply extended to these other types of political crimes. Or, as Jen’s work has shown, political crimes are recast as ordinary crimes that people object to. When we think about mass surveillance — where is the support for it? And how do we figure out how to — I don’t want to say how to generate opposition to it, but in some sense, how to generate opposition to it. Because we don’t like it.

Jennifer Pan: Yeah, I think it’s really hard, because it really depends on the impact on people’s daily lives and what proportion of the population it affects. If for most people the experience is that mass surveillance has decreased street crime and they never encounter anything on the political side, then there’s not going to be opposition. I think what happened in 2022, with the zero-COVID lockdowns and the White Paper protest movement, was that for a large proportion of people the lockdowns were degrading their day-to-day lives — and it was no longer a trivial share of the population. That’s why we were able to see that backlash. But if only a small proportion of the population experiences the negative consequences of the false positives, or if the positive experiences created by the technology outweigh the negative, then I’m not sure we’ll see that sort of backlash.

Citizen Resistance & Reasons for Optimism

Rory Truex: Okay, we’re coming up on time, and this podcast is generally a bit of a downer, but we try to give the episodes an arc and end on at least something resembling optimism. So I wanted to talk about resistance, and how some of these tools might be used by the population to push back and assert their rights and voice. Do you have any instincts here? It seems it’s all about what model you have and who controls it. If Chinese citizens were able to get access to models not controlled by the Chinese government — what do you see as the primary things and tools that might benefit citizens in these contexts?

Jennifer Pan: So one thing — maybe this trend will change — is that a big distinction between models coming out of China and models coming out of the US is that the Chinese models are open weight. Once the weights are out there, people can do with the models what they like. They could do things like fine-tuning or ablation that offset some of the political constraints of these models. And having those open-weight models gives people more power to use generative AI for their own purposes.

By contrast, let’s say we were in a world where all the models in China were closed, and only the platforms had control or visibility over what’s happening, how the models are developed, and how they can be altered — then it’s very clearly the centralization story. If those companies comply with the government, it’s very hard to imagine a world in which people use these tools autonomously. But the good news is that’s not what we see in China. China is following the path of open-weight models, which I think is a really important point.

The other point — maybe not so much a positive point as a not-so-pessimistic one — is that, especially around authoritarian politics and generative AI, these governments were already doing a lot of these things pre-generative AI. It’s not clear that the new technology automatically gives those in power more at their disposal.
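As a minimal sketch of the open-weight point (the model ID and library here are my own illustrative assumptions, not something from the interview), this is roughly what running an open-weight model locally looks like:

```python
# A minimal sketch of why open weights matter: the model runs locally,
# outside the provider's application layer, where behaviors like the
# chat-deletion Jen describes are enforced. Assumes the Hugging Face
# `transformers` library; the model ID is illustrative, not prescriptive.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-llm-7b-chat"  # illustrative open-weight model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Which section of the Great Wall should I visit, and why?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# No server can retroactively delete this conversation, and any refusal
# behavior baked into the weights themselves can, in principle, be
# adjusted downstream by fine-tuning, since the weights are in hand.
```

The same property cuts both ways, of course: it is also what lets other actors build their own tools on these models, which is the double edge both guests flag.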

Molly Roberts: Adding on to that — we think a lot about AI as replacing certain types of labor. So if we think about civil society in China, which has been largely squashed over the past 15 or 20 years — could people use AI to replace some of what you would have needed an organization to do in the past? For example, sifting through government documents to find evidence that could be used in court in an administrative case against a local government, or to stand up for people, or to find examples of corruption and other things that could hold the government accountable.

So if AI can replace the labor of civil society — not requiring people to get in a room and sit down with each other, because you could have a few agents doing that for you — could the government be held more accountable for certain actions? Could ordinary people who couldn’t sift through millions of documents before now do that, and find ways to regain some power?

Rory Truex: Well, you’ve both cheered me up. I came into this conversation pretty down and I think I’ve come away with a good nuanced view — optimistic about certain things, a little more pessimistic on certain things, but overall feeling a little bit better about the uncertainty of the whole thing and how this adaptation on both sides will unfold in the coming years. So thank you to Jennifer Pan and Molly Roberts.

You’re the people in this field I admire the most. It’s an honor to work with you, to know you, and to interview you today. If you’d like to follow Molly and Jen, you’re best off going to their academic websites and reading their research. They’re both prolific researchers at the frontier of social science and its intersections with communication and technology. Thank you, Molly and Jen, for joining me today.
