PKU Financial Review: Today's interview concerns AI innovation and governance. The first question: we know AI technologies like machine learning and blockchain are really changing finance. In your view, are these innovations going to spread power out, or will they end up concentrating it further? And what is their real potential for tackling global inequalities, such as helping people who lack good access to banks?

Hélène Landemore: So first of all, I want to preface by saying that I'm actually surprised that your finance journal is reaching out to me. That's really not my area of expertise. I would love to hear what about the work you felt was relevant to your world. I can see one connection, which is that I theorize open democracy, a sort of governance structure for the range of things we consider political and collective choices, one that is more inclusive, participatory and deliberative.
It sounds like it's harder to reform a system where there are already legacy institutions in place (I feel like those are the kind of things you're interested in), which is global finance, and what's happening there is the Wild West of the world.
It's not quite a state of nature, but basically there are a lot of unregulated spaces where maybe something like the principles of an open democracy could help. I can see the promise of what you call decentralized finance and the blockchain, etc., for empowering individuals above and beyond traditional banks and payment platforms that have accrued near-monopoly positions. And things are structured in a way in which there is very little freedom for individuals. All of a sudden, with these technologies, we were thrown back into a space where everything is possible again, where we could imagine peer-to-peer systems in which citizens transact, save and even govern common funds without intermediaries. It's very exciting to think about a global governance structure for those things.
I think indeed that something like open democracy might provide something (i.e. global governance), because what we have right now is essentially a market, right? And as with every market, you need a system to regulate it. But I'm not sure that in decentralized finance we really have that yet. These technologies also make people very free, but extremely vulnerable to manipulation, cheating, scams, etc. So you need to provide some kind of minimum insurance. Otherwise, it's just going to get too risky and only a few people are going to use them.
My view is that you cannot leave AI and the use of AI in finance to the market, to the corporations who produce these tools, or to the central banks, corporate boards or other existing legacy institutions. Maybe what we need are governance structures layered on top of these existing institutions at the global level. If those embedded citizen participation, transparency and accountability, that would be amazing. And my go-to solution is global citizens’ assemblies: hundreds of ordinary people who come together to think about how we regulate or structure these innovations in a way that's beneficial for everyone on the planet. I think that with the help of experts and of the users of these technologies, they could come to very sensible common solutions.
PKU Financial Review: Thank you. We also wonder: what do you think the risks are if AI technology advances faster than our ability to make sure it's ethical?
Hélène Landemore: Well, it is indeed a major worry at the moment. I don't know how much of it is hype; it's hard to tell through the fog of war that we are in, but I do think there's a risk. I was talking to Yoshua Bengio a few months ago and he was really alarmist. He thinks we have basically a window of two to ten years to figure out guardrails for AI development, because there are all kinds of risks that are very likely to materialize within that window, and certainly beyond it.
I wish we had learned a lesson from globalization, because globalization was a foretaste of what we're facing now: we launched this giant algorithm, the corporation, programmed for maximum profit and maximum extraction. It took us 50 years to realize that it did a lot of damage to our working class and middle class, in the West at least. It has damaged the environment and accelerated climate change.
Surely we could have thought of ways to layer, on top of our international system, a global governance structure that would supervise this and foster coordination, collaboration and cooperation among countries like China and the US, and other geographic entities like the EU, to make globalization slower, more inclusive and more equitable. But we didn't do that. In fact, the solution now is that people are retreating into their national spheres of sovereignty, and we're facing deglobalization, which is not being chosen in a collective manner. So there will be losers and winners. None of this is fully thought through, and I worry that we're doing exactly the same with the deployment of AI, except that we're doing it with technology, with algorithms that are much faster and that are going to be potentially even more disruptive (if not destructive) of our existing equilibrium.
I am worried, but I'm also an optimist. The one thing that gives me a little bit of hope is that AI is the solution as much as the problem, in a way, because we're never going to have global governance without the aggregating, processing and structuring of information that AI permits. We're not going to be able to deliberate at scale, communicate at scale, or make decisions at scale in a way that is fully accountable and transparent unless we have those technologies. What I hope is that right now we're in the acceleration phase of the technology, and soon enough the public will wake up and demand a layer of governance on top of it. This has been brought up by different people. It was brought up by Sam Altman a couple of years ago when he talked about having a global deliberative process on AI governance or regulation. They launched a program, which was very small and never went anywhere. But I thought this was the right idea. Recently I was part of a movement, a sort of coalition of academics, activists and governments, to organize this global deliberative process on AI governance. It hasn't taken off because the funding is not coming through. I don't think there's enough interest right now, but the idea is out there. It's ready to be mobilized if enough people with goodwill and money were willing to get together behind this project.
I think everyone sees we need some kind of global solution. By global solution I don't necessarily mean a global government, which sounds potentially dangerous, but some form of global governance. It could be a coalition of the willing for the time being. I think that's what Yoshua Bengio is looking to as well: something that would bring together countries, actually perhaps neither China nor the United States, because both of them think they're going to win the arms race. I don't know if they have incentives to collaborate with others to slow down the race. But the EU, which is definitely not leading this race, might have an incentive to coordinate with other countries to offer a model of AI governance that is more responsible and more ethical. And perhaps that would eventually inspire the leading countries to join, because they'll see that it's actually better for all of us: it's a positive-sum game if we cooperate instead of competing with each other.
PKU Financial Review: Yes, thank you very much. You mentioned the EU’s AI Act, and you also mentioned that the rules for AI are all over the map. Although finance is global by nature, AI rules differ widely: the U.S. takes a hands-off approach while China follows a state-led model. Do you think there is some approach that could get countries to cooperate on this, and could democratic ideas, like creating global discussion forums, for example, help find common ground?
Hélène Landemore: Yes, I absolutely agree. Dan Wang is making waves here in the US with his book Breakneck, which compares the regime form of the US to the rule of lawyers and the regime form of China to the rule of engineers. I think it's not good when rules are decided by just one type of person; whether engineers or lawyers, there are blind spots in each case. There are biases, and I think that what is really missing in every country is the voice of ordinary people in all their diversity. Democratic innovations, like citizens’ assemblies in particular, can play a critical role. Imagine a transnational deliberative body composed of randomly selected citizens from around the world. They would not necessarily be chosen based on nationality; they could be chosen based on geographical coordinates weighted by population density, as was done in the Global Assembly on climate that was developed in 2021 as a prototype. And that group, let’s say 1,000 people, would include all kinds of diverse cultural, economic and political backgrounds. So you would have engineers and lawyers in there by default if the sample is large enough, but you would also have a lot of people like fishermen and seamstresses and nurses and Uber drivers, people whose lives are affected by the concentration of economic power in the hands of just a few tech oligarchs in Silicon Valley or somewhere in Shanghai. Why don't they have a say about where we are going with this? Why don't they have a say in designing the guardrails for a beneficial AI, for an AI that's oriented towards the common good, towards the well-being and flourishing of human populations, as opposed to just the enrichment of a few CEOs here and there? (Again, these assemblies would have to be properly resourced and informed by experts.)
I do believe in human common sense across nationality, geography, culture, religion, race, across everything. So I think this is something that's worth trying, especially as I don't see what our excuse is: the cost of trying something like that is truly peanuts compared to the money currently being poured into data centers and the hiring of top engineers, etc., just to produce things that we don't even know for sure we need, and that are making our environment even more fragile and our electric bills higher.
We need a moral compass. And this moral compass, I fear, is not going to come from the lawyers or the engineers or the tech CEOs. It's going to come from the bottom, from ordinary citizens who live ordinary lives but have the user experience of the things we've tried before, like globalization, and now this AI arms race. And I think that's what we need to surface, at the global level as well as the national level.
It's not like we have nothing to go by. We have evidence from multiple citizens’ assemblies around the world, from the over 800 deliberative processes that have been conducted around the world. We know how to run these things. We know they can be scaled. I don't know why we're not doing it. What's missing is not the technology; it's not even the money, which I think could be found. It's truly the political will. And so even more than AI, I'm worried about the lack of political will. Do we have political structures right now that are concerned with creating systems that are sustainable and trusted by the people? Or are we run by plutocracies that are self-perpetuating and are now taking us down with them? That's the worry I have.
PKU Financial Review: We really enjoyed reading your article, “Fostering More Inclusive Democracy with AI.” It makes us wonder: could democracy itself become a casualty of the AI revolution? With decentralized finance platforms using AI to automate things like lending and trading without any traditional banks involved, how do you think governance should adapt to these changes?
Hélène Landemore: Yes, absolutely. As I said, during the globalization phase of human history we became more connected through global markets, and we empowered giant algorithms; corporations are giant algorithms. What did these algorithms do? They figured: if we want to keep maximizing profit and returning dividends to our shareholders, what we should do is capture local and state governance, take their sovereignty away, and make sure they do not regulate us. And that's exactly what they've done.
I do worry that if we don't hurry up, AI, especially something that looks like a form of general intelligence, could have its own goals, and its own goals won't be to foster democracy. It will capture governments to spend more money on data farms, use up all the electricity they can, and perpetuate its own ends. That's the nightmare scenario. In that case, governments would just become enslaved to these algorithms and perpetuate their interests rather than the common good. Democracy could be a casualty, and down the line, human beings could be too.
I hope we have enough people of goodwill and with some clairvoyance who can anticipate that, so that we do something in time. But we've done it before: we've already thrown democracy under the bus of globalization to some degree. I wouldn't be surprised if we did the same with AI.