PKU Financial Review: You have expressed reservations about the application of artificial intelligence (AI) and machine learning in risk management and financial regulation. Do you believe AI has the capability to overcome the challenges of "human nature" and "uncertainty" that you often highlight? Or, could it introduce new illusions that we do not yet fully understand?

Jon Danielsson: This is the fundamental question in AI research. The AI systems of today are simply matching patterns and do not seem to have understanding in the way human beings do. In other words, they lack human nature and understanding, which of course implies they have real difficulty with uncertainty in the classical sense. If, as some researchers predict, AI surpasses humanity, then of course it will be able to deal with all of these issues. But that raises yet another set of questions that are very uncomfortable for us humans.
PKU Financial Review: You have been critical of traditional models due to their pro-cyclical nature. Would AI models, which learn from historical data, exacerbate this pro-cyclicality? For instance, during market booms, might algorithms imitate each other and collectively drive up asset prices? Conversely, during market downturns, could they synchronize sell-offs, thereby accelerating crashes and acting as a powerful amplifier of endogenous risk?
Jon Danielsson: I would agree with your assertion. Not only is AI very good at learning, reacting, and making decisions; we also have a problem that we economists call strategic complementarities, which AI significantly amplifies. It means that AI engines will collectively come to a decision either to buy or to sell, which amplifies booms and busts. This synchronisation is indeed a powerful amplifier of endogenous risk.
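[Editor's note: the synchronisation mechanism described above can be illustrated with a toy simulation. This is a minimal sketch, not from the interview; the `simulate` function, its parameters, and the linear price-impact rule are all illustrative assumptions. A fraction of traders all copy one momentum rule, standing in for AI engines trained on the same historical data, while the rest trade independently.]

```python
import random

def simulate(n_agents, shared_fraction, n_steps, impact=0.001, seed=0):
    """Toy price path driven by net order flow.

    'Shared' agents all copy one momentum rule (buy after a rise,
    sell after a fall), a stand-in for AI engines trained on the
    same data. The remaining agents trade at random.
    """
    rng = random.Random(seed)
    n_shared = int(n_agents * shared_fraction)
    price, prev_ret = 100.0, 0.0
    path = [price]
    for _ in range(n_steps):
        # Every shared agent makes the identical decision.
        shared_flow = n_shared * (1 if prev_ret >= 0 else -1)
        indep_flow = sum(rng.choice((-1, 1)) for _ in range(n_agents - n_shared))
        prev_ret = impact * (shared_flow + indep_flow)  # price impact of net flow
        price *= 1.0 + prev_ret
        path.append(price)
    return path

def boom_bust_ratio(path):
    """Peak-to-trough ratio: a crude measure of how extreme the path gets."""
    return max(path) / min(path)

# Same market size, different degrees of synchronisation.
diverse = simulate(100, shared_fraction=0.0, n_steps=250)
herded = simulate(100, shared_fraction=0.8, n_steps=250)
```

In this stylised setting the herded market swings far more violently between its peak and trough than the diverse one: once the shared agents move together, their own price impact validates the signal they follow, which is the endogenous feedback loop Danielsson describes.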
PKU Financial Review: The output of AI is heavily dependent on its training data. If historical data inherently contains the errors, biases, and bubbles of past crises (such as the 2008 financial crisis), is AI doomed to learn and repeat these mistakes, potentially even more efficiently? How can we prevent the "garbage in, garbage out" dilemma from recurring in the age of AI?
Jon Danielsson: Currently there is no way to prevent this. The way to address it is to have a clear understanding of which types of decisions can be made solely on the basis of past data and which decisions require a deeper understanding of the problem, understanding that AI does not have. That gives us a metric for identifying where we can be comfortable deploying AI and where we should be reticent about its use.

PKU Financial Review: Can AI comprehend "black swan" events? You have stated that "risk is unobservable." While AI may excel at handling "known unknowns," can it cope with "unknown unknowns" (i.e., black swan events)? Do you think AI has the capacity to simulate or anticipate a completely unprecedented financial crisis?
Jon Danielsson: Current AI cannot comprehend black swan events. By definition, such things happen in areas where we have no data and no understanding. AI cannot magically get data about something for which no data exists. Therefore AI cannot cope with unknown unknowns, and we cannot use AI to simulate an unprecedented financial crisis. If you ask it to do so, it will simply hallucinate: it will project from current data and will not be able to imagine what happens in a new crisis. It is axiomatic that crises happen in areas we are not looking at and for which we have no data. Current AI has no ability to do better than humans here.

PKU Financial Review: In an AI-driven financial world, how will the role of humans evolve? Will ultimate decision-making authority be entirely handed over to algorithms, or should humans serve as the ultimate supervisors and ethical gatekeepers of the system? What would your ideal human-machine collaboration look like?
Jon Danielsson: I do not want to project too far into the future, but what we are now seeing is the human-on-the-loop model, where AI is in charge of important processes in financial institutions and a human being is simply the boss of that AI. The challenge for people intending to work in the financial sector is that not only will we need far fewer people in the sector, but those who succeed will have to be masters of both the underlying business problem and of managing AI. We will need few of these people, they will have to have exceptional talents, and they will likely be compensated equally exceptionally.