Dialogue with a Nobel Laureate: Student Tea Talk series at PHBS | The Fifth Session
2025-12-02 17:44:43

The fifth session of the “Dialogue with a Nobel Laureate” series was held at Peking University HSBC Business School (PHBS) on November 18, featuring Professor Thomas J. Sargent, 2011 Nobel laureate in economics and honorary director of the Sargent Institute of Quantitative Economics and Finance at PHBS.

The tea session

Professor Sargent was invited once again to engage in an informal roundtable discussion with full-time master's students. They were joined by Professor Wang Pengfei, Boya Distinguished Professor at Peking University and dean of PHBS. The session was moderated by Associate Professor Shi Jiao, deputy director of the Sargent Institute.

During the lively exchange, students raised questions on a wide range of topics, including Federal Reserve independence, conducting academic research, credibility in developing economies, rational expectations theory in the age of AI, career preparation, and AI competition between China and the U.S.

Responding with a blend of humor and academic rigor, Professor Sargent addressed each question thoughtfully—tracing connections from institutional design to technological change and from theoretical exploration to personal development. He emphasized that the core of economics lies in understanding how incentives, constraints, and expectations shape behavior and institutions.

He also encouraged students to ground their research in fundamental principles, maintain independent thinking and judgment, and view AI as a complement to—rather than a replacement for—their own skills.

Professor Sargent in discussion with students

Q: How much do you think a lessening of the Federal Reserve's independence might impact the long-run growth of the U.S.? What would be the implications if the Fed lost its ability to make monetary policy without influence from the government?

Sargent: We can look at history. The United States did not have a central bank for its first 125+ years. Congress and the President directly managed monetary policy, but they disliked it because it often involved making unpopular decisions necessary for stability and low inflation. They created the Federal Reserve precisely to delegate this authority, creating an institution that could be blamed for tough choices. This delegation acts as a constraint on Congress itself. For example, if a politician wants lower interest rates but the Fed refuses, the government might then have to raise taxes or make other difficult fiscal adjustments.

If a President gains direct control over the Fed and forces it to pursue politically expedient but economically unsound policies (like excessive money printing), the result would likely be higher inflation. Crucially, the President would then have no one else to blame for the negative consequences. This undermines the very purpose of the institution. The key is to think about the system of incentives and constraints. The Fed’s independence was an institutional solution to a time-inconsistency problem, and weakening it risks returning to a less predictable and less stable policy environment.

Q: As a student, I just want to know how to start with a research problem. How do I find a problem to study? Also, how will AI tools influence our methods of research?

Sargent: This is a fundamental question we all face constantly: how to allocate our scarce time and intellectual energy. The dilemma of whether to focus on a “hot topic” like AI or on foundational principles is a real one. My approach is personal. I am drawn to fundamental, timeless principles. The books I keep returning to on my shelf are often decades old, dealing with small datasets but illustrating universal principles brilliantly. These principles—like incentives, constraints, and expectations—are applicable to any country and any time period.

Regarding AI, you must ask yourself a critical meta-question: Did you think of your question yourself, or did you rely on AI to suggest it? There’s no moral judgment here, but it’s crucial for self-awareness. The danger is that AI can become a substitute for your own thinking rather than a complement. I’ve seen this in teaching; students who rely entirely on external tools without engaging with the foundational material end up learning nothing. AI, at its core, is about curve-fitting and statistical learning—topics that are part of the very foundations we teach. To benefit, you must make AI a complement to your skills, not a replacement. It should augment your ability to think, not take control of the thinking process itself. The goal is to use the tool to deepen your understanding of fundamental principles, not just to generate outputs.

Q: How should governments in developing economies think about building credibility over time, where households and firms are learning and may not trust official models and forecasts? What do central banks and finance ministries often get wrong about how real households and firms form expectations?

Sargent: This gets to the heart of expectations, credibility, and reputation. Let’s define our terms carefully. A “reputation” is not something you possess internally; it resides in the minds of others. It is their belief or expectation about what you will do in various situations. Your reputation is shaped by your historical behavior—people process this “data” to form a view of you.

Consider a simple example: I promise my wife I will be home at 6 PM to help with dinner. At 5:30 PM, a friend invites me to a bar. I have a choice: keep my promise or break it for immediate enjoyment. If I break it, my wife will update her beliefs about my trustworthiness. Now, the key is that I know she is doing this data processing. So, my decision is made in anticipation of how it will affect my reputation with her. This creates a dynamic interaction.

This is the pure theory behind credibility. It’s a multi-period game with an equilibrium. Governments and central banks are in precisely this game with the public. To build credibility, they must not only make promises but also follow through, knowing that the public is constantly watching and updating its beliefs. The consequence of breaking promises is that the public will adjust its expectations and behavior accordingly, making future policy less effective. What officials often get wrong is underestimating how intelligently the public learns from past actions. It’s not about complex models; it’s about consistent, predictable behavior that aligns with stated goals.

Q: As people gradually get used to using AI for decision-making—while your rational expectations theory originally assumes human rational judgment—how should the theory be adjusted, and what kind of stable state could eventually form?

Sargent: This question is deeply connected to the previous ones about AI. When you ask me these questions and I respond, I am essentially performing a version of human intelligence. I’m listening, grouping similar questions, and formulating a response—this is what AI also does, just inside a machine. The foundation of AI is statistics: it’s about choosing a class of functions (e.g., straight lines, polynomials) and a method to fit them to data (e.g., least squares).
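Sargent's description of AI as statistics—choose a class of functions, then choose a method to fit them to data—can be made concrete with a small sketch. The cubic data-generating process and the degree-3 polynomial class below are invented for illustration; nothing here is drawn from the talk itself.

```python
# A minimal sketch of "choose a function class, then fit it by least
# squares." The cubic trend, noise level, and polynomial degree are
# all hypothetical choices made for this example.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: noisy observations of an unknown relationship.
x = np.linspace(-1, 1, 50)
y = 2.0 * x**3 - x + rng.normal(scale=0.1, size=x.size)

# Step 1: choose a class of functions (here, polynomials of degree 3).
degree = 3

# Step 2: choose a method to fit them to data (here, least squares).
coeffs = np.polyfit(x, y, degree)   # minimizes the sum of squared errors
y_hat = np.polyval(coeffs, x)

print("fitted coefficients:", coeffs)
print("mean squared error:", np.mean((y - y_hat) ** 2))
```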

Rational Expectations is a hypothesis about how people use information efficiently. The core definition of an “innovation” in information theory (by Turing and Shannon) is a surprise—something that could not have been predicted given the available information. If AI simply helps people process existing information faster and more accurately, it might make expectations more rational in the traditional sense, as people (or the AIs they use) become better at forecasting.

However, if AI starts to generate novel strategies or creates a complex, interacting system of AI agents, the nature of the “information set” and the process of learning could change fundamentally. The stable state would then be a new equilibrium in this human-AI hybrid system. The theory wouldn’t be discarded but would need to be expanded to model this more complex, multi-layered learning and expectation formation process. The key is to model the system, including both the human and AI agents, and their intertwined beliefs and learning rules.

Q: If we want to start our career or business in a few years, what kind of mindset should we build up during this time to face future challenges? How can we best combine the economics and logic we learn with the real business world?

Sargent: The mindset you need is precisely the one we’ve been discussing: the economic way of thinking. Great economists often say that at the core, it’s not that complicated. It’s about understanding incentives and constraints. In the social world, this includes the powerful force of people’s beliefs and expectations about each other, which create reputations and credibility.

You don’t necessarily need to learn an enormous amount of facts, but you need to learn how to think. Learn the fundamental, universal principles. The principles that help you understand my simple example about promising my wife I’d be home are the same principles that can help you analyze central bank credibility, international trade agreements, or the stability of a social safety net.

When you face a business problem or a policy question, boil it down. What are the incentives? What are the constraints? What are the beliefs and expectations of the different actors? How do their actions interact in an equilibrium? This framework is incredibly powerful and universally applicable. The economics you learn provides the logical structure to analyze these situations systematically, moving beyond anecdotal or superficial explanations. This analytical rigor is what will give you an edge in the business world.

Q: What do you think is the next big innovation?

Sargent: Well, before answering that, let me ask you: what do you mean by innovation? If you ask a statistician what “innovation” means, they’ll point you to two extremely smart people—Alan Turing and Claude Shannon—who basically invented information theory.

They needed a way to define “information” from data. They knew a tremendous amount of statistics—more than almost anyone at the time—and they relied heavily on statistical ideas. What they came up with was this: information is a surprise, something you couldn’t have predicted based on your prior beliefs, your Bayesian prior.

Suppose you have a big data set and then you give me one additional observation. If that new observation falls exactly where I expected it to fall, then it doesn’t teach me much. It doesn’t count as information.

But if the new observation lands somewhere unexpected—if it surprises me—then that forces me to revise what I thought before. That surprise is what Shannon and Turing would call information, and it’s also what a statistician might call an “innovation.”
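The statistician's definition can be seen in miniature with a hedged sketch: a forecaster using an assumed AR(1) rule receives two candidate observations, one landing exactly where predicted (zero innovation, no information) and one that surprises. All numbers here are hypothetical.

```python
# A minimal sketch of "information as surprise." The AR(1) forecasting
# rule and the data are invented purely to illustrate the definition.
import numpy as np

rho = 0.9                       # assumed persistence of the series
history = [1.0, 0.9, 0.85]      # past observations (the "big data set")

prediction = rho * history[-1]  # best guess given prior information

for new_obs in (0.765, 1.40):
    innovation = new_obs - prediction  # the part that could not be predicted
    print(f"observed {new_obs}: innovation = {innovation:+.3f}")
# The first observation lands exactly where expected (innovation 0.000,
# no information); the second is a surprise (innovation +0.635).
```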

So what’s going to be the next big innovation in economics? There’s a nice example of what I mean. It comes from Henri Poincaré, one of the greatest mathematicians and physicists at the turn of the 20th century. He was a French scientist, and around 1900 people asked him: “Tell us what you think the major innovations of the 20th century will be.” And Poincaré said, very honestly: “I have no idea.”

Then he went on to explain why. He said: imagine you were sitting in the year 1800. Think about the things that were discovered later—electricity, the unification of electricity and magnetism, the periodic table. In 1800, people didn’t even know enough to formulate the questions that would lead to those discoveries. They only had a handful of known chemical elements. They had no conceptual framework for imagining what was coming. So they were completely surprised. That’s the point: the biggest innovations are usually the ones nobody even knows how to predict.

You know what the problem is for you as a student? It’s actually the same problem I have. I deal with it every day, because I’m in the same boat as you.

Here’s what I do: I don’t try to sit down and think, “I’m going to create something totally new,” or “I’m going to invent a brand-new tool.” That’s not how it works. What I do instead is read a paper—say, a paper by Pengfei Wang. The ideas in that paper aren’t new to him, but they’re new to me. So I read the paper. And a couple of things might happen: I might think, “This is really good.” But I might also think, “He could have done something a little different here. He didn’t realize that the tricks he used could also be applied to another, very similar problem.” And I happen to know that other problem. So maybe I start working on it. Or maybe I go talk to him about it.

That’s what I do. And if I spent all my time worrying about what big important thing I should do over the next five years, I’d completely freeze up. It would shut me down.

Q: Why did you come here?

Sargent: I came here for a couple of reasons.

First, over the years, I’ve attended several scientific conferences in China—really world-class ones—especially in fields related to AI and machine learning. And in the past year, we’ve also had two or three excellent conferences here at this school. Many of the scholars who came are doing frontier research in AI and machine learning.

They are selected by different Chinese academic groups, and when they come, we learn from them—about the foundations of machine learning and how these tools can be applied in economics. We had a wonderful conference just a couple of months ago, and this December we’re having another truly world-class one. So there’s a lot of intellectual activity happening here. That’s one reason.

The second reason is more personal. When I was 40, China was still a very poor country. Shenzhen basically didn’t exist. And yet, in just half of my lifetime, this extraordinary transformation has happened. Many “miracles,” really. And I want to understand how that happened.

China has something it calls “socialism with Chinese characteristics.” What exactly does that mean? Those “Chinese characteristics” aren’t something you can just look up in Confucius. They had something very specific in mind.

Back when I was 30 or 40, China had a strong ideological view about how to run the economy: private property is bad, private firms are bad, profits are bad, entrepreneurs are antisocial. That framework was firmly in place—and for many years. And in my own country, there are still people who believe that. The new mayor of New York agrees with some of that. So the idea hasn’t disappeared.

But here’s what I find fascinating: Despite being lifelong believers in that old ideology, Chinese leaders at the time looked around and said, “This isn’t working.” So they turned to data. They looked at China’s level of wealth and poverty, its scientific output, its industrial capacity—cars, machinery, all sorts of things. And then they compared it to countries like Germany, the United States, and also Singapore.

Singapore was especially influential. Around 80% of its population came from this region of China—poor, undernourished people who left here with almost nothing. And yet Singapore was beginning to grow rapidly. Deng Xiaoping went there, spoke to its leaders, and they told him: “If you adopt some of the economic principles we use, you’ll do much better. You already have the people.” And Deng Xiaoping said: “Let’s try it.”

That’s how the market economy started being introduced into China.

They also learned from Singapore about institutions—things like trust, governance, how to reduce corruption. Singapore became a kind of model.

All of that was learning from data. Big data, in a sense. And here’s the main point: Data alone doesn’t tell you anything. You need a model. You need a theory to interpret the data.

Every large language model, every big-data system—you name it—is always learning within the structure of some underlying statistical or theoretical model. Sometimes people don’t tell you what that model is, but it’s in there. And that’s something you’ll learn about in your statistics classes.

Q: What are your thoughts on U.S.–China relations, or the differences between the two countries, especially regarding AI?

Sargent: My starting point is that people everywhere are fundamentally the same. I don’t really think in categories like “Chinese” or “American.” People care about similar things. So if differences exist, they mostly come from different governments and institutions, not different kinds of people.

When it comes to AI, China is doing extraordinary work. In some areas, China is ahead. For example, the AI systems in some Chinese cars are extremely advanced. What impresses me is that these systems rely on the same mathematics economists use—optimal control, filtering, and even solving Hamilton–Jacobi–Bellman equations in real time.
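For readers meeting the term for the first time, the Hamilton–Jacobi–Bellman equation has a standard textbook form; the generic deterministic, infinite-horizon version below is supplied for reference and is not quoted from the talk.

```latex
\[
  \rho\, V(x) \;=\; \max_{u}\,\bigl\{\, r(x,u) + V'(x)\, f(x,u) \,\bigr\}
\]
```

Here x is the state (say, a car's position and speed), u the control, r the instantaneous payoff, f the law of motion, and ρ a discount rate; a driving system must solve something of roughly this form in real time.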

That’s the key point: behind all the impressive AI applications is a very concrete set of mathematical tools. Machine learning terms—like “reinforcement learning”—may sound new, but at their core they’re just recursive least-squares algorithms designed for specific function classes. The models inside those cars are basically applying filtering and control theory while making predictions about the environment.
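As a concrete, hedged illustration of that claim, here is a minimal recursive least-squares learner for a linear function class; the coefficients and the simulated data stream are invented, not drawn from any system Sargent described.

```python
# A minimal sketch of recursive least squares (RLS): update a linear
# model one observation at a time. All parameters here are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
true_theta = np.array([2.0, -1.0])   # unknown coefficients to be learned

theta = np.zeros(2)                  # current estimate
P = np.eye(2) * 100.0                # covariance-like matrix; large = diffuse beliefs

for t in range(200):
    x = rng.normal(size=2)                      # regressor observed at time t
    y = true_theta @ x + rng.normal(scale=0.1)  # noisy observation

    # RLS update: nudge the estimate toward the new data point,
    # scaled by the gain applied to the innovation (the surprise).
    innovation = y - theta @ x
    gain = P @ x / (1.0 + x @ P @ x)
    theta = theta + gain * innovation
    P = P - np.outer(gain, x @ P)

print("estimated coefficients:", theta)   # converges close to [2, -1]
```

Note how the update weights the innovation—the surprise in each new observation—echoing the definition of information discussed earlier.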

Driving is a good example: the system has to anticipate where pedestrians and other cars are going, forming expectations—exactly the kind of problem that economists think about. Sometimes it even becomes a game-theoretic interaction among multiple agents.

Over the last eight or nine years, I’ve been puzzled by what’s happening in global economic policy. Economists around the world overwhelmingly agree that trade wars are foolish and that tariffs are harmful. Yet a U.S. president said, “It’s easy to win a trade war.”

So we have to ask: Is he joking? If he’s serious, then he’s simply ignorant of economics. If he’s joking, then why is he doing it?

When you look deeper, you see what’s going on: the country as a whole loses from a trade war, but certain groups inside the country gain. Tariffs lower overall efficiency, but they protect inefficient producers from competition. Those protected firms—and their workers—value that protection and are willing to support the politician who gives it to them.

But from the standpoint of most economists, this is not a good way to organize an economy.

Historically, we know trade barriers are harmful because whenever tariffs go up, one of the first things that emerges is smuggling. Tariffs create profits for smugglers. So, in that sense, smugglers are the ones who “vote” for trade wars.

Do you have any follow-up questions, or questions that we have not yet asked Professor Sargent?

Q: Richard S. Sutton (the father of reinforcement learning) mentioned that large language models can't reach real intelligence because they lack a model of the world and can't learn from experience. Do you agree?

Sargent: The United States is betting enormous amounts of money—trillions—on scaling up LLMs. Much of that investment is financed by borrowing, and it’s being used to buy chips, build data centers, and train ever-larger models. The hope is that by making models bigger, they will somehow “jump” into general intelligence.

But these models still do mainly pattern recognition and analogy making. They don’t solve the subtle cognitive problems we need for real intelligence. And when you ask companies why they believe scaling alone will work, they can’t really explain it.

There is also Yann LeCun, a leading machine-learning scientist from France, now at NYU and formerly at Meta. He argues strongly that current LLMs are based on a wrong model of intelligence, and therefore no amount of money will make them reach AGI. He points out that even a cat or dog has forms of understanding LLMs completely lack. LeCun believes a fundamentally different architecture is needed.

This matters because the major U.S. tech companies—Amazon, Google, OpenAI, Anthropic—are all in a race doing essentially the same thing. It resembles a bubble: massive spending, same strategy, and same expectation of a breakthrough. Financially, it looks a lot like the dot-com bubble of 1999–2000. Historically, such situations end in consolidation—maybe one company survives, maybe none, depending on whether the underlying model is correct.

If LeCun is right, most of them will fail.

For your own research, this is a reminder: people will always tell you what’s worth working on and what isn’t. When I was young, well-known economists told me not to work on rational expectations—they thought everything important was already known. But I pursued it because I found it interesting and didn’t fully understand it yet.

That’s what research is. It’s risky and uncertain—Hyman Minsky once told me this when I was your age. He said: being a lawyer is safe. Being a researcher isn’t. Every idea you pursue is a gamble—you don’t know if it’s good, and even if it’s good, you don’t know if anyone will appreciate it. Some people love that uncertainty; others hate it.

But that uncertainty is the nature of academic work.

Q: Could you give some advice for students who wish to pursue a PhD?

Sargent: Let me give you an example. There was a student here who got a PhD from PHBS. When he first arrived, he knew very little economics. But five years later, he somehow became a very good economist. He won’t admit it, but he even teaches me things now. He shows me papers, walks me through them, and he has read a lot in his field.

What I noticed is: he isn’t afraid of math or hard work, and he actually enjoys it. He has built up solid mathematical tools, and he’s not scared of reading any paper. Some papers even I hesitate to read, but he just dives in and discusses them.

Those are useful characteristics.

You do not need to be very good at math to do economics. I’m not good at math. You just need to be good enough. You don’t need A’s in every math class. You can learn the math you need later and still use it in economics.

Economics is not math. You can be a great mathematician and a terrible economist. Economics is about the kinds of questions we were talking about—reputation, incentives, behavior—things that are not always written in precise mathematical form. Some mathematicians become impatient and say, “Give me a definition.” But economics often starts before the definitions are formalized.
