As a new medium of communication, large language models (LLMs) learn values and attitudes from human-generated data, giving them the potential to capture and reflect public opinion. This talk will examine the capacity of LLMs to simulate public opinion from two perspectives. First, we will investigate the extent to which LLMs can represent public opinion across nations and social groups, using an interpretable, three-dimensional framework that considers data sources, opinion distributions, and prompt language. Second, we will explore the potential of LLMs to simulate public opinion at the individual level through “silicon samples,” focusing on the mechanisms that underlie this influence rather than on empirical findings alone.