The generative power of large language models (LLMs) is reshaping the information ecosystem, enabling not only enhanced communication but also the production of misinformation at scale. This talk examines emerging patterns in AI-generated misinformation, its diffusion across platforms, and the communicative risks it poses to public trust, epistemic stability, and information integrity. Drawing on recent empirical studies, it analyzes diverse forms of synthetic content, from fabricated citations and deepfake narratives to algorithmically personalized propaganda, and evaluates the limitations of both human and machine detection strategies. It further outlines multi-level governance responses, including transparency-enhancing mechanisms and value-aligned interventions. The goal is to reframe misinformation not merely as a technical problem but as a socio-political and narrative challenge that demands interdisciplinary solutions.