Yann LeCun Predicts LLMs Will Be 'Useless' Within Five Years at Seoul AI Symposium


Seoul, South Korea – Yann LeCun, Meta's Chief AI Scientist and a recipient of the Turing Award, delivered a striking keynote address at the Global AI Frontiers Symposium on October 27, 2025, asserting that Large Language Models (LLMs) will become "useless within five years." Speaking in Seoul, LeCun advocated a fundamental reorientation of AI research toward "world models" that can comprehend and reason about the physical environment, moving beyond current text-centric learning paradigms. His prediction marks a critical juncture in the trajectory of AI development.

LeCun, a foundational figure in deep learning, articulated that contemporary LLMs are inherently limited in their capacity to manage intricate action sequences, engage in extensive logical reasoning, and guarantee control and safety. He firmly stated that "text alone cannot achieve human-level AI," underscoring the imperative for AI systems to acquire knowledge autonomously through sensory inputs, particularly video. This perspective directly challenges the prevailing emphasis on generative AI models, advocating for a transformative new framework in AI design.

The keynote, titled "Training World Models," was a central feature of the Global AI Frontiers Symposium, an event co-organized by Korea's National AI Research Hub and the Global AI Frontier Lab. Kyunghyun Cho, a co-director of the Global AI Frontier Lab and a professor at NYU, acknowledged LeCun's presentation on social media:

> "an opening keynote in seoul by @ylecun this image is a great example for computer vision and signal processing courses."

Cho's post highlighted the technical and educational significance of LeCun's talk.

During his address, LeCun introduced JEPA (Joint-Embedding Predictive Architecture) as a promising non-generative AI model designed to understand the world through visual data, including images and video. He also presented V-JEPA 2, an advanced iteration focused on video learning, pre-trained on over one million hours of footage and millions of images. LeCun posited that widespread adoption of JEPA-style architectures would drive a profound shift in AI, from inference-based approaches to optimization-driven methodologies.
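The generative-versus-JEPA distinction can be illustrated with a toy sketch. The following is a minimal NumPy illustration, not the actual V-JEPA architecture: all dimensions, the `tanh` encoder, and the linear predictor are invented for exposition. The key contrast is where the prediction error is measured: a generative model scores error in input (pixel) space, while a JEPA-style model encodes both context and target and scores error between predicted and actual embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "video": context frames and a future target frame, flattened to
# vectors. Sizes are illustrative only.
context = rng.standard_normal(64)   # visible frames
target = rng.standard_normal(64)    # frames to be predicted

# A shared encoder and a small predictor, here just fixed random maps
# (no training loop; this only shows where each loss is computed).
d_embed = 16
W_enc = rng.standard_normal((d_embed, 64)) * 0.1
W_pred = rng.standard_normal((d_embed, d_embed)) * 0.1

def encode(x):
    """Map raw input into a low-dimensional representation."""
    return np.tanh(W_enc @ x)

# Generative-style objective: predict raw pixels, score error in pixel space.
pixel_loss = np.mean((context - target) ** 2)

# JEPA-style objective: predict the *embedding* of the target from the
# embedding of the context, and score the error in representation space
# (in practice the target encoder is typically a momentum/EMA copy).
z_context = encode(context)
z_target = encode(target)
z_predicted = W_pred @ z_context
embedding_loss = np.mean((z_predicted - z_target) ** 2)

print(f"pixel-space loss:     {pixel_loss:.3f}")
print(f"embedding-space loss: {embedding_loss:.3f}")
```

Predicting in embedding space lets the model ignore unpredictable pixel-level detail (lighting noise, texture) and focus capacity on the abstract structure of the scene, which is the motivation LeCun gives for non-generative world models.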

The symposium also featured contributions from other leading AI researchers, such as Yejin Choi of Stanford University, who explored the democratization of generative AI and the inherent limitations of scaling laws. This event represented a significant collaborative endeavor between Korean and U.S. AI research institutions, aiming to explore the current state and future prospects of AI while fostering international cooperation in the field. The collective discussions underscore a pivotal moment in AI's evolution, with experts like LeCun championing transformative research pathways.