
San Francisco, CA – A recent post by prominent interviewer and commentator Dwarkesh Patel, titled "Thoughts on AI progress (Dec 2025)," has reignited discussion within the artificial intelligence community over whether current scaling paradigms can deliver Artificial General Intelligence (AGI). Patel's accompanying tweet asked, "What are we scaling?" and linked to a detailed analysis questioning whether increasing model size and pre-baked skills are genuinely advancing AI toward human-like learning.
Patel argues that current AI development heavily relies on "mid-training," where extensive resources are spent creating environments to teach models specific skills, such as using software. He posits a fundamental tension: either models will soon learn autonomously, rendering this pre-baking redundant, or they won't, indicating AGI is not imminent. "Humans don’t have to go through a special training phase where they need to rehearse every single piece of software they might ever use," Patel stated in his post.
The debate centers on whether the impressive benchmark performance of large language models (LLMs) translates into broad economic utility and true generalization. Many experts, including those cited by Patel, suggest that while AI has made significant strides, it still lacks the "critical core of learning that an actual AGI must possess": the ability to learn from semantic feedback and self-directed experience, and to generalize across diverse, context-specific tasks.
Patel also addresses "goalpost shifting" in definitions of AGI, noting that today's models surpass what many would have considered AGI a few years ago, yet still fall short of automating a significant portion of knowledge work. He suggests that the economic impact of current models is "4 orders of magnitude off" from what would be expected if they were as capable as human knowledge workers, a gap of roughly a factor of 10,000. He forecasts "actual AGI," which he characterizes as billions of human-like intelligences on a server capable of merging what they learn, within the next 10 to 20 years.
The discussion highlights a growing gap between rapid gains in AI capabilities and a more sober assessment of what constitutes general intelligence and how it will integrate into society. As investment in AI continues to surge, the industry grapples with the foundational question of whether current scaling strategies lead toward a truly intelligent future or merely produce more sophisticated, task-specific tools.