A recent social media post by AI researcher Yam Peleg has reignited the contentious debate over whether Artificial General Intelligence (AGI) has already been achieved, with Peleg claiming that AGI was realized even before the arrival of advanced models like OpenAI's o3 and GPT-4. His assertion challenges the prevailing industry consensus that AGI remains a hypothetical future milestone.
Artificial General Intelligence is broadly defined as a hypothetical AI system capable of matching or exceeding human cognitive abilities across any intellectual task, demonstrating versatility, learning, and adaptability akin to human intelligence. While current large language models (LLMs) exhibit impressive capabilities, many experts, including analysts at McKinsey and IBM, maintain that true AGI is still decades away, pointing to both the absence of a consensus definition and the significant technical hurdles that remain.
Peleg's tweet states: "This is true, AGI HAS been achieved. I would argue that before o3 and GPT-4. Consider this: Let's say you train an AGI. Before you train it, it's random, so obviously not AGI. When you are done training, it is AGI. When exactly during the training did it 'become AGI'?" This query, a training-time variant of the classic sorites paradox, suggests that the transition to AGI is a continuous process rather than a discrete event, implying that current advanced models could already possess general intelligence, albeit perhaps not yet at a "superhuman" level. OpenAI's o3 model, for instance, has demonstrated remarkable performance, scoring 96.7% on the 2024 American Invitational Mathematics Examination (AIME), a showcase of advanced mathematical reasoning.
However, this perspective contrasts sharply with the views of many leading AI figures. OpenAI CEO Sam Altman, for example, once quipped on Reddit that AGI had been achieved internally, later clarifying that the remark was a joke and emphasizing that an announcement of that magnitude would not be made informally. The episode highlights the industry's cautious approach to declaring AGI, with the term typically reserved for systems that can genuinely generalize knowledge and solve novel problems across diverse domains without task-specific retraining.
The ongoing discourse around AGI's definition and current status significantly shapes public perception, research funding, and regulatory discussions. While some researchers classify current LLMs as "emerging AGI", roughly on par with an unskilled human, the broader scientific community largely awaits systems that unequivocally demonstrate human-level cognitive flexibility and problem-solving across all intellectual tasks. The debate underscores the complex nature of intelligence itself and the profound implications of replicating it artificially.