Research Paper Estimates 83% Probability of Artificial General Intelligence


A new research paper, "The Advent of Technological Singularity: a Formal Metric," estimates an 83% probability for the advent of Artificial General Intelligence (AGI). The paper, authored by J. A. Lara, D. Lizcano, M. A. Martinez, and J. Pazos of UDIMA (Madrid Open University), was recently highlighted by "Dr Singularity" on social media, who simply tweeted "paper" alongside a link to the study.

The research introduces a novel metric intended to objectively measure the current state of progress toward technological singularity, defined as the point where AGI surpasses human intelligence. Applying Bayes' theorem, the authors combined various "sorts" (categories of indicators) and "evidences" (milestones in AI development) to arrive at their probabilistic conclusion. This methodology aims to move beyond mere opinion, providing a quantitative assessment of AGI's likelihood.
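The core mechanism described above, updating a probability as each new piece of evidence arrives, can be sketched with a minimal Bayesian update loop. Note that the priors, milestone names, and likelihood values below are purely illustrative assumptions for demonstration; they are not the figures or categories used in the paper itself.

```python
# Minimal sketch of iterative Bayesian updating, as described above.
# All numbers and milestone labels are hypothetical illustrations,
# NOT values taken from the paper.

def bayes_update(prior, p_evidence_given_agi, p_evidence_given_no_agi):
    """Return the posterior P(AGI | evidence) via Bayes' theorem."""
    numerator = p_evidence_given_agi * prior
    denominator = numerator + p_evidence_given_no_agi * (1 - prior)
    return numerator / denominator

# Start from a neutral prior, then fold in a sequence of AI
# milestones ("evidences"), each with assumed likelihoods of being
# observed in a world heading toward AGI vs. one that is not.
prob = 0.5
milestones = [
    ("superhuman game play", 0.9, 0.5),
    ("large-scale language understanding", 0.8, 0.4),
    ("AI-assisted scientific discovery", 0.7, 0.5),
]
for name, p_e_agi, p_e_no_agi in milestones:
    prob = bayes_update(prob, p_e_agi, p_e_no_agi)
    print(f"After {name}: P(AGI) = {prob:.3f}")
```

Each milestone that is more likely under the "AGI is coming" hypothesis than under its negation pushes the posterior upward, which is how a sequence of observed AI achievements can accumulate into a high overall probability.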

The concept of AI singularity, where AI systems exceed human cognitive abilities, is a pivotal topic in global policy discussions. A white paper from the Digital Cooperation Organization (DCO), published in March 2025, emphasizes that this concept is no longer distant speculation but a "looming reality." It highlights AI's potential to revolutionize economies and societies, while also posing significant challenges such as workforce displacement and exacerbated global inequalities.

To navigate these complexities, the DCO white paper recommends strategic measures including public-private partnerships, the establishment of ethical AI practices, and substantial investments in workforce reskilling and education. These recommendations underscore the need for a human-centered approach to AI development, ensuring that technological advancements align with societal values and promote inclusive growth.

Recent advancements demonstrate AI's accelerating capabilities, with a June 2025 SingularityHub article noting AI's increasing role in scientific research, from protein folding to materials science and climate modeling. This rapid progress, while promising, also necessitates robust governance frameworks to address ethical concerns, such as algorithmic bias and accountability, as AI systems become increasingly autonomous.