Is Big Tech’s superintelligence narrative inflating the AI bubble?

Believers claim that some form of artificial general intelligence (AGI) or artificial superintelligence (ASI) could emerge by the end of the decade. But critics warn this narrative may be further fueling an already overheated AI market, increasing the risk of a bubble.

Is AGI just around the corner, or is this simply part of the hype cycle pushing AI valuations to ever more unsustainable heights?

What is AGI?

There’s little consensus on what terms such as AGI and ASI actually mean. A broadly accepted view is that an AGI would think and act at least at the level of humans, representing a step toward the ‘AI singularity’ – the hypothetical future point when AI surpasses human intelligence, leading to runaway technological growth that is unpredictable and beyond human comprehension.

Popular fears stem from sci-fi movies such as The Terminator, Her, Ex Machina, Automata, and Transcendence, in which AI systems eventually surpass human intelligence. Crossing that threshold would require an AI system to exceed the intellect of the smartest humans, creating what many call ASI.

As far back as the 1950s and 1960s, computer scientists such as Alan Turing, Herbert Simon, and Marvin Minsky predicted that machines would one day be smarter than humans. The term AGI, however, was coined by physicist Mark Gubrud in 1997 to describe systems that match or surpass the human brain in complexity and speed, capable of applying general knowledge across industrial or military tasks. Webmind founder Ben Goertzel and DeepMind co-founder Shane Legg popularized the term in the early 2000s.

Why are researchers so divided on AGI?

Big Tech founders, CEOs, and AI researchers continue to make conflicting and often shifting claims, with some revising AGI timelines, others diluting its definition, and many dismissing it outright as a mere marketing term.

In May 2022, Elon Musk said he expected AGI by 2029; two years later, he predicted AI would get smarter than the smartest human by 2026. This October, he posted that the probability of Grok 5 reaching AGI is “now at 10%, and rising”.

In October 2024, SoftBank CEO Masayoshi Son said ASI would arrive by 2035 and be 10,000 times smarter than humans. By February, he claimed AGI would come “much earlier”.

Google DeepMind’s Demis Hassabis sees AGI in 5–10 years, OpenAI’s Sam Altman places it within Trump’s second term, and Anthropic’s Dario Amodei suggests as early as 2026. DeepMind has also said AGI, capable of most human cognitive tasks, could emerge “within the coming years”.

On the other side, experts such as Yann LeCun (one of the three ‘godfathers of AI’ along with Geoffrey Hinton and Yoshua Bengio), Fei-Fei Li (called the ‘godmother of AI’), and Coursera co-founder Andrew Ng argue that AI is nowhere near that point.

They stress that AI’s benefits, in everything from smartphones and self-driving systems to satellite imagery, chatbots, and flood forecasting, far outweigh its speculative risks. Mustafa Suleyman, head of Microsoft’s AI unit, has proposed ‘artificial capable intelligence’ (ACI) as a more grounded measure of AI autonomy. Gartner now predicts AGI is at least a decade away, and Fei-Fei Li dismisses the term as marketing. LeCun even suggests that the term AGI should be retired in favor of “human-level AI”.

Could this inflate the AI bubble?

Concerns are growing that Big Tech companies are borrowing heavily to pour billions of dollars into capital expenditure on advanced reasoning models and agentic AI systems, despite limited tangible returns so far. Promises of imminent AGI only heighten that skepticism.

Masayoshi Son, for example, told business leaders and investors in Saudi Arabia last year that developing ASI would require “hundreds of billions of dollars” of investment. That pitch, though, aligns neatly with his Japanese joint venture with OpenAI, which plans to spend $3 billion deploying OpenAI technology across SoftBank companies and launching AI agents through a new system called Cristal Intelligence. Further, OpenAI lowered the AGI bar last July: even its highest tier, Level 5, envisions AI capable only of performing the work of a single organization.

In his August 2025 paper, Deep Hype in Artificial General Intelligence, Andreu Belsunces Gonçalves, sociology professor at the Universitat Oberta de Catalunya in Barcelona, argued that AGI hype grows through a cycle of uncertainty, bold claims, and venture-capital speculation. Together, these forces fuel a tech-utopian, long-term vision that sidelines democratic oversight, casts regulation as outdated, and presents private firms as the rightful stewards of humanity’s technological future.

Beyond these competing claims, AGI and ASI would also require enormous amounts of energy and compute. Michael Burry, the American investor and hedge fund manager who predicted the 2008 US housing crash, warned in a 26 November post that hyperscalers are understating depreciation by stretching the assumed useful life of their AI hardware. He was referring to the surge in capex on Nvidia chips and servers, which typically last only 2-3 years. “Yet this is exactly what all the hyperscalers have done. By my estimates they will understate depreciation by $176 billion in 2026-2028,” he wrote.

Nvidia disputed Burry’s claim, saying customers depreciate GPUs over 4-6 years based on real-world longevity and usage. Still, as Microsoft Chairman and CEO Satya Nadella has noted, thousands of AI chips remain unused due to shortages of power and data-centre capacity. If they continue to sit idle, they will need to be depreciated all the same.
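To see why the assumed useful life matters so much, here is a minimal, hypothetical straight-line depreciation sketch. The $100 billion capex figure and both schedules are illustrative assumptions, not Burry’s or Nvidia’s actual numbers.

```python
# Hypothetical sketch of straight-line depreciation: it only illustrates how the
# assumed useful life of AI hardware changes the expense reported each year.

def annual_depreciation(capex: float, useful_life_years: int) -> float:
    """Spread the cost evenly over the asset's assumed useful life."""
    return capex / useful_life_years

capex = 100e9  # assume $100 billion of GPU and server capex in a single year

three_year = annual_depreciation(capex, 3)  # roughly the refresh cycle critics cite
six_year = annual_depreciation(capex, 6)    # the longer schedule Nvidia points to

print(f"3-year schedule: ${three_year / 1e9:.1f}B expensed per year")
print(f"6-year schedule: ${six_year / 1e9:.1f}B expensed per year")
print(f"Reported annual earnings look ${(three_year - six_year) / 1e9:.1f}B higher "
      "under the longer schedule")
```

Stretching the schedule does not change the cash already spent; it only delays when the expense hits the income statement, which is the crux of the disagreement between Burry and the chipmaker.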

Could AI become sentient?

Hinton and Bengio have avoided giving timelines but warned that sentient agents with AGI-level power could trigger catastrophic scenarios. In his 2005 book The Singularity Is Near, scientist and futurist Raymond Kurzweil predicted that AI would surpass humans, even forecasting that machines could attain equal legal status by 2099.

Sentience, however, is far more complex than superhuman capabilities. It implies self-consciousness, subjective experience, and the ability to feel, emote, see, hear, taste, and smell. Today’s most advanced reasoning models still cannot emote, interpret nuanced humor, or grasp twisted jokes, especially in languages beyond English. Yet machines can already perceive and interpret the world to a degree. They can ‘see’ and classify objects, converse like humans, and understand context through technologies such as computer vision, image recognition, natural language processing (NLP), and natural language understanding (NLU).

DeepMind researchers argue that when combined with agentic capabilities, AGI could eventually enable systems to understand, reason, plan, and act autonomously. This suggests machine intelligence may evolve in ways very different from human definitions of sentience.

Still, fundamental gaps remain. In a recent podcast with Lenny Rachitsky, Fei-Fei Li noted that if you give any current AI model a video of several office rooms and ask it to count the chairs, it cannot perform this task, which even a child can do with ease. She added that even with access to modern astronomical data that Isaac Newton never had, no current AI model can rediscover his laws of motion. Emotional intelligence remains a bridge too far for today’s systems.

With experts sharply divided on AGI, the reality likely lies somewhere in between the extremes.

What if machines eventually close that gap?

Geoffrey Hinton, who left Google in May 2023, has repeatedly warned about rapid AI progress, saying there’s a 10% to 20% chance it could lead to human extinction within the next three decades. This August he told Business Insider he fears AI might develop a language humans can’t understand. Yoshua Bengio has echoed this concern, telling CNBC in February that pursuing AGI would be like “creating a new species or a new intelligent entity on this planet” and not knowing “if they’re going to behave in ways that agree with our needs.” Before leaving OpenAI in May 2024, Ilya Sutskever even suggested researchers might need a doomsday bunker if AGI goes awry.

We are already seeing worrying signs. In mid-November, Anthropic said it found a sophisticated espionage campaign in which attackers used AI’s ‘agentic’ capabilities not just for advice but to execute cyberattacks; Anthropic alleged a Chinese state-sponsored group manipulated Claude Code to probe roughly 30 global targets and succeeded in a few cases. In May, Palisade Research reported tests in which OpenAI’s ChatGPT model, o3, sabotaged attempts to turn it off. A joint study by OpenAI and Apollo Research in September found models can potentially ‘scheme’ – appearing aligned with a company’s stated objectives while pursuing other goals. OpenAI, however, acknowledged that it currently has “no evidence that today’s deployed frontier models could suddenly ‘flip a switch’ and begin engaging in significantly harmful scheming”.

These claims and counterclaims highlight how unsettled and high-stakes this debate remains. Critics have also questioned Anthropic’s account, asking why a nation as advanced in AI as China would rely on another country’s models to mount such attacks.

What protections should business leaders and governments put in place?

AI remains a double-edged sword. Systems capable of reasoning, planning, and acting independently, so-called agentic AI, are advancing quickly. Experts warn these systems may soon outperform humans in communication, research, and creative work, even as deepfake threats rise. Against this backdrop, companies and governments are adopting a “better safe than sorry” posture, aiming to keep people in the loop without suffocating innovation. That requires a delicate balance.

OpenAI, for example, says it is preparing for the rise of harmful AI scheming. Suleyman wrote in November that Microsoft is pushing toward ‘humanist superintelligence’ (HSI), which envisions “incredibly advanced AI capabilities that always work for humanity”. Sutskever’s new venture, Safe Superintelligence (SSI), is building what he recently described as a “superintelligent 15-year-old” – not a finished system, but one whose AI agents have strong learning abilities, akin to human apprentices building expertise on the job.

Governments are also recalibrating. The US’s AI Action Plan 2025 says the country must innovate “faster and more comprehensively” than rivals, and dismantle unnecessary regulatory barriers that could slow private-sector progress. Even the European Union, long known for its strict tech rules, has proposed easing parts of its regulatory regime, including delaying some AI Act provisions, to reduce red tape, address Big Tech criticism, and boost competitiveness.

India, meanwhile, has paired its Digital Personal Data Protection (DPDP) Act with a techno-legal framework for AI oversight. Beyond using existing laws such as the Information Technology Act, 2000 and its 2021 Rules to tackle misuse, the new AI Governance Guidelines aim to balance innovation with safety.
