First of all: what is artificial general intelligence (AGI)?
On the surface, that is an easy enough question. AGI is an as-yet-unrealized form of artificial intelligence (AI) that will match or beat human intelligence across all cognitive tasks.
According to Elon Musk’s pithy definition, it will be “smarter than the smartest human”. Meanwhile, Demis Hassabis, CEO of Google DeepMind, Alphabet’s [GOOGL] AI research lab, frames it as “a system that’s able to exhibit all the complicated capabilities that humans can.”[1]
We are currently in a stage of ‘narrow’ or ‘weak’ AI: high-functioning systems that replicate aspects of human intelligence for a single, dedicated purpose. Examples include chatbots, self-driving cars and recommendation engines, such as the one that powers Spotify [SPOT].
What narrow AI cannot do is extrapolate beyond its specific field of expertise, or make a decision that goes beyond its training data.[2] By contrast, AGI will be able to learn and generalize. It will be able to perform a range of tasks with very little oversight, better and much faster than humans can. It will be able to consistently pass the Turing test.
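For a concrete, if simplified, sense of that limitation, consider the toy sketch below: a model fitted to one narrow range of data gives plausible answers inside that range and nonsense far outside it. Everything here is illustrative and does not describe any production system mentioned in this piece.

```python
# Toy illustration, not from any system in this article: a deliberately
# misspecified linear model is fitted to quadratic data. Inside its
# training range it looks serviceable; far outside it, it fails badly,
# which is the sense in which narrow models cannot extrapolate.
import numpy as np

rng = np.random.default_rng(seed=0)
x_train = np.linspace(0, 10, 50)
y_train = x_train**2 + rng.normal(0, 1, size=50)  # true relationship: quadratic

slope, intercept = np.polyfit(x_train, y_train, deg=1)  # fit a straight line

def predict(x: float) -> float:
    return slope * x + intercept

print(f"x=5   -> predicted {predict(5):8.1f}, actual {5**2}")      # in the ballpark
print(f"x=100 -> predicted {predict(100):8.1f}, actual {100**2}")  # wildly off
```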
So far, so sci-fi. But this is where it begins to get messy.
When Will AGI Arrive?
Beyond this broad definition of AGI, there is a plethora of interpretations of what, exactly, it will look like. A recent paper produced by Google DeepMind states that, “if you were to ask 100 AI experts to define what they mean by ‘AGI,’ you would likely get 100 related but different definitions.”[3]
This has not stopped people talking about AGI as if it were a single, clearly defined thing. As Benj Edwards recently wrote for Ars Technica, “it may be impossible to create a universal definition of AGI, but few people with money on the line will admit it.”[4]
Similarly, people have wildly varying timelines for how long, exactly, it will take humanity to develop AGI.
Splashy headlines notwithstanding, it seems that a majority of industry insiders do not believe that AGI will be arriving any time soon. The Association for the Advancement of Artificial Intelligence recently asked 475 AI researchers if they thought scaling up current AI approaches would be sufficient to arrive at AGI. Some 76% of respondents thought it would be “unlikely” or “very unlikely”.[5]
In other words, most researchers believe a major paradigm shift will need to occur before AGI is even a possibility.
Again, entrepreneurs are more bullish than experts on this. AI Multiple Research found that most researchers are predicting it will arrive around 2040, while for entrepreneurs that figure is closer to 2030.
Musk, for example, sees AGI being realized by next year, while Nvidia [NVDA] CEO Jensen Huang gave a more conservative estimate of 2029.[6]
It is worth recalling, however, that predictions have shifted in recent years.
Current optimism is driven by rapid progress in large language models (LLMs) and expanding computational power. Still, current AI systems fall well short of the broad adaptability and independence that define human-level intelligence.
Other major hurdles remain, including immense resource demands and vague success metrics, not to mention stubborn ethical dilemmas.
In short, “Reports of the human mind’s looming obsolescence have been greatly exaggerated”, in the words of Gooder AI CEO Eric Siegel, writing in Forbes.[7]
Let’s take a look at the companies that are working hardest to achieve AGI.
OpenAI
As with most things AI, OpenAI is at the forefront of the push toward AGI.
Indeed, it’s the core of OpenAI’s mission. As Wired outlined earlier this month, it is also the key to the firm’s disintegrating relationship with backer-turned-frenemy Microsoft [MSFT]. A central clause in the two firms’ contract specifies that, essentially, if OpenAI achieves AGI, it won’t have to share the tech with Microsoft.[8]
But how close is OpenAI to developing AGI? At the start of this year CEO Sam Altman wrote on his blog that “We are now confident we know how to build AGI as we have traditionally understood it.”
One significant development, announced earlier in July, is the Internally Guided Agent Framework, a system that enables LLMs to pause, think, act or refrain from acting, depending on the context.[9]
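The framework’s internals have not been published, but the description suggests a control loop that gates each step on context. A minimal, purely hypothetical sketch of that idea follows; every name and threshold below is an assumption for illustration, not OpenAI’s code.

```python
# Hypothetical sketch of a gated 'pause, think, act or refrain' loop of the
# kind the report describes. All names and thresholds here are invented for
# illustration; the framework's actual internals have not been published.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str     # one of "act", "pause", "refrain"
    rationale: str

def deliberate(task: str, confidence: float, risk: float) -> Decision:
    """Gate the agent's next step on self-assessed confidence and task risk."""
    if risk > 0.8:
        return Decision("refrain", "stakes too high to act autonomously")
    if confidence < 0.5:
        return Decision("pause", "think longer and gather more context")
    return Decision("act", f"proceed with: {task}")

print(deliberate("summarize a filing", confidence=0.9, risk=0.1))
print(deliberate("execute a trade", confidence=0.9, risk=0.95))
```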
Anthropic
Another start-up with some heavyweight backing — in this case Amazon [AMZN] — Anthropic is likewise pushing the boundaries of AI, with an eye on the far horizon of AGI.
Its Claude 3.5 Vision+ model displays multi-modal, agent-like behaviors. A recent development was the launch of a Financial Analysis Solution built on its Claude 4 engine, which can help financial professionals make investment decisions, analyze markets and carry out research, CNBC reported.[10]
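Anthropic’s underlying Messages API is publicly documented, so a developer sending Claude a research-style prompt of this kind would write something roughly like the sketch below. The model id is an assumption and should be checked against Anthropic’s documentation; this is not the Financial Analysis Solution itself.

```python
# Minimal sketch of a research-style request to Anthropic's Messages API,
# loosely in the spirit of the financial-analysis use case above. Requires
# the official `anthropic` package and an ANTHROPIC_API_KEY; the model id
# below is an assumption and should be checked against Anthropic's docs.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed Claude 4 model id
    max_tokens=512,
    messages=[{
        "role": "user",
        "content": "List the key risk factors in this 10-K excerpt: ...",
    }],
)
print(message.content[0].text)
```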
Back in June, Anthropic co-founder Ben Mann said that a key threshold will be when AI can pass what he called the “economic Turing test”: a workplace trial in which a contractor and an AI agent carry out the same month-long job. If the human supervisor judges that the AI did the better job, it will have passed the test.
Mann thinks that AGI could arrive by 2028.[11]
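The pass condition Mann describes reduces to a simple blinded comparison. The sketch below is a toy rendering of that protocol, with an invented scoring rubric standing in for a month of supervised work.

```python
# Toy rendering of the 'economic Turing test' described above. The rubric
# and work records are invented; in Mann's framing the supervisor would be
# blinded to which body of work came from the human and which from the AI.
def economic_turing_test(human_work: dict, ai_work: dict, score) -> bool:
    """Return True if the supervisor's rubric rates the AI's work higher."""
    return score(ai_work) > score(human_work)

passed = economic_turing_test(
    human_work={"deliverables": 4, "errors": 3},
    ai_work={"deliverables": 5, "errors": 1},
    score=lambda w: w["deliverables"] - w["errors"],  # crude stand-in rubric
)
print("AI passed the economic Turing test:", passed)  # -> True
```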
DeepMind
According to a recent blog post, Google’s DeepMind lab is “exploring the frontiers of AGI, prioritizing readiness, proactive risk assessment and collaboration with the wider AI community.” AGI could “provide society with invaluable tools to address critical global challenges, including drug discovery, economic growth and climate change.”[12]
Back in March, CEO Hassabis said he thought AGI would take five to 10 years to emerge. He thinks the main challenge is developing today’s AI systems to the point where they can understand real-world context.
“The question is, how fast can we generalize the planning ideas and agentic kind of behaviors, planning and reasoning, and then generalize that over to working in the real world, on top of things like world models — models that are able to understand the world around us,” Hassabis said.[13]
xAI
Elon Musk’s AI start-up recently launched Grok 4, the latest version of its chatbot, which it claims is “the most intelligent model in the world”.
However, the model quickly generated controversy when it came back with some problematic responses, among them that its name was “MechaHitler”.
It also seemed to be drawing on Musk’s own posts as a source. Explaining how this happened, xAI wrote that “The model reasons that as an AI it doesn’t have an opinion but knowing it was Grok 4 by xAI, searches to see what xAI or Elon Musk might have said on a topic to align itself with the company.”[14]
Notwithstanding the controversy, Grok 4 certainly displays some impressive capabilities, as well as new levels of initiative and autonomous reasoning. “These are not just tools anymore. They’re becoming collaborators,” an xAI engineer said at the launch event.[15]
Meta
Earlier in July, Meta [META] announced it plans to spend hundreds of billions of dollars building huge AI data centers in the US.
The first site, Prometheus, in Ohio, will go live in 2026. One of the data centers will be a “titan cluster” the size of Manhattan, CEO Mark Zuckerberg wrote on Threads.
The move is part of Meta’s push toward what it calls “superintelligence”, which is seemingly the company’s own term for AGI. The firm recently announced the founding of Meta Superintelligence Labs, which will be headed by some of the sector-leading figures that Meta has hired in its ongoing recruitment blitz.
“As the pace of AI progress accelerates, developing superintelligence is coming into sight. I believe this will be the beginning of a new era for humanity,” Zuckerberg wrote in an internal memo seen by CNBC.[16]
[1] https://www.cnbc.com/2025/03/17/human-level-ai-will-be-here-in-5-to-10-years-deepmind-ceo-says.html
[4] https://arstechnica.com/ai/2025/07/agi-may-be-impossible-to-define-and-thats-a-multibillion-dollar-problem/
[7] https://www.forbes.com/sites/ericsiegel/2024/04/10/artificial-general-intelligence-is-pure-hype/
[9] https://ai.plainenglish.io/agi-gets-tangible-openai-and-xai-edge-closer-to-the-future-abb8da86b11c
[13] https://www.cnbc.com/2025/03/17/human-level-ai-will-be-here-in-5-to-10-years-deepmind-ceo-says.html
Disclaimer
Past performance is not a reliable indicator of future results.
CMC Markets is an execution-only service provider. The material (whether or not it states any opinions) is for general information purposes only, and does not take into account your personal circumstances or objectives. Nothing in this material is (or should be considered to be) financial, investment or other advice on which reliance should be placed. No opinion given in the material constitutes a recommendation by CMC Markets or the author that any particular investment, security, transaction or investment strategy is suitable for any specific person.
The material has not been prepared in accordance with legal requirements designed to promote the independence of investment research. Although we are not specifically prevented from dealing before providing this material, we do not seek to take advantage of the material prior to its dissemination.
CMC Markets does not endorse or offer opinion on the trading strategies used by the author. Their trading strategies do not guarantee any return and CMC Markets shall not be held responsible for any loss that you may incur, either directly or indirectly, arising from any investment based on any information contained herein.
*Tax treatment depends on individual circumstances and can change or may differ in a jurisdiction other than the UK.