
Artificial general intelligence may be on the horizon in 5-10 years: DeepMind CEO Demis Hassabis

What researchers have built, he acknowledged, uses some of the same principles as biological intelligence.

Updated on: Feb 19, 2026 6:25 AM IST

NEW DELHI: Artificial general intelligence (AGI) could be on the horizon within five to ten years, but today’s AI systems remain “jagged intelligences” that are brilliant at some tasks and bafflingly poor at others, Google DeepMind chief executive Demis Hassabis, one of the foremost experts in the domain, said on Wednesday at separate events in New Delhi.

Demis Hassabis, chief executive officer of Google DeepMind, during an interview on the sidelines of the AI Impact Summit in New Delhi, India, on Wednesday, Feb. 18, 2026. (Bloomberg)

“We’re at a threshold moment where AGI — artificial general intelligence — is on the horizon,” Hassabis said at the keynote address on research at the India AI Impact Summit. He defined AGI as a system that can “exhibit all the cognitive capabilities humans can, including creativity, long-term planning, and things like that”.

But getting there will require solving fundamental gaps in current systems. He identified three critical shortcomings: the inability to learn continuously after deployment, the lack of coherent long-term planning, and above all, a stubborn inconsistency he called the biggest obstacle.

“Today’s systems are kind of like jagged intelligences. They’re very good at certain things, but they’re very, very poor at other things, including sometimes the same thing,” he said. Current models can win gold medals at the International Math Olympiad “but sometimes can still make mistakes on elementary math if you pose the question in a certain way”.


Hassabis — a trained neuroscientist who studied the brain’s hippocampus before co-founding DeepMind — said one lesson from the AI era is just how remarkably efficient the human brain is. Modern AI systems must ingest the entire internet to build an understanding of the world; the brain does not. “I see now how sample-efficient the brain is. It doesn’t need to ingest the whole of the internet to understand things,” he said. What researchers have built, he acknowledged, uses some of the same principles as biological intelligence but “has been manifested in a very different type of system than probably the way the brain works”.


To test whether a system had truly achieved AGI, Hassabis proposed an ambitious thought experiment: training a model with a knowledge cut-off of 1911 and seeing whether it could independently arrive at general relativity, as Einstein did in 1915. That, he said, would require the highest level of scientific creativity — the ability not merely to solve a problem but to identify the right question in the first place. “It’s much harder to come up with the right question and the right hypothesis than it is to solve the conjecture,” he said. “I think today’s systems clearly would not be capable of doing that.”

Hassabis made his remarks across two appearances at the summit — a keynote session moderated by Balaraman Ravindran of IIT Madras, and a panel discussion featuring Alphabet CEO Sundar Pichai and James Manyika, SVP of Research, Labs, Technology & Society.

At the panel, Pichai described the current period as a once-in-a-generation inflection point, calling AI “the biggest platform shift of our lifetimes”. He said India was “uniquely positioned” to benefit, citing its talent pool and digital public infrastructure.


On the technical path to AGI, Hassabis laid out a hybrid approach. He said the breakthrough will come from combining ideas pioneered at DeepMind in AlphaGo — techniques such as Monte Carlo tree search, which allow a system to think ahead by simulating possible future moves — with the broad world-knowledge already encoded in today’s large foundation models such as Google’s Gemini.

“We naturally need to combine the ideas we had with AlphaGo with today’s foundation models,” he said, acknowledging that the task is harder than in games — AlphaGo was the first computer programme to defeat a professional human Go player — because “you don’t have a perfect model of the world like the trivial transition matrix in games”.
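The planning technique Hassabis refers to, Monte Carlo tree search, can be illustrated with a minimal sketch. This is not DeepMind’s implementation — it is a toy example on a simple subtraction game (players alternately take 1-3 stones; whoever takes the last stone wins), showing the four MCTS phases of selection, expansion, simulation and backpropagation:

```python
import math
import random

# Toy game: a pile of stones; players alternately remove 1-3 stones,
# and whoever takes the last stone wins.

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones, self.player = stones, player  # player = side to move here
        self.parent, self.move = parent, move
        self.children = []
        self.untried = legal_moves(stones)
        self.visits = 0
        self.wins = 0.0  # wins for the player who moved INTO this node

    def ucb1(self, c=1.4):
        # Balance exploitation (win rate) against exploration (rarely tried moves).
        return self.wins / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)

def rollout(stones, player):
    # Simulate random play to the end of the game; return the winner.
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return player
        player = 1 - player

def mcts(root_stones, root_player, iters=2000):
    root = Node(root_stones, root_player)
    for _ in range(iters):
        node = root
        # 1. Selection: descend via UCB1 while the node is fully expanded.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one untried child.
        if node.untried and node.stones > 0:
            move = node.untried.pop()
            node.children.append(
                Node(node.stones - move, 1 - node.player, parent=node, move=move))
            node = node.children[-1]
        # 3. Simulation: random playout from the new node.
        if node.stones == 0:
            winner = 1 - node.player  # the previous mover took the last stone
        else:
            winner = rollout(node.stones, node.player)
        # 4. Backpropagation: update statistics up to the root.
        while node:
            node.visits += 1
            if winner != node.player:
                node.wins += 1
            node = node.parent
    # Recommend the most-visited move from the root.
    return max(root.children, key=lambda c: c.visits).move

if __name__ == "__main__":
    print(mcts(10, 0))  # suggested move for the first player with 10 stones
```

In AlphaGo, the random rollouts and hand-written move list above were replaced by learned neural networks that evaluate positions and propose moves — the combination with large foundation models that Hassabis describes follows the same pattern at a much larger scale.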

In practical terms, Hassabis argued, a purely self-taught system that learns everything from scratch through reinforcement learning alone — as DeepMind’s AlphaZero once did for chess and Go — is not the fastest route to AGI. It is far more efficient, he said, to start with foundation models that have already absorbed vast stores of human knowledge as a kind of working model of the world, and then layer reinforcement learning and planning on top. “Foundation models like Gemini are going to be a critical part of the ultimate AGI solution, and then we’ll have lots of interesting reinforcement learning on top,” he said.

Even as he outlined that potential, Hassabis urged vigilance. He flagged bio-security and cyber-security as the most pressing near-term risks and said that as AI systems grow more autonomous, the world will need a minimum set of internationally agreed standards to govern them. That effort, he cautioned, would demand sustained diplomacy.

“There is a societal challenge that will require international dialogue and ideally a minimum set of internationally agreed standards.”

His closing message drew the sharpest line between the technical and the political. Hassabis said he was confident researchers would eventually tame the technical risks of advanced AI. But forging the global consensus needed to manage its societal consequences, he warned, may prove the greater test. “I believe in human ingenuity, and I think we will solve the technical risks given enough time and brain power. But we need to do this internationally, and the societal challenges of that may actually end up being the harder problem than the technical ones,” he said.

‘Large language models lack true understanding’

Large language models (LLMs) are a dead end on the path to human-level intelligence, former Meta chief AI scientist and deep learning pioneer Yann LeCun said on Wednesday at the India AI Impact Summit, offering a sharply contrasting vision to that of Hassabis.

LeCun, who quit Meta last year and launched a startup called Advanced Machine Intelligence Labs, said the industry needs to pivot to “world models” — systems that build simulations of reality using physics, sensory data and spatial properties. “If we want AI systems to understand the real world and approach human-level intelligence – not just in language, coding, or mathematics, but in everything – we need systems that truly understand the world at an intuitive level, like babies learning how the world works,” he said.

He dismissed the term AGI itself. “We don’t have general intelligence at all. Humans are extremely specialised… We think we’re general because we can only imagine problems we ourselves can understand.” Current LLMs, he argued, store vast knowledge but lack true understanding. “Agentic systems cannot exist without predicting consequences of actions, and LLMs cannot do this. So we need world models. I see no alternative.”

ABOUT THE AUTHORS

Binayak Dasgupta reports on information security, privacy and scientific research in health and environment with explanatory pieces. He also edits the news sections of the newspaper.

Nisheeth Upadhyay is Editor and Chief Operating Officer at Hindustan Times Digital, where he is responsible for editorial strategy and growth, strengthening audience engagement and leading business functions. He began his journey as a journalist in the Hindustan Times newsroom in 2011, working closely with the print operations. In his first stint with the HT Media group, he worked as the Production Editor for the newspaper, coordinating production across desks and planning the daily news schedule and long-term projects. He also worked as the Homepage Editor and Shift Head for www.hindustantimes.com, managing and editing the news sections of the website. During this time, he picked up skills in tracking and writing breaking news. He later worked at ThePrint as Editor (Operations), acting as a member of the core editorial leadership. His responsibilities in the digital-only newsroom included heading the Integrated Desk, the infographics and photojournalism sections, and operations for the Hindi, Tamil and Marathi languages. He also anchored two weekly video shows on YouTube, AISight and Everybody’s Business.
