AGI will improve life, but it is still several major breakthroughs away: Stuart Russell
As with other potentially risky technologies such as nuclear power or air travel, the approach should be that before you release a product in the market, you have to show that it is safe, says Russell.
Artificial General Intelligence, the Holy Grail of the AI wave, is still “several breakthroughs away”, but while it may improve “quality of life” for everybody, it raises the fundamental question of the “purpose of a human life”, said Stuart Russell, a professor at the University of California, Berkeley, and one of the world’s leading experts on artificial intelligence.

Speaking to Anirudh Suri on the podcast AI Futures: The Road to India AI Summit 2026, Russell also discussed the new applications of AI, and whether we are indeed in an AI bubble. Edited excerpts:
How has the field of AI evolved over the last year, since the Paris Summit? What are some of the big items you think are important to note?
The first big thing that has happened in the last year is the access to systems that can tackle more complex problems and answer more difficult questions than you could with the simple Large Language Models (LLMs). The second thing is this idea of the AI system as an agent, something that can do things in the real world. The flexibility of LLMs is unprecedented in the history of AI.
The industry seems to be seeking Artificial General Intelligence. How close are we to it?
AGI would be transformative. It would enable us to deliver the highest quality of living that we could imagine to everybody. But there are two big questions. One is that if you create AGI, you are creating entities that are more capable, more powerful than human beings. So how do we expect to maintain power forever over entities more powerful than ourselves? The other big question with AGI is, even if we solve all that, what is then the purpose of human life?
If I look at the whole history of AI and how close we are to Artificial General Intelligence (AGI), my view is that we are still several major breakthroughs away. I have never bought into the scaling argument that says if we scale up the LLMs and the amount of compute, that will give us real intelligence. That is not correct.
The amount of data and energy these LLMs require are exploding. Are we again on an ecologically unsustainable path, like the Industrial Revolution?
Energy is one way in which this combinatorial explosion and data inefficiency in learning hits you in the face. If we are going to just brute force our way out of it by using more data and compute, it’s just an astronomical increase in the amount of compute.
Regarding the energy issue, the media is perhaps a bit overexcited. The numbers are exaggerated. Total data center energy consumption is around 2% of global electricity use, and 10-20% of that is for AI usage, which is less than the amount we use for televisions. Water is another thing that people talk about, but these data centers use at least 10 times less water than golf courses do. If you just look at the numbers, water is not something to be worried about at all.
Are we in an AI bubble?
Yes, the scale of the investment is vast – trillions of dollars, which is already 50 times greater than the Manhattan Project. However, we are several breakthroughs away from getting where we think this project is headed. Unless those breakthroughs happen, the bubble is going to burst, because the technology we have now cannot produce the returns that these investments are demanding.
Which applications of AI get you the most excited?
AI could be extremely beneficial for scientific research. The most significant advance is probably AlphaFold, for which John Jumper and Demis Hassabis won the Nobel Prize last year. In addition to materials discovery, where we are starting to see significant improvements, healthcare is another area, given that we collect vast amounts of data from each patient’s medical records.
The other area that I have high hopes for is education. We know that a good human tutor can dramatically improve learning. We could feasibly build AI tutoring systems for students at least through high school. And you might ask, why hasn’t that happened? Why aren’t we seeing significant improvements in the quality of education in regions where it is lacking? One of the reasons is that it is hard to make money in that business. It is a very difficult market, and the investment has to come from the philanthropic sector and governments.
How are different countries evolving their AI strategies?
China’s strategy has shifted from direct competition with the US to finding places in healthcare, education, the public sector and the private sector where AI can naturally deliver value: making the economy more productive, making citizens more productive and improving the quality of their lives, as opposed to trying to create AGI.
The US views the race to create AGI like the race to the moon in the 1960s. The Indian government instead favors the idea of asking where we can apply AI rather than being in a race. China and the US need to stop thinking about this as an arms race. No matter who gets AGI first, everyone loses, because we don’t know how to control systems that are more intelligent than human beings.
How are you hoping that the Summit will move the needle on safety?
When you look at other areas with potentially risky technology like nuclear power, air travel or medicines, the approach we take is to say before you can put that product in the market, you have to show us that it’s safe. We need to do the same thing with AI. Developers need to show that their systems are not going to cross those red lines. That’s an approach to legislation that could be effective.
The claim that regulation stifles innovation is also fallacious. It is actually failures that prevent progress: the failure at Chernobyl, the early jet aircraft crashes that set back aviation considerably, and so on. If you look at the regulations a restaurant has to comply with, they are far more onerous than the regulations that we are asking AI companies to comply with.
Give us your thoughts on how the AI Summit could help in starting to fundamentally rethink some of the conventional approaches in AI?
The summit is probably not the place where you’re going to convince companies and research institutes to try a different technical approach to AI. But it can be a place where we can shift the focus of what we’re trying to do with AI. How can we use this technology to deliver value in healthcare to improve the quality of health care, the quality of education, to jumpstart local businesses and entrepreneurial activity?
Anirudh Suri is a venture capitalist, host of The Great Tech Game Podcast and a nonresident scholar with Carnegie India.