India’s step into neuromorphic computing and the AI revolution
This article is authored by Brainerd Prince, director, Centre for Thinking, Language and Communication, Plaksha University.
Artificial Intelligence (AI) is set to create one of the largest market opportunities in history, with estimates placing its potential value between $3.5 trillion and $5.8 trillion. Capturing a significant slice of this market could redefine national economies, acting as a powerful engine of growth for decades to come. For India, harnessing AI is key to achieving the vision of a Viksit Bharat by 2047.
While AI has long been a subject of fascination, it has also seen cycles of breakthroughs and disappointments. A closer look reveals a critical flaw: these breakthroughs come with enormous energy demands and costly, time-consuming training processes. If nothing changes, projections suggest that AI's power needs could surpass global energy production by 2035, with profound economic and environmental consequences. This demands a leap in computing hardware that is dramatically more energy-efficient than what we have today.
Why is this leap necessary? It comes down to the ageing von Neumann architecture, the blueprint for nearly all computers built over the last 60 years. In this model, computation and memory are separated, which slows down operations and guzzles energy. For tasks requiring billions of calculations per second, like those used in AI, the von Neumann design has become a major bottleneck. What's worse, the data we generate and use in AI systems is often stored by large corporations, raising privacy concerns.
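A toy back-of-the-envelope sketch (not a model of any real hardware) illustrates why this separation is costly. For one pass of an n-by-n matrix-vector product, the workhorse operation of neural networks, a von Neumann machine must shuttle every weight from memory to the processor, while an in-memory design computes where the weights are stored. The layer size below is a hypothetical example.

```python
# Toy illustration: data movement in a von Neumann machine versus an
# in-memory (neuromorphic-style) design, for one n x n matrix-vector
# product. We count memory transfers only, ignoring everything else.

def von_neumann_transfers(n):
    # Every weight must travel from memory to the processor (n * n),
    # plus n input values fetched and n output values written back.
    return n * n + 2 * n

def in_memory_transfers(n):
    # Weights stay where they are stored; only the n inputs go in
    # and the n outputs come out.
    return 2 * n

n = 1024  # hypothetical layer size
ratio = von_neumann_transfers(n) // in_memory_transfers(n)
print(ratio)  # 513 -- roughly n/2 times more data movement
```

The gap grows linearly with layer size, which is why data movement, not arithmetic, dominates the energy budget of AI workloads on conventional hardware.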
The solution might be closer than we think—inside our own heads. The human brain, weighing less than two kg and consuming just 20 watts of energy, is capable of performing billions of operations per second, all while seamlessly storing and processing information in the same place. This extraordinary efficiency has inspired a new approach to computing, one modelled on the brain’s neural networks.
The concept of brain-inspired computing isn't new. In the 1980s, visionary American engineer Carver Mead laid the groundwork for what could become the future of computing. Fast forward to the 2010s, when industry giants like Intel and IBM reignited interest in brain-like computing. With advanced fabrication technologies at their disposal, these companies attempted to mimic the brain's learning processes using traditional binary transistors and software-driven systems. Unsurprisingly, this brute-force approach fell short.
The lesson was clear: To come anywhere close to the brain's computational efficiency, we need to reimagine computing with new circuit elements that can learn and adapt like biological neurons and synapses. We also need to rethink the entire computing architecture, moving beyond the limitations of von Neumann systems where memory and processing are separated.
The race to develop brain-inspired computers is not just about mimicking the brain’s processing power — it’s about doing so with the same energy efficiency and compactness that make the brain so remarkable. The question is: can we build machines that are as smart and efficient as the human brain? The challenge lies in creating computing systems that can store information in thousands of states and operate at the edge of chaos, just like the brain.
In a groundbreaking study published in Nature, a team led by Dr Sreetosh Goswami from the Indian Institute of Science, Bengaluru, developed a revolutionary molecular neuromorphic platform capable of storing and processing data in an astonishing 16,500 states, leaving traditional transistor-based computers, which operate in just two states, far behind. By harnessing the motion of ions within a molecular film, the team created a system that mimics the brain's intricate method of data processing. The molecules and ions, wiggling within the film, generate a multitude of unique memory states, and each movement is mapped to a distinct electrical signal. The result is, in effect, a computer that captures thousands of computing states while excelling in both energy efficiency and compactness.
The breakthrough doesn't stop there. In a stunning technological leap, the team used their molecular platform to recreate NASA's iconic Pillars of Creation image from the James Webb Space Telescope on a simple tabletop setup. What's more, they achieved this feat 4,000 times faster and with 460 times less energy than a traditional computer would require.
With 14-bit precision, equivalent to 16,384 analog levels, this chip could transform fields ranging from AI to scientific computing. Imagine training complex AI models, such as Large Language Models (LLMs), directly on personal devices like laptops and smartphones, a process that currently relies on vast server farms and invasive personal data collection by big corporations. This invention could bring AI processing to individual users, offering unprecedented data privacy and democratising access to advanced AI tools. This is arguably one of the most disruptive computing innovations to emerge from India, with the potential to position the nation at the vanguard of global technological advancements.
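The figures quoted above hang together arithmetically, and a quick check makes the comparison with binary hardware concrete: 14 bits of precision means 2^14 = 16,384 distinguishable levels, and a device that holds 16,500 states carries roughly 14 bits of information where a binary transistor carries one.

```python
# Back-of-the-envelope check of the precision figures quoted above.
import math

levels_14_bit = 2 ** 14
print(levels_14_bit)  # 16384 analog levels, matching "14-bit precision"

# Information capacity of one device with 16,500 distinguishable
# states, versus 1 bit for a conventional binary transistor.
bits_per_device = math.log2(16500)
print(round(bits_per_device, 2))  # ~14.01 bits per device
```

In other words, a single such molecular element stores about as much information as fourteen binary transistors, which is the root of the platform's density and energy advantages.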