The algorithmic trap: From feeds to the streets
Psychologists have long known that the human mind bends easily to two forces. The ‘Mere Exposure Effect’ shows that repeated exposure to the same idea makes us more likely to like and accept it. The ‘Confirmation Bias’ reveals our instinct to seek out and trust information that already matches our beliefs. On their own, these quirks of mind explain why rumours stick and opinions harden.
But inside the architecture of social media, powered by AI-driven algorithms, they become supercharged. Platforms don’t just mirror our psychology — they weaponise it. Every pause, every like, every share is not only training the algorithm; it is training us in return, narrowing what we see, intensifying what we feel, and convincing us that those feelings are reality.
This cycle is no longer abstract. From Nepal’s streets to India’s communal flashpoints and even the US Capitol riots, the blend of human bias and machine learning has become a feedback loop with political consequences. What began as a quirk of cognition has been scaled into a system of radicalisation.
The recent unrest in Nepal is a vivid reminder. What began as scattered online grievances quickly snowballed into protests. Social media platforms, powered by recommendation engines, locked onto user outrage and amplified it. Within days, feeds became saturated with similar narratives, hashtags multiplied, and friend-recommendation systems clustered angry voices into digital mobs. Those mobs then spilled into real streets.
This wasn’t orchestrated by a single group. It was orchestrated by algorithms optimised for engagement. Every extra second of attention taught the AI what to promote next. In turn, that feed trained humans to be more outraged, more certain, more radical.
India, too, has seen how quickly one viral clip can harden opinion. From communal rumours on WhatsApp to polarising videos on Instagram, digital content spreads within tight-knit communities at lightning speed. When your entire digital circle is fed the same clips, it no longer feels like “one perspective.” It feels like reality.
Here the inversion is stark: We think we are training the AI with our preferences. In truth, the AI is training us, nudging us on what to believe, who to distrust, and even when to act.
The US Capitol riot of January 6, 2021, is another case study. Extremists may have sought out radical content, but recommendation engines accelerated its visibility and amplified their sense of collective strength. In Europe, far-Right and anti-immigrant networks grew in much the same way, fed and clustered by algorithms. Large-scale studies complicate the picture: they show that algorithms don’t necessarily radicalise average users. But for the already vulnerable, these systems are accelerants. And once groups are clustered, beliefs intensify. In short: People may light sparks, but AI fans the flames.
Traditionally, “training AI” involved feeding it labelled data and refining it through feedback loops. Today, recommendation engines use that same model on humans—reinforcing content you engage with by showing it more often, gradually pruning away alternative perspectives, and conditioning communities to form around a narrow set of ideas that then feel universal. It’s essentially the same logic of reinforcement learning, only this time turned back on its creators.
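The dynamic can be made concrete with a toy simulation. The sketch below is purely illustrative: the topic labels, engagement probabilities and update rule are hypothetical, not any platform’s actual ranking system. It shows how a feed that simply boosts whatever gets engaged with, and quietly demotes what does not, drifts toward the single topic a user reacts to most.

```python
# Toy sketch of an engagement-driven feedback loop (illustrative only).
# Topic names, affinities and the update rule are hypothetical.
import random

random.seed(42)

# The user's (hidden) tendency to engage with each topic.
user_affinity = {"outrage": 0.8, "sports": 0.4, "science": 0.3}

# The platform's learned weights: how often each topic gets recommended.
feed_weights = {topic: 1.0 for topic in user_affinity}

LEARNING_RATE = 0.5

def pick_post(weights):
    """Sample a topic in proportion to its current weight."""
    topics, w = zip(*weights.items())
    return random.choices(topics, weights=w, k=1)[0]

for step in range(200):
    topic = pick_post(feed_weights)
    engaged = random.random() < user_affinity[topic]  # did the user linger, like or share?
    if engaged:
        # Reinforce what got attention...
        feed_weights[topic] += LEARNING_RATE
    else:
        # ...and gradually prune what didn't.
        feed_weights[topic] = max(0.1, feed_weights[topic] - 0.1)

total = sum(feed_weights.values())
share = {t: round(w / total, 2) for t, w in feed_weights.items()}
print(share)  # the topic with the highest engagement rate comes to dominate the feed
```

Run long enough, the loop converges on whichever topic wins the most attention, which is exactly the pruning of alternative perspectives described above.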
With over 800 million internet users and a demographic skewed young, India faces perhaps the highest stakes. A nation where politics and religion are deeply emotive cannot afford to let AI systems, built in Silicon Valley, shape its public imagination unchecked. Already, the IT-BPM sector makes up 7.4% of GDP, much of it dependent on US contracts and platforms. But the bigger risk is social: What happens when billions of micro-decisions in Palo Alto start nudging beliefs in Patna or Pune?
If Nepal serves as a warning, India must act on the lesson by proactively building safeguards into its digital ecosystem. This means designing diversity into feeds to ensure exposure to credible, cross-cutting perspectives; adding friction to virality to slow the wildfire spread of sensitive content; auditing algorithmic clustering to prevent the creation of ideological silos; and educating users that every click not only teaches the AI but also reshapes what they will be taught tomorrow.
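As a rough illustration of what “designing diversity into feeds” could mean in practice, consider the sketch below. It is a hypothetical re-ranker, not a description of any deployed system: it discounts candidate posts whose topic already saturates a user’s recent feed, so at least some cross-cutting content survives the ranking.

```python
# Hypothetical diversity-aware re-ranker (illustrative sketch only).
from collections import Counter

def rerank(candidates, recent_topics, diversity_weight=0.5):
    """Re-rank candidate posts, discounting topics already saturating the feed.

    candidates    -- list of (post_id, topic, relevance_score) tuples
    recent_topics -- topics of posts the user was recently shown
    """
    seen = Counter(recent_topics)
    total_seen = max(1, sum(seen.values()))

    def adjusted(post):
        _, topic, relevance = post
        saturation = seen[topic] / total_seen              # share of feed this topic already has
        return relevance - diversity_weight * saturation   # penalise over-represented topics

    return sorted(candidates, key=adjusted, reverse=True)

# Example: "outrage" posts score highest on raw relevance but already fill the feed.
candidates = [("p1", "outrage", 0.9), ("p2", "local-news", 0.7), ("p3", "science", 0.6)]
recent = ["outrage"] * 8 + ["local-news"] * 2
print([post_id for post_id, _, _ in rerank(candidates, recent)])
# local-news and science now outrank the saturated outrage post
```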
The great illusion of our time is that we are training machines. In reality, machines are training us to react faster, to believe harder, to belong to digital herds. Nepal’s unrest showed how quickly online conditioning can leap into the streets. India, and the world, should treat this as more than a technical issue. It is a civic one.
Because when AI trains humans, the question is no longer what kind of technology we build; it is what kind of society we are becoming.
This article is authored by Atul Rai, CEO, Staqu Technologies.