New rules for living with intelligent machines
This article is authored by Aalok Thakkar, assistant professor, Computer Science, Ashoka University, Delhi-NCR.
For most of the history of computing, we have relied on a simple and reassuring principle: computers do exactly what they are told. A program is written, inputs are specified, and the output follows in a predictable way. If something goes wrong, the cause can usually be traced to a bug or a faulty instruction. This predictability has quietly shaped modern life. It supports our bank transactions, airline bookings, hospital records, and digital communications. We may not see the code, but we trust the systems because they behave consistently.

That certainty is beginning to change.
Today’s Artificial Intelligence (AI) systems are no longer confined to executing fixed instructions. Increasingly, we give them broad goals and let them figure out the path: navigation apps do not follow a preset route but analyse real-time traffic, streaming platforms learn our preferences, and large language models (LLMs) interpret open-ended questions and generate responses rather than retrieving a single stored answer. These systems are not simply following scripts. They are interpreting intent.
The shift may appear incremental, but it represents a profound change in how we live with technology. Traditional software is like a recipe. Follow the steps and the result is predictable. Intelligent systems are closer to a junior colleague given an objective and asked to “handle it.” They analyse data, weigh probabilities, and produce an outcome that seems reasonable given what they have learned. The process is flexible, and often surprisingly effective.
This flexibility explains why AI is spreading so rapidly into daily life. It can adapt to changing conditions, personalise services, and detect patterns too subtle for us to notice. In healthcare, it assists in identifying anomalies in scans. In finance, it flags unusual transactions. In education, it tailors learning material to individual students. The benefits are transformative.
Yet flexibility also means unpredictability.
The central issue is not that intelligent systems will suddenly stop working. The more subtle concern is that they may work exactly as designed while producing outcomes that we did not fully anticipate. When we ask a system to maximise engagement, it may prioritise sensational content. When we train it on historical data, it may inherit historical biases.
We have already seen this dynamic play out in visible and consequential ways. In 2023, lawyers in the US relied on outputs from OpenAI’s ChatGPT to prepare a legal brief, only to discover that the system had generated entirely fabricated case citations. The model was not malfunctioning in the conventional sense. It was doing what it was trained to do: produce plausible, well-formed text based on patterns in data.
In another instance, Gemini’s image generation system faced public criticism for producing historically inaccurate or socially skewed images in response to prompts about specific events or groups. These systems had been carefully tuned to avoid harmful stereotypes. Yet in attempting to optimise for fairness and safety, they overcorrected. Once again, the systems were functioning within their design parameters, but the outcomes triggered debate about judgment, balance, and control.
Closer to everyday life, recommendation algorithms used by major social media platforms have repeatedly been shown to amplify extreme or emotionally charged content because such material drives higher user engagement. In each case, the systems operated as designed. The difficulty lay in the gap between narrow optimisation goals and wider human expectations. As intelligent systems become more deeply embedded in education, healthcare, finance, and public discourse, recognising and managing that gap becomes essential.
How should we manage that gap? The first rule is visibility: we need transparency about how decisions are made. This does not mean that every user must understand complex algorithms, but institutions deploying these systems must be able to explain, review, and audit their behaviour. When outcomes affect livelihoods, access to credit, or public discourse, opacity undermines trust.
The second rule is boundaries. Intelligent systems should operate within clearly defined limits. There must be constraints on what data they can access, what actions they can take autonomously, and when human review is required. Guardrails are not obstacles to progress. They are conditions for safe progress. Just as traffic rules enable mobility rather than restrict it, well-designed constraints allow innovation without chaos.
The third rule is recovery. No intelligent system will be flawless. When mistakes occur, there must be mechanisms to pause, correct, and, if necessary, reverse decisions quickly. In a world where actions unfold at digital speed, the ability to intervene matters enormously.
Ultimately, living with intelligent machines requires a shift in mindset. For decades, we equated reliability with strict predictability. Now we must grow comfortable with systems that adapt and learn, while insisting on accountability and oversight. Control does not mean dictating every step. It means setting clear objectives, monitoring behaviour, and retaining the authority to intervene.
Intelligent machines are becoming woven into the fabric of everyday life. They will help us make decisions, allocate resources, and navigate complexity. The question is not whether we use them, but how we govern them. The new rules of living with intelligent machines are simple in principle but demanding in practice: insist on transparency, enforce boundaries, and always retain the capacity to take back control.