
Entering the era of enforceable AI governance

This article is authored by Avimukt Dar, founding partner and Anushka Narayan, associate, CMS INDUSLAW.

Published on: Apr 08, 2026 12:42 PM IST

There is a growing belief that AI could drastically improve human life, not just by making businesses more efficient but by making people better off in real, everyday ways. Elon Musk claims that AI and robots could make everyone extraordinarily wealthy, citing examples of superhuman medical care and better entertainment than anything we have today. This rests on an assumption as old as science fiction itself: that AI will serve humans without harming them.

AI (Photo for representational purposes only) (Unsplash)

At the recent India AI Summit, that optimism was present, but with a sharper focus on institutional design. India is no longer at the stage of asking whether it should adopt AI; it is grappling with how to regulate it responsibly. What emerged clearly from the Summit is that governance is an essential precondition for the deployment of AI, marking a shift from aspirational to enforceable AI governance. To genuinely harness the power of AI at scale, systems must rest on governance frameworks designed to work for people safely and fairly. Data protection, accountability, transparency and overall compliance must be built into AI systems from the outset. This is particularly relevant in the Indian context, where the vision of AI for All has been central to the policy conversation and the country’s approach has grown increasingly utilitarian.

AI has been instrumental in the development of various sectors and is used in critical products and services ranging from credit underwriting to content moderation and even electoral discourse. However, widespread adoption has unfortunately also led to the rise of deepfakes, algorithmic bias, opaque automated decision-making and large-scale processing of personal data. As India seeks to establish itself as a global AI leader, these risks have become impossible to ignore. Policymakers’ approach to regulating AI has thereby evolved from broad ethical principles to enforceable mechanisms and accountability frameworks. India is now entering a new phase of AI governance, in which the central question is no longer what responsible AI should look like, but who will be held responsible when AI systems fail.

In 2018, NITI Aayog published the National Strategy for Artificial Intelligence, highlighting AI’s potential to promote economic development and solve socio-economic challenges through inclusivity. The alliterative slogan AI for All was thus adopted. Since then, NITI Aayog has identified principles for responsible AI such as safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, and accountability. It also recommended a risk-based approach: the greater the harm, the greater the regulatory scrutiny.

While India currently does not have AI-specific legislation, regulators have recognised that applying existing frameworks may not sufficiently address the technology’s unique nature. Historically, the law has often struggled to keep pace with the rapid evolution of technology. Because AI is still an ever-evolving field whose risks are not yet fully understood, it is difficult for policymakers to settle on long-term regulatory approaches.

The Ministry of Electronics and Information Technology (MeitY) has instead relied on the existing IT framework, issuing advisories in response to the rapid spread of misinformation and deepfakes. These clarified that intermediaries must not permit the use of AI models to host or disseminate unlawful content, and emphasised that non-compliance could attract consequences under the Information Technology Act, 2000 and other criminal laws. While not binding in themselves, these advisories mark a move towards real accountability, backed by potential liability.

In the financial sector, the Reserve Bank of India constituted the Framework for Responsible and Ethical Enablement of Artificial Intelligence Committee, which focused on operationalising principles and recommended issuing consolidated AI guidance built on a risk-based approach. The Securities and Exchange Board of India has proposed amendments to affix sole responsibility for the consequences of AI use and has made recommendations on responsible AI/ML usage in securities markets. In the telecom sector, the Telecom Regulatory Authority of India has called for binding legal standards instead of reliance on industry self-regulation, which lacks enforcement mechanisms. Entities deploying AI cannot simply contract out of responsibility for its outcomes. Regulators are focusing on embedded due diligence, reporting requirements, oversight mechanisms and graded liability approaches within their respective frameworks.

AI systems also use vast amounts of training data, including personal data, raising significant privacy-related concerns. Without adequate safeguards, this runs the risk of collecting unknown quantities of personal data without an individual’s consent or even their knowledge. The Digital Personal Data Protection Act, while not AI-specific, applies to fully or partly automated processing of personal data, thereby covering most AI-driven personal data use cases. Entities responsible for AI usage will have to comply with the obligations applicable to data fiduciaries, such as seeking granular prior consent for specified purposes, implementing reasonable security safeguards and measures against personal data breaches, and enabling the right to erasure. In this regard, enforceable guardrails for AI systems are beginning to take shape.

AI remains a national priority, most visibly through the IndiaAI Mission, which is guided by the vision of Making AI in India and Making AI Work for India, seeking to strengthen infrastructure, support AI-based startups and help develop large datasets for training AI models. The idea is not to regulate AI by restricting its growth at the outset but to allow its controlled deployment while retaining the ability to impose more stringent obligations once systems scale or begin to materially impact consumers. AI governance is becoming a core compliance and risk management function; it is therefore important for companies not only to have the technological capability to build and deploy AI systems but also to demonstrate compliance.

Most recently, MeitY amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to specifically control how AI-generated media spreads online. The revised guidelines dig deeper by focusing on proactive detection measures, including removal of unlawful AI-generated content, suspension or termination of user accounts, identification and disclosure of the violating user’s identity to a complainant, mandatory reporting requirements, and reasonable and appropriate technical safeguards. Additional obligations apply to significant social media intermediaries, such as mandatory user declarations prior to publication, verification of those declarations, and labelling of confirmed synthetic content.

The evolution of AI governance in India is a steady move towards clearer allocation of responsibility, keeping consumer welfare and national security at the forefront. The current approach indicates that AI governance may not be confined to a single legislation or regulator but will emerge through the intersection of data protection law, sector-specific regulations, and intermediary obligations. As AI systems scale, courts and regulators may increasingly examine whether automated processes (especially those used to make decisions about individuals) meet standards of reasonableness, proportionality and fairness. In that sense, enforceable AI governance in India is not just about managing technological risk but about embedding accountability into all systems.