AI regulatory roadmap unveiled
A committee recommends using existing regulations for AI governance in India while addressing legal gaps, proposing a new agency for oversight and policy coordination.
A government-appointed committee has advocated using existing regulations to address risks arising from the use of Artificial Intelligence in India, but has also pointed out the need to eventually review laws and rules to fix gaps, particularly around so-called intermediaries. It has laid out a regulatory framework with a new apex agency to coordinate policy across ministries.
In its India AI Governance Guidelines report submitted to the Ministry of Electronics and Information Technology (MeitY) on Wednesday, the panel headed by Balaraman Ravindran, Professor at IIT Madras, said the AI Governance Group (AIGG) should serve as a permanent inter-ministerial body to steer AI governance efforts in the country. The AIGG will be headed by the Principal Scientific Advisor to the Government of India and have government agencies, regulators and advisory bodies as members. MeitY should be the nodal ministry for AI governance, while individual ministries and regulators would be responsible for governance in their respective fields, the report said.
Establishing the AIGG as a nodal agency has been identified as a short-term priority in the committee’s action plan, along with conducting “a regulatory gap analysis and suggesting appropriate legal amendments and rules.”
In the medium term, the committee recommended, the government should “amend laws, as may be needed, to address regulatory gaps.” Over the long term, it advised the adoption of “new laws to account for emerging risks and capabilities”. However, as the situation stands today, officials said, the government is of the view that no new law needs to be brought in to regulate AI, with existing laws sufficient to mitigate emerging risks.
Nevertheless, the committee flagged the need to update a few laws such as the Information Technology (IT) Act, particularly the classification of intermediaries. “There is a need to provide clarity, especially with regard to how this definition would apply to modern AI systems, some of which generate data based on user prompts or even autonomously, and which refine their outputs through continuous learning,” said the report.
The report said under the IT Act, it must be clearly defined how liability will apply to AI systems. It noted that Section 79 of the Act gives intermediaries legal protection for third-party content as long as they do not create, modify or select that content. However, many modern AI systems generate or alter content autonomously, meaning this protection may not apply. The committee suggested clarifying how such systems are classified, what their obligations are, and how responsibility should be shared between developers, deployers and other entities in the AI value chain.
The report also called for a review of the Digital Personal Data Protection (DPDP) Act to address emerging AI risks. It highlighted the need to clarify “the scope and applicability of exemptions available for the training of AI models on publicly available personal data” and to examine whether the law’s existing data collection and consent rules align with how AI systems operate. The committee recommends a detailed review by the proposed AIGG.
On copyright, the report pointed to ongoing work under a different committee by the Department for Promotion of Industry and Internal Trade (DPIIT), which is examining the use of copyrighted material to train AI models. “The applicability of existing copyright provisions to AI-generated works and text-and-data mining for model training” may require clarification, the committee observed. HT has learnt that the seven-member DPIIT committee has already submitted its report to the department’s secretary for review.
The committee also recommended that the proposed AIGG, supported by a Technology and Policy Expert Committee (TPEC), review India’s regulatory framework for content authentication and suggest “appropriate techno-legal solutions and additional legal measures if necessary in order to tackle the problem of AI-generated deepfakes.”
Meanwhile, the report recommends that the recently established AI Safety Institute (AISI) be involved in testing and evaluating AI systems for potential risks, advising policymakers and industry on AI safety, and supporting ongoing work under the IndiaAI Mission on technical solutions such as machine unlearning, bias mitigation, privacy-enhancing tools and explainable AI.
Reiterating the government’s stance, MeitY Secretary S Krishnan said, “To the extent possible, we will rely on existing legislation and existing measures. That is repeatedly what we have done at various points in time.” He added that even before the report was finalised, the ministry had acted on synthetic content, introducing light-touch regulation through amendments to IT rules that require AI tools and social media platforms to label AI-generated content. “People have a right to know that the content is synthetically generated,” Krishnan said.
“These guidelines should enable us to build a more adaptive ecosystem and a regulatory environment that allows innovation to thrive while enabling responsible AI,” said Ravindran, the chair of the committee. To broaden participation in AI, he said the aim is “to integrate AI with the Digital Public Infrastructure for better governance and to incentivise MSME adoption and innovation in AI technologies so that AI doesn’t remain in the prison of large companies.”