UNESCO report urges India to review AI risks, environmental impact
UNESCO report urges India to review AI risks, assess environmental impact and strengthen governance as AI adoption accelerates
New Delhi: A United Nations Educational, Scientific and Cultural Organisation (Unesco) assessment of India’s artificial intelligence (AI) preparedness has recommended a nationwide review of AI-related risks and a legal gap analysis to examine whether existing laws are sufficient to address emerging harms, according to a report released on Monday at the AI Impact Summit in New Delhi.

The national assessment, carried out under Unesco’s Readiness Assessment Methodology (RAM), evaluates India’s preparedness to implement the UN body’s 2021 Recommendation on the Ethics of Artificial Intelligence and sets out several policy directions for the government.
The report states that AI in India must be developed and deployed “in a manner that is ethical, safe, inclusive and accountable”, and frames the proposed measures as building on existing initiatives under the government’s flagship India AI Mission.
The India report is based on consultations with more than 600 experts, including public officials, academics, start-ups, technology firms, civil society organisations and think tanks, conducted between November 2024 and June 2025.
“This RAM is not merely a technical order, it is an analytical tool,” said Tim Curtis, director and Unesco representative for the Unesco Regional Office in New Delhi, adding that the body has conducted this assessment in more than 70 countries. Also present at the launch at Bharat Mandapam were IT ministry secretary S Krishnan and principal scientific adviser Ajay Kumar Sood. The summit continues until February 20.
Risk mapping and legal review
One of the key recommendations in the report is a proposal for a comprehensive AI risk-mapping exercise. The report says India’s growing use of AI across sectors brings “a growing range of complex risks” and warns that without a standard framework to classify and assess them, governance efforts could become fragmented.
It proposes that the AI Safety Institute, under the India AI Mission, undertake a cross-sector study to map emerging and existing AI risks and develop a shared taxonomy of harms. This could be supported by an AI incident repository to document real-world cases of AI failures or harm.
The report also recommends a legal gap analysis to examine how existing frameworks, including the Information Technology (IT) Act and the Digital Personal Data Protection Act, apply to AI-related risks. The recommendation comes after the IT ministry amended the IT Rules to require social media intermediaries to remove unlawful material within three hours, down from 36 hours, and mandated the labelling of AI-generated content.
IT secretary Krishnan called the report a “report card” on some of the initiatives taken up by the government such as the India AI Mission. “It would undoubtedly help to inform what we in India are attempting to do in the AI space. Fortunately, the way we have designed the India AI Mission is flexible and we can accommodate any mid-course corrections and changes we need to do,” he said.
Environmental sustainability is another key focus of the assessment, with the report recommending that the ministry of environment, forest and climate change undertake a comprehensive study on the environmental costs of AI systems, including energy and water use. It also suggests expanding environmental impact assessment frameworks to cover large-scale AI infrastructure such as data centres.
India has secured investment commitments worth $90 billion, a figure government officials have previously said is expected to double by the conclusion of the AI Impact Summit.
On data, the assessment highlights the need to strengthen the AIKosh platform to improve access to high-quality, representative datasets for AI development. It states that datasets must reflect India’s linguistic and socio-economic diversity to mitigate bias and improve inclusiveness.
