
In revised AI advisory, IT ministry removes requirement for govt permission

Mar 15, 2024 10:04 PM IST

The new advisory issued on Friday supersedes the two-page note issued on March 1 on the due diligence to be carried out by intermediaries and platforms

NEW DELHI: The ministry of electronics and information technology on Friday revised its March 1 advisory to the largest social media companies in the country on the use of artificial intelligence (AI), changing a provision that mandated intermediaries and platforms to get government permission before deploying “under-tested” or “unreliable” AI models and tools in the country.

Figurines with computers and smartphones are seen in front of the words "Artificial Intelligence AI" (REUTERS FILE PHOTO)

The new advisory issued on Friday supersedes the two-page note issued on March 1 on the due diligence to be carried out by intermediaries and platforms under the Information Technology Act, 2000 and Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

HT has seen a copy of the revised advisory.

“This advisory is issued in supersession of advisory eNo.2(4)/2023-CyberLaws-3, dated 1st March, 2024,” said the advisory with the subject “Due diligence by Intermediaries/Platforms under the Information Technology Act, 2000 and Information Technology (Intermediary Guidelines and Digital media Ethics Code) Rules, 2021”.

Under the revised advisory, intermediaries are no longer required to submit an action taken-cum-status report, but they are still required to comply with immediate effect.

The obligations in the revised advisory remain the same but the language has been toned down.

In contrast to the March 1 version, which asked platforms to take the government’s “explicit permission” before deploying AI models, the new advisory said under-tested and unreliable AI models should be made available in India only after they are labelled to inform users of the “possible inherent fallibility or unreliability of the output generated”.

It also said AI models should not be used to share content that is unlawful under any Indian law, and that intermediaries “should ensure” that their AI models and algorithms do not permit any bias or discrimination or threaten the integrity of the electoral process.

The intermediaries have also been advised to use a “consent popup” or similar mechanisms to “explicitly inform users about the unreliability of the output”.

The new advisory retains MeitY’s emphasis on ensuring that all deepfakes and misinformation are easily identifiable. Thus, it has advised intermediaries to either label the content or embed it with a “unique metadata or identifier”. The content can be in the form of audio, visual, text or audio-visual. The government wants the content to be identified “in such a manner that such information may be used potentially as misinformation or deepfake”, even as it has not defined what a “deepfake” is.

MeitY also wants this label, metadata or unique identifier to identify the content as artificially generated, modified or created, and to indicate that the intermediary’s computer resource has been used to make such modification. “Further, in case any changes are made by a user, the metadata should be so configured to enable identification of such user or computer resource that has effected such change,” the revised advisory said.

The advisory no longer bears language related to “first originator”.

The communication has been issued to eight significant social media intermediaries, the same intermediaries that were issued the deepfakes advisory in December 2023 and the now-retracted March 1 advisory. They are Facebook, Instagram, WhatsApp, Google/YouTube (for Gemini), Twitter, Snap, Microsoft/LinkedIn (for OpenAI) and ShareChat. HT has learnt that no advisory was ever issued to Adobe. The advisory has also not been sent to Sarvam AI and Ola’s Krutrim AI.

The March 1 advisory was sharply criticised, with many start-up founders calling it a bad move. Aravind Srinivas, the CEO of Perplexity, had called it a “bad move by India” in a post on X.

To be sure, a “platform” is a term that has neither been used nor defined in either the IT Act or the IT Rules, 2021. While the revised advisory also attempts to set up guardrails around large language models and AI models used by large social media platforms, the models themselves are not intermediaries or significant social media intermediaries, that is, social media companies with more than 5 million users in India.

There is significant ambiguity about whether an advisory is legally binding. On March 4, IT minister Ashwini Vaishnaw had said at an event, “This is not a regulatory framework. It is an advisory that you should test your model before launching.”

On March 7, IT secretary S. Krishnan had said that the advisory was issued because the generative AI tools were not operating in the same way across jurisdictions “as similar sensitive socio-political questions are answered very differently, depending on which jurisdiction they are asked in”. He said it was particularly crucial ahead of elections. “And clearly in a year, in a politically sensitive year, in a relatively surcharged environment, this can really cause issues,” he said.

The March 1 advisory said that non-compliance with IT Act and/or IT Rules would result in “potential penal consequences”. The revised advisory states, “It is reiterated that non-compliance with the provisions of the IT Act 2000 and/or IT Rules would result in consequences including but not limited to prosecution under the IT Act 2000 and other criminal laws, for intermediaries, platforms and their users”.
