Nasscom pushes back against DPIIT plan
DPIIT only wanted to carry a summary of Nasscom’s dissent, but upon the trade body’s insistence, the full 24-page written submission was added as an annexure to the report, HT has learnt
Two days after the government released its working paper proposing a hybrid model that mandates a blanket licence for AI training on copyrighted content, India’s IT industry body Nasscom and several big tech firms have pushed back sharply, calling the proposal difficult, if not impossible, to implement in practice.

While most members of the committee set up by the Department for Promotion of Industry and Internal Trade (DPIIT) backed the model, which would require AI companies to pay creators and would not allow rights holders to opt out of AI training, Nasscom has formally dissented, citing technical, economic and enforcement challenges.
In its written submission to the committee, Nasscom, two of whose members sit on the DPIIT committee formed in April, said India should allow Text and Data Mining (TDM) for AI training as long as the content is accessed legally. The apex IT body added that creators should be able to block AI training either through a machine-readable opt-out or contractual restrictions.
Raising concerns about the economics of AI models, a person closely associated with Nasscom said general-purpose AI models like ChatGPT, where token prices have dropped sharply over the past few years due to commoditisation, operate on very thin margins. “If you compare the token rates from 2023 to today, 95% reduction has happened already. Anyway none of these GPT providers are profitable today. In that scenario, mandatory royalties become difficult to sustain,” the person said.
Under the DPIIT committee’s proposal, AI companies would pay royalties into a single, central body instead of negotiating with individual creators. This proposed body, called the Copyright Royalties Collective for AI Training (CRCAT), would collect payments, likely fixed by a government-appointed committee and linked to an AI company’s revenue or scale, and then distribute the money to rightsholders such as authors, artists, publishers and news organisations.
The person quoted above said large language models (LLMs) do not store or reproduce copyrighted content in its original form, but convert text into numerical tokens processed probabilistically. “What exists inside the model are numbers, not words,” the person said, adding that outputs change with every prompt, making it extremely difficult to trace or attribute specific copyrighted content or contest its use.
The person also pointed out that AI developers can further reduce traceability by increasing randomness or adding noise to models, something that is already happening globally after copyright lawsuits against AI companies. This, they said, raises questions about how a royalty or licensing system would be enforced in practice.
Currently, companies such as OpenAI and Perplexity AI are embroiled in lawsuits over training their AI models on copyrighted data.
Big tech companies such as Meta and Google oppose DPIIT’s proposal because it would make it harder and more expensive for them to train their AI models, said another person associated with Nasscom.
An industry executive closely associated with a big tech company, which was consulted by the DPIIT committee, argued that AI actually lowers the cost of making content and helps creators do more. The company also called the rejection of a TDM exception unfair, citing the European Union, Japan and Singapore, which already allow it. Most importantly, it warned that the proposed mandatory blanket licence would make AI development expensive and legally risky, hurt startups, slow AI adoption in India, and expose developers to long legal battles.
Jameela Sahiba, associate director at tech policy think tank The Dialogue, said that the move would hurt India’s startup ecosystem. “Unlike large technology companies, startups operate with limited capital, lean teams, and short runways; introducing royalty-sharing obligations, and expansive compliance requirements substantially raises the fixed cost of entry into AI development…even modest licensing obligations can be challenging at early stages when revenue is irregular or non-existent, and regulatory overheads can quickly eclipse operational budgets,” she said.
An industry executive associated with another big tech company, who was also consulted by the DPIIT committee, told HT that the way the committee was set up also played a huge role in the recommendation. “Everyone knew beforehand that the recommendation would favour publishers...” the executive alleged. The committee’s members include attorney Adarsh Ramanujan, amicus curiae in the OpenAI vs ANI case, and attorney Ameet Datta, who represents the digital news group in the same case.
HT had earlier reported that one member had asked to be removed from the committee, stating that they did not consider themselves an expert on AI or copyright, but remained part of the panel despite writing to DPIIT seeking their withdrawal.
Another gap pointed out by big tech is that the DPIIT committee’s recommendation contradicts the AI governance framework, released on November 5, which asked the DPIIT panel to “consider a balanced approach, which enables Text and Data Mining… while protecting the rights of copyright holders”.
The sub-committee, led by the Principal Scientific Advisor, submitted the framework to the IT ministry last month for consideration. However, the IT ministry has backed the recommendations by the DPIIT committee, according to the working paper. To be sure, the AI governance framework is not legally binding.