
We only realise benefits of AI, if we’re aware of risks: Professor Arvind Narayanan

New Delhi
Nov 16, 2024 07:24 AM IST

Speaking at a virtual session of the 22nd Hindustan Times Leadership Summit, Arvind Narayanan outlined several critical concerns about AI’s expanding role.

The rapid rise of artificial intelligence (AI), with millions using generative tools like chatbots and image generators, comes with a hefty price tag. Princeton University professor Arvind Narayanan, in his book AI Snake Oil, calls it a “major societal problem” as consumers remain largely unaware of AI’s inevitable pitfalls.

Arvind Narayanan, professor of Computer Science, Princeton University at the Hindustan Times Leadership Summit 2024 on Friday. (HT Photo)

Speaking at a virtual session of the 22nd Hindustan Times Leadership Summit, Narayanan outlined several critical concerns about AI’s expanding role, including how to mitigate risks, the need to guard against having AI make consequential decisions about loans, jobs and criminal justice and, crucially, the need for institutional safeguards and regulation, especially for issues like deepfakes.

“AI being widely available to consumers isn’t necessarily bad. For the first time, anyone can access powerful AI systems previously available only to companies and governments. That’s largely positive. But, we can only realise the benefits if we’re very aware of the risks,” said Narayanan.

Narayanan’s research work revolves around tempering what many see as reckless cheerleading of new technologies without adequate safeguards. On the AI front, Narayanan believes that as is natural with anything new, problems are not immediately clear to everyone.

“People need better information about these systems’ limitations, and companies need to be more transparent about them. For risks like deepfakes, we need regulation. It’s not enough to expect responsible use — there will always be bad actors,” he said.

The use of deepfakes to make non-consensual nude images, predominantly targeting women, is an example of how unprecedented harms can be perpetrated. “People are using AI to create non-consensual nude images, affecting hundreds of thousands of women in every country. Many such misuses are possible. While it’s primarily individual responsibility, companies should improve their products,” he added.

He highlighted another concern: predictive AI. Unlike the technology shown in the film Minority Report, real-world applications are problematic. “Predictive AI makes consequential decisions about people applying for jobs, loans, or in criminal justice systems. This dubious AI predicts who will commit crimes or repay loans. These predictions are difficult, and the systems are being used unjustly,” he said.

He emphasised: “I think as a society, we should be really careful about them. We need more regulation. We need companies to think more carefully about how they’re using these systems.”

On whether AI tools can overcome bias, hallucinations and factual inaccuracies, Narayanan noted progress in tackling bias through broader, more diverse training data. “The hallucination problem has been harder to solve,” he said, adding that chatbots retrieving and summarising web information, rather than answering from memory, reduce hallucinations but don’t eliminate them.

Narayanan underscored that the problem of incorrect information from generative AI — which can arise for a variety of reasons, including a prompt that lacks sufficient context, the AI’s outdated data set or pure hallucination — is “getting out of hand”, and that the counter must come from a collective effort.

The timeline for completely solving these issues remains unclear. “I don’t know when it’ll be solved, if ever, so we must be more careful,” Narayanan added.

While countries discuss AI regulation, currently only self-regulation exists. Adobe’s Content Credentials initiative, tracking generated content’s ownership and details, has gained support from Microsoft, Qualcomm, Leica, Nikon and Shutterstock.

“Regulating AI is possible, but we must remember AI isn’t just one thing,” Narayanan said.

Beyond generative AI, there’s AI powering social media feeds, self-driving cars and more, and regulation must be approached as a set of separate problems. “When we think about all of these separately, and we think about regulation separately, I think that becomes a much more tractable problem than just asking, how can we do AI regulation.”

He cited how some of this already works: “Self-driving cars are already heavily regulated in many regions. Banking AI faces regulation because banking itself is regulated. The best approach isn’t focusing on AI specifically, but on the harms we’re concerned about.”

Whether regulators will adopt this approach remains to be seen.
