As the world grapples with deepfakes, AI companies agree to a set of principles - Hindustan Times


Apr 24, 2024 11:51 AM IST

Top AI companies including Meta, Google, Anthropic, Microsoft, OpenAI, Stability AI and Mistral AI will rework training data sets, develop detection techniques and withhold new models until they have been evaluated

Could this be the solution, even if only in part, to the surge of artificial intelligence (AI) generated deepfakes and child sexual abuse material? As AI tools continue to improve at a rapid pace, bad actors are using them to create convincingly ‘real’ manipulated content. Thorn and All Tech Is Human, both non-profits, have managed to bring AI companies to the table in an attempt to create new AI standards, particularly for the safety of children.

Examples of how AI is used to distort images. (Official handout.)

The principles include a relook at the training data used for AI models, as well as watermarking AI generations and developing new detection solutions that will prevent generative tools from creating AI-generated child sexual abuse material, or AIG-CSAM. So far, 11 tech companies have signed up, including Meta, Google, Anthropic, Microsoft, OpenAI, Stability AI and Mistral AI.


“We find ourselves in a rare moment, a window of opportunity, to still go down the right path with generative AI and ensure children are protected as the technology is built,” says Thorn, in a statement about the “Safety by Design for Generative AI” principles. The intent is to widen the scope. “Over the coming weeks, we will be adding additional companies to the list of key industry players committing to the Safety by Design generative AI principles,” says David Polgar, Founder & President at All Tech Is Human.

The window of opportunity that Thorn is referring to may be closing fast. A recent illustration came in March, when a review of Meta’s ad library, prompted by ads on Facebook and Instagram that displayed a blurred, fake nude image of a young celebrity, pointed to an app called Perky AI.

“It’s not science fiction coming at some point in the future, possibly or hypothetically,” summarised Sen. Richard Blumenthal (D-Conn.), chairman of the committee, at a hearing of the US Senate’s subcommittee on privacy, technology and the law earlier this month. The Perky AI ads were discussed at that hearing.

Recently, Microsoft Bing and Google Search were found to be showing deepfake images as part of search results for specific search phrases. Later, both companies confirmed the materials were identified and removed.

A critical principle that AI companies have signed up for is a serious relook at the data sets used to train AI models. Core to this is early removal of CSAM and child sexual exploitation material (CSEM) from these data sets. Meta, Microsoft and others are targeting risks to children, “alongside adult sexual content in our video, images and audio generation training datasets.”

“This commitment marks a significant step forward in preventing the misuse of AI technologies to create or spread child sexual abuse material and other forms of sexual harm against children,” says Courtney Gregoire, Microsoft’s chief digital safety officer.

In December, researchers from Stanford University’s Internet Observatory said they had found more than 1,000 images of child exploitation in a popular open-source image database called LAION-5B, which is used to train generative AI tools such as the popular and incredibly realistic text-to-image generator Stable Diffusion 1.5. Though Stability AI did not create or manage that database, it was immediately removed from the training process.

AI companies understand it is important to distinguish generated content from real content, and to trace the source of offensive content created by bad actors with intent to harm. Meta, Microsoft, Google and others insist they are working to deploy solutions that embed signals in the content as part of the process of generating an image, audio clip or video.

Stability AI, in its commitment, says the intention is to “disallow the use of generative AI to deceive others for the purpose of sexually harming children, and explicitly ban AIG-CSAM from our platforms.”

The other solution is watermarking AI-generated content to identify the creator and source. There has been progress on that front. Meta and OpenAI, for instance, confirmed earlier this year that all generations on their platforms will include watermarks or labels. “This action aids in distinguishing between human and synthetic content, crucial for safeguarding user privacy and combating the proliferation of deepfakes,” Nilesh Tribhuvann, Founder and Managing Director, White & Brief Advocates & Solicitors, told HT at the time.
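To make the idea of an embedded signal concrete, here is a purely illustrative sketch of the simplest possible invisible watermark, hiding a short bit string in the least-significant bits of an image's pixels. This is a textbook technique for demonstration only; the companies named above have not disclosed their exact methods, which are understood to be far more robust to cropping, compression and editing.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide a bit string in the least-significant bits of a pixel array.

    Each watermark bit replaces the lowest bit of one pixel, changing
    its value by at most 1, which is invisible to the human eye.
    """
    flat = pixels.flatten()  # flatten() returns a copy, original is untouched
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # clear the low bit, then set it
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, length: int) -> list[int]:
    """Read back the low bit of the first `length` pixels."""
    flat = pixels.flatten()
    return [int(flat[i] & 1) for i in range(length)]

# Toy 4x4 grayscale "image" with a hypothetical 8-bit provenance signal
image = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_watermark(image, mark)
recovered = extract_watermark(stamped, len(mark))
```

A detector holding the expected signal can compare `recovered` against it to flag the image as machine-generated; the weakness of this naive scheme, and the reason real deployments use sturdier approaches, is that re-encoding or resizing the image destroys the low bits.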

There is also the Adobe-led Coalition for Content Provenance and Authenticity (C2PA), whose members include Google, Microsoft, Intel, Leica, Nikon, Sony and Amazon Web Services, and which is pushing the case for “Content Credentials” attached to every piece of generated content.
