Ramaswamy slams Big Tech AI creators amid outcry over Gemini's racial bias - Hindustan Times


Feb 26, 2024 11:14 PM IST

Vivek Ramaswamy called Big Tech AI creators human beings who are “programmed” by the incentive structures created by their employers.

After branding Google's latest AI roll-out 'Gemini' a “global embarrassment” over its historical image generation flaws, Vivek Ramaswamy, the biotech entrepreneur and former presidential hopeful, took a dig at Big Tech AI creators, asserting that they are “programmed” by the incentive structures created by their employers.

Vivek Ramaswamy (AP)

Google is trying to resolve concerns about its new AI-powered tool, Gemini, which had been hailed as a breakthrough in AI image generation, after users accused it of over-correcting and producing historically inaccurate images, labeling it "too woke."


The debate centers on the tool producing images that misrepresent gender and race in historical contexts, such as depicting World War II troops and America's founding fathers as women and people of varied ethnic backgrounds, which deviates from reality.

Ramaswamy weighs in on debate over Google's AI Chatbot Gemini

Ramaswamy was responding to American software engineer and tech investor Marc Andreessen, who termed the "apparently bizarre output" 100 percent intended, adding that Big Tech AI produces its content by precisely executing its designers' ideological, radical, and biased agenda.

“I know it’s hard to believe, but Big Tech AI generates the output it does because it is precisely executing the specific ideological, radical, biased agenda of its creators. The apparently bizarre output is 100% intended. It is working as designed,” Andreessen wrote on billionaire Elon Musk-owned X (formerly Twitter).

“The creators of Big Tech AI are human beings who themselves are “programmed” by the incentive structures created by their employers: cushy high-paying jobs if you say the right things, fired like James Damore if you say the wrong things. It’s two layers of programming at work,” Ramaswamy reacted while reposting Andreessen's post.

Who is James Damore?

Fired Google software engineer James Damore, who authored a sexist manifesto criticising Google's efforts to reduce the gender gap, claimed the search engine company discriminates against conservative white men.

Damore's contentious 10-page document, posted to an internal Google message board in August, claimed that women have a low representation in technology due to "personality differences" between the sexes, not due to workplace discrimination.

Earlier, Ramaswamy commented on the Gemini AI issue, calling the Google AI chatbot "blatantly racist" and blaming the firm for programming its employees "with broken incentives."

“The globally embarrassing rollout of Google’s LLM has proven that James Damore was 100% correct about Google’s descent into an ideological echo chamber. Employees working on Gemini surely realized it was a mistake to make it so blatantly racist, but they likely kept their mouths shut because they didn’t want to get fired like Damore. These companies program their employees with broken incentives, and those employees then program the AI with the same biases," he wrote.

Elon Musk, CEO of Tesla, has previously lambasted the AI chatbot. "I'm glad that Google overplayed their hand with their AI image generation, as it made their insane racist, anti-civilizational programming clear to all," he wrote in a post on X.

What was Google's response?

Jack Krawczyk, senior director of Gemini Experiences, acknowledged the problem, adding that while the tool generates a diverse range of people worldwide, it was "missing the mark" in historical circumstances.

"We're working to improve these kinds of depictions immediately," he said. Google has paused the tool's capacity to generate images of people while it works to fix the errors.

This is not the first time AI has faltered while dealing with real-world diversity issues. Google faced backlash nearly a decade ago when its Photos app incorrectly labeled a photo of a Black couple as "gorillas."

OpenAI, a competitor in the AI space, has also been accused of promoting stereotypes through its DALL-E image generator.
