OpenAI's GPT-4: A game-changer in the era of AI, but its 'hallucinations' make it flawed
OpenAI's GPT-4 has left researchers and academics stunned with its advanced capabilities in interpreting text and images to provide accurate responses to tricky questions. However, the technology's flaws cannot be ignored: it can make factual errors, generate harmful content and spread disinformation that suits its biases.
The world is witnessing a paradigm shift towards a new era of machine intelligence, and if your mind hasn't been flabbergasted by its possibilities, you are not paying attention. A new revolution has arrived, with technology on the precipice of permanently reshaping society. Whether it is for the better, or will give birth to a dystopian reality, is a question only time will answer. For now, a technology still in its nascent stage has filled an entire generation with anxiety that the future may look very little like the past.
The skills of the newly launched GPT-4, the latest product from OpenAI, arriving months after its game-changing tool ChatGPT sent tremors across the world, are overwhelming researchers and academics, and we still don't know its full potential. One tester wrote that GPT-4 had caused him an 'existential crisis' because its intelligence seemed far more powerful than his own 'dwarfish brain'. Within a couple of days of its launch, GPT-4 had aced some of America's toughest examinations, including the Uniform Bar Exam, the Biology Olympiad and the LSAT, with performance pegged higher than that of 90 per cent of human test takers. With stronger reasoning capabilities and wider knowledge, it can now study an image to provide answers. You can sense its improved sophistication when it gives accurate responses to tricky questions and cracks better jokes.
GPT-4 has overwhelmed the entire world with its superhuman capabilities
According to OpenAI, GPT-4 is more capable and accurate than ChatGPT and can produce astonishingly accurate solutions to a variety of tests. It is multimodal, so it can interpret both text and images to solve queries. Microsoft is using it to revolutionise its search engine Bing, payments company Stripe is using it to fight payments fraud, educator Khan Academy is using it to create personalised learning experiences for students, and Morgan Stanley will use it to help guide its bankers and their clients.
GPT-4 is an enabler, with a wave of startups claiming to use its secret recipe to create new products and improve the operational effectiveness of their businesses, promising to revolutionise legal administration, medical diagnosis, academic research, marketing strategy and even mundane chores. At the forefront of this push are the tech giants Microsoft and Google, fighting to dominate the world wide web by using generative AI to transform search engines.
However, this disruptive technology is also seen as a threat: if it does it all, what will be left for us humans to do? 'The worst A.I. risks are the ones we can't anticipate. And the more time I spend with A.I. systems like GPT-4, the less I'm convinced that we know half of what's coming,' states Kevin Roose in an opinion piece in The New York Times. But Professor Charlie Beckett, founding director of Polis, differs in his column in The Guardian: 'AI is not about the total automation of content production from start to finish: it is about augmentation to give professionals and creatives the tools to work faster, freeing them up to spend more time on what humans do best.'
The improved version of ChatGPT hasn't overcome hallucinations
'Hallucinations' are a big challenge GPT-4 has not been able to overcome: it makes things up. It makes factual errors, creates harmful content and has the potential to spread disinformation that suits its biases. 'We spent six months making GPT-4 safer and more aligned. It is 82 percent less likely to respond to requests for disallowed content and 40 percent more likely to produce factual responses,' OpenAI has claimed. Its founder Sam Altman admits that, despite the anticipation, GPT-4 'is still flawed, still limited, but it still seems more impressive on first use than it does after you spend more time with it.'
Amidst the fascinating results, the flaws can't be ignored. 'Any Large Language Model is in a sense the child of the texts on which it is trained. If the bot learns to lie, it's because it has come to understand from those texts that human beings often use lies to get their way. The sins of the bots are coming to resemble the sins of their creators,' writes Stephen L. Carter, a Bloomberg Opinion columnist.