
Scientifically Speaking | In defence of AI… and human intervention

By Anirban Mahapatra
Oct 04, 2023 01:18 PM IST

The possibilities of AI assisting in scientific discovery are endless. But what happens when it’s confronted with the biases of the past, or with revolutionary ideas it has never encountered?

In the classic 1983 hacker movie WarGames, a supercomputer shields itself from human intervention. As the situation escalates, the computer, unable to differentiate between simulation and reality, pushes the world to the brink of World War III. In a desperate bid to prevent nuclear war, the protagonists make the computer play a game against itself, forcing it to grasp the concept of a no-win scenario and to concede that the only winning move is not to play. The crisis is averted.

Undoubtedly, science is undergoing a seismic shift (Photo by Markus Winkler on Unsplash)

Last week, while delivering a keynote talk on the future of science publishing at the Publisherspeak conference in Washington DC, I pointed to WarGames in a slide as a cautionary backdrop. It’s not difficult to imagine a scenario in which the humans involved in scientific discovery and dissemination take a hands-off approach. Artificial intelligence (AI) can do the research, write the paper, find the right journal, and submit it. Other AI systems can then review the paper, accept it, and publish it without any human ever seeing it.

Undoubtedly, science is undergoing a seismic shift. AI can autonomously collect and analyse vast datasets, discern patterns that might elude even the most astute human researchers, and synthesise these findings into coherent scientific discoveries. AI can serve as a writing assistant to find holes in logic or pose alternative theories and explanations for results. The efficiency and speed with which it does these activities are groundbreaking.

A cover story in The Economist, published on September 14, extolled the prospects of AI in turbocharging scientific progress. The article described how AI is permeating nearly every scientific field, though to varying degrees: for instance, 7.2% of physics and astronomy papers in 2022 incorporated AI, compared to 1.4% of those in veterinary science.

The article points to two examples of how AI can expand the capacity of science to make new discoveries: the use of AI language analysis (like ChatGPT) to mine existing scientific articles for overlooked hypotheses or connections, and “robot scientists” that use AI to hypothesise and conduct thousands of experiments autonomously.

To be clear, I’m optimistic about the prospects of AI fuelling scientific progress. I enthusiastically agree that new tools and approaches can accelerate scientific discovery and innovation. Certainly, the number of scientists using AI in 2023 is greater than the figures quoted for 2022 in the article (ChatGPT, the most popular generative AI tool, was launched only late last year, on November 30). More people will use AI in the coming years.

Given the rapid pace at which AI is advancing, we can assume that in some scientific fields, it will be technically feasible for AI to autonomously perform all the roles in science from discovery to dissemination. But my argument is that allowing AI to do all the work for us would be a mistake. There’s a line between using AI as a tool and using it to remove humans from the scientific process.

Scientists communicate findings principally through publication in peer-reviewed scientific journals. There are, of course, a few exceptions. In the corporate sector, where intellectual property is a concern, products may be launched with a patent instead of a scientific paper. In certain fields, findings may also be posted on preprint servers in advance of publication. But for nearly 350 years, peer review has been a way to establish novelty, accuracy, proof of priority, and importance.

Substituting “peer review” for “democracy” in Winston Churchill’s quote is useful: “No one pretends that peer review is perfect or all-wise. Indeed, it has been said that peer review is the worst… except for all those other forms that have been tried from time to time.”

For all its flaws, peer review vetted the structure of DNA and the existence of the Higgs boson. Here, too, AI can help revolutionise the process by matching articles with appropriate journals, predicting acceptance chances, and finding reviewers with suitable expertise. AI can even help test competing hypotheses and write articles.

In my keynote, I cautioned that AI will rapidly transition from being a powerful tool that assists in scientific discovery to crossing the threshold into decision-making without human input or intervention. As we inch closer to a fully automated discovery process, we face a looming question: If we fully rely on AI, do we risk sidelining the human creativity and insight that have been the bedrock of scientific breakthroughs for centuries?

The convenience of leaving science to the machines is undeniable (we could all just sit on beaches and drink margaritas!), but the pitfalls of over-reliance on automation are evident. It is true that AI will make many of us more productive. It is equally true that, in the relentless pursuit of efficiency, many jobs that humans currently do will be replaced by AI. There will have to be a balance between AI and human intelligence to ensure that the essence of science, driven by human curiosity, creativity, and intuition, remains intact.

AI systems are trained on existing data. These systems, by prioritising existing knowledge, might falter when faced with revolutionary ideas that don't fit current paradigms. Their dependence on historical data might sideline groundbreaking research. Moreover, there's a risk that relying solely on algorithms to decide what’s important or correct might deter human authors from exploring unique or unconventional topics, leading to a homogenised research landscape.

Additionally, these systems, if not carefully designed, can inherit and perpetuate biases from the data they are trained on, leading to skewed outcomes. The challenges are manifold: loss of human creativity, potential biases in AI systems, and the risk of sidelining unconventional topics.

While AI makes information cheap and easy to produce, there's an irreplaceable value in expert curation by human peers. AI cannot inherently distinguish between information and misinformation.

Finally, there's a social and behavioural aspect to how scientists work and interact, which goes beyond mere technical capability. Science, like every other aspect of society, is a deeply personal endeavour. People with diverse perspectives must decide what scientific problems are worth pursuing. We cannot leave it all to the machines.

Anirban Mahapatra is a scientist by training and the author of a popular science book on COVID-19. The views expressed are personal.
