A group of nuclear weapons designers and scientists at the Lawrence Livermore National Laboratory conducted a what-if experiment several years ago, deploying supercomputers to simulate what happens to a nuclear weapon from the moment it leaves storage to the point when it hits a target.
They methodically worked down a checklist of all the possible conditions that could affect the B-83 strategic nuclear bomb, the most powerful and one of the most modern weapons in the US arsenal, officials said.
Such checks have typically been carried out by taking bombs and warheads apart, scrutinizing the components and examining data from earlier nuclear explosive tests. This time, however, the scientists and designers relied entirely on supercomputer modeling, running huge amounts of code.
Then came a surprise. The computer simulations showed that at a certain point from stockpile to target, the weapon would "fail catastrophically," according to Bruce T. Goodwin, principal associate director at Livermore for weapons programs.
Such a failure would mean that the weapon would not produce the explosive yield expected by the military - either no yield at all, or one quite different from what the mission required.
The episode, details of which remain classified, offers a glimpse into a rarely seen but potentially significant shift in the nuclear weapons era. According to scientists and officials, the US weapons laboratories, armed with some of the fastest computers on the planet, are peering ever deeper into the mystery of how thermonuclear explosions occur, gaining an understanding that in some ways goes beyond what was learned from explosive tests, which ended in 1992.
The Obama administration has said that with computing advances, the United States will never need to resume nuclear explosive testing.
(In exclusive partnership with The Washington Post)