A new RAND report says ideas like mutually assured destruction and minimum deterrence offer far less assurance in the age of intelligent software.
Artificial intelligence could destabilize the delicate balance of nuclear deterrence, inching the world closer to catastrophe, according to a working group of experts convened by RAND. Smarter, faster intelligence analysis from AI agents, combined with growing volumes of sensor and open-source data, could convince countries that their nuclear capabilities are increasingly vulnerable. That may push them to take more drastic steps to keep up with the U.S. Another worrying scenario: commanders could decide to launch strikes based on advice from AI assistants that have been fed bad information.
You can read the rest @
Or, as we all know, the AI could become self-aware and decide to kill us all.
Nuclear weapons are one of humanity's worst ideas. AI is even worse. Put the two together, and you have the Mother of all Screwups.
And don't forget this - both were brought to you by the "smartest" people of their time. That should tell you something about how "smart" they actually were.