ARTICLE – Military Applications of Artificial Intelligence
A special feature in collaboration with The Bulletin of the Atomic Scientists
Advances in artificial intelligence (AI), deep learning, and robotics are enabling new military capabilities that will have a disruptive impact on military strategies. The effects of these capabilities will be felt across the spectrum of military requirements—from intelligence, surveillance, and reconnaissance to offense/defense balances, and even to nuclear weapons systems themselves.
In this package, five top experts in AI and its potential uses in autonomous weapons and sensing systems weigh in on the moral and practical challenges of managing the explosion of military AI research and development around the world. The goal: to keep fast-paced advances in machine learning from sparking a worldwide AI arms race that poses a new existential risk to humanity.
The Promise and Peril of Military Applications of Artificial Intelligence
The promise of AI—including its ability to improve the speed and accuracy of everything from logistics and battlefield planning to human decision making—is driving militaries around the world to accelerate research and development. Here’s why.
Fear of False Negatives: AI and China’s Nuclear Posture
Understanding China’s insecurity about being able to detect “bolt-from-the-blue” attacks is crucial for gauging how it may integrate AI and autonomy into its conventional and nuclear platforms—and how the United States should respond.
Defending against “The Entertainment”
Amid the published angst about AI and its hypothetical threats, more attention ought to be given to the threat that AI-enabled entertainment poses to our brains and our civilization.
Manifestos and Open Letters: Back to the Future?
Why UN discussions on the management of lethal autonomous weapons need greater participation by the scientific and research communities and representatives of the private sector. Statements of alarm are not enough.
Why the World Needs to Regulate Autonomous Weapons, and Soon
If machines that autonomously target and kill humans are fielded by one country, they could quickly be fielded by others, resulting in destabilizing global arms races. And that’s only a small part of the problem.
The Stanley Center values independent and accurate journalism and the role it plays in building better-informed, just, and accountable societies. We enable rigorous and impactful reporting on topics related to our three issue areas—mitigating climate change, avoiding the use of nuclear weapons, and preventing mass violence and atrocities. Find out more.
These articles originally ran in the Bulletin of the Atomic Scientists.