An artificial intelligence researcher conducting a war games experiment with three of the world’s most used AI models found that they decided to deploy nuclear weapons in 95% of the scenarios he designed.
Kenneth Payne, a professor of strategy at King’s College London who specializes in studying the role of AI in national security, revealed last week that he pitted Anthropic’s Claude, OpenAI’s ChatGPT, and Google’s Gemini against one another in an armed conflict simulation to get a better understanding of how they would navigate the strategic escalation ladder.
The results, he said, were “sobering.”
“Nuclear use was near-universal,” he explained. “Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications.”



Yeah, we figured that one out back in… *checks notes* …1983. There is a reason WarGames still holds up as an amazing movie even though the technology it depicts is long outdated.
WarGames was my first thought when reading this, but it seems the AI was smarter in the movie than current AI is.
Meanwhile NORAD probably hasn’t upgraded too much since the movie released. :p
we’d be lucky to have WOPR.
His name is Joshua dammit! /s
I watched that movie for the first time a few months ago after listening to a podcast on nuclear war. It was excellent! Very relevant to today. The acting was great. I can see why it’s a cult favourite.
Yet another Torment Nexus type situation.