An artificial intelligence researcher conducting a war games experiment with three of the world’s most used AI models found that they decided to deploy nuclear weapons in 95% of the scenarios he designed.
Kenneth Payne, a professor of strategy at King’s College London who specializes in studying the role of AI in national security, revealed last week that he pitted Anthropic’s Claude, OpenAI’s ChatGPT, and Google’s Gemini against one another in an armed conflict simulation to get a better understanding of how they would navigate the strategic escalation ladder.
The results, he said, were “sobering.”
“Nuclear use was near-universal,” he explained. “Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications.”



Typical topics: machine vision, scientific papers about machine vision, source code implementing various machine vision algorithms, etc.
Typical failure modes:
Typical methods of asking: “can you find a scientific article explaining the use of method A”, “can you find a repository implementing algorithm B, preferably in language C”, “please locate or produce a plain language explanation of how algorithm D accomplishes step E or feature F”, “yes, please suggest which functions perform this work in this project / repository”.
Typical models used: ChatGPT and Claude. ChatGPT seems more overconfident; Claude admits limitations or inability more frequently, though not as frequently as I would prefer to see.
But both have consumed an incredible amount of source material, more than I could read in a geological age. They treat it like any other text: no ground truth, no perception of what is real. Their job is answering questions, and when there is no good answer, they will frequently still produce something that merely sounds probable.