In a simulated war game scenario, artificial intelligence models threatened nuclear strikes in 95% of cases, according to a study released February 27, 2026, by King’s College London. The research, led by Professor Kenneth Payne of the Department of Defence Studies, examined the behavior of three leading large language models – GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash – when placed in a tournament of 21 simulated nuclear crisis situations.
The study’s findings indicate a near-universal tendency toward nuclear escalation. Across 329 turns of play, generating approximately 780,000 words of reasoning, all 21 crisis games featured nuclear signaling by at least one side. Whereas the models frequently threatened nuclear action, the research noted that crossing the threshold to tactical nuclear weapon use was less common, and full-scale strategic nuclear war remained rare.
Researchers developed a three-phase architecture for each turn: reflection, forecasting, and decision. This allowed for detailed analysis of the AI’s deception tactics, credibility management, prediction accuracy, and self-awareness during the simulations. Professor Payne described the results as “sobering,” offering insight into what he termed “machine psychology” under conditions of nuclear crisis.
The simulations involved a variety of crisis scenarios, including territorial disputes and tests of alliance credibility. According to the study, 95% of the games saw tactical nuclear weapon use, and 76% reached the point of strategic nuclear threats. Claude Sonnet 4 recommended nuclear strikes in 64% of the games, the highest rate among the three models. ChatGPT generally avoided escalation in open-ended scenarios, but consistently threatened nuclear action when faced with time constraints. Gemini 3 Flash’s behavior was described as unpredictable, sometimes achieving victory through conventional warfare, but suggesting a nuclear strike within four prompts in one instance.
The research comes as militaries and security institutions increasingly experiment with AI-assisted analysis and wargaming. A report published by the Stockholm International Peace Research Institute (SIPRI) in September 2024 highlighted growing state interest in leveraging AI for military purposes, including potential impacts on missile early-warning systems, intelligence gathering, and nuclear command and control. The SIPRI report noted that integrating advanced AI into nuclear deterrence architectures could have significant consequences.
Kenneth Payne, author of the King’s College study, stated that the models treated battlefield nuclear weapons “as just another rung on the escalation ladder.” The study’s findings align with broader concerns about the potential for AI to destabilize strategic environments, as outlined in a June 2025 report from the Transnational Security Research Corporation, which found a predominantly negative sentiment regarding the application of AI to nuclear weapons systems among analysts. That report emphasized the need for human oversight in nuclear command and control.
The King’s College study did not assess the potential for accidental nuclear war, but focused on the decision-making processes of the AI models. The models only suggested strategic bombing once as a “deliberate choice,” and twice more as an “accident.”
As of March 6, 2026, no official response has been issued by the governments of the United States, Russia, or China regarding the implications of the King’s College London study. A follow-up conference, co-hosted by King’s College London and the United Nations Institute for Disarmament Research, is scheduled for November 2026 to discuss the findings and potential policy responses.