Concerns about the potentially “catastrophic” introduction of artificial intelligence (AI) into nuclear weapons command, control, and communications (NC3) systems have been raised by the former First Sea Lord and former Security Minister Lord West of Spithead.
An AI expert told the Canary that the worst-case scenario for introducing AI into nuclear weapons command and control systems is a situation like the one that causes the apocalypse in the Terminator franchise.
The Terminator films revolve around an event in which the AI in control of the USA’s nuclear weapons system gains self-awareness, views its human controllers as a threat, and attempts to wipe out humanity.
Can’t, or won’t?
Lord West, a backbench Labour peer, raised his concerns via a parliamentary written question which was answered by Ministry of Defence minister of state Lord Coaker.
West asked:
What work is being undertaken, and by whom, regarding the integration of AI in nuclear (1) command, (2) control, and (3) communications systems; and whether they have commissioned research to identify and manage high-risk AI applications?
Responding, Lord Coaker said:
The UK’s nuclear weapons are operationally independent and only the Prime Minister can authorise their use. It is a long-standing policy that we do not discuss detailed nuclear command and control matters and so will not be able to provide any additional detail.
Research to identify, understand, and mitigate against risks of AI in sensitive applications is underway. We will ensure that, regardless of any use of AI in our strategic systems, human political control of our nuclear weapons is maintained at all times.
West confirmed to the Canary that his question was inspired by a recent briefing, Assessing the implications of integrating AI in nuclear decision-making systems, published on 11 February 2025 by the European Leadership Network (ELN) and authored by Alice Saltini, Non-Resident Expert on AI at the James Martin Center for Nonproliferation Studies (CNS).
The peer said he found Saltini’s paper very useful, adding:
It’s the first time I’ve seen people really addressing [this issue].
Peer warns about ‘catastrophic’ consequences of introducing AI into nuclear weapons
West made it clear that he doesn’t oppose AI per se. He said:
There’s a lot of interest being shown in AI. I understand all of that. That’s fine, and I think there’s some good work going on.
He continued:
I just am very, very nervous about getting AI into command and control and that area of nuclear weapons, because if anything goes wrong, the results can be so catastrophic.
West was First Sea Lord and Commander in Chief of the Royal Navy from 2002 to 2006.
Reflecting on the response he got from the minister, West said:
I just wanted to discover what actually has been going on. And I don’t think the answer really made me think, ‘Gosh, yes, they’re looking at this very carefully.’
I got the feeling that there are people saying, ‘Oh, maybe we could do this, that and the other with it’, and I’m not sure what safeguards and what work has been done to make sure that nothing silly is done.
Explaining why he asked the question, in addition to being inspired by Saltini’s briefing, West said:
What I’d like to flag up is to anyone, let’s just be very wary if we do anything in this arena of AI, because [the] results could be so catastrophic.
Reacting to the government’s line, which implied that it could use, or may already be using, AI in “strategic systems”, West said:
It gives a huge potential to all sorts of things.
Appropriate oversight
West said he wanted more reassurance from the government that it is at least being careful with the rollout of AI in the defence sector, including with appropriate oversight. He said:
It would be very nice to have some more clarity about this, and some more reassurance about the work that’s actually going on.
He recognised, however, that the government is likely unable to provide a full explanation of its activities in the areas of AI in defence because to do so could hand advantages to the UK’s adversaries. He said:
You can’t tell people what’s happening, because obviously, it’s going to be highly classified.
[However], you can reassure people and make sure people understand that work is going on – that can be done.
On oversight specifically, he said:
What I would like to see is that there’s someone who’s been set up to monitor and take charge of this and lay out the ground rules, and I’d like to know who that is.
The government previously had a body called the AI Council which was “an independent expert committee that provided advice to government, and high-level leadership of the Artificial Intelligence (AI) ecosystem”, according to its website, but its last meeting was held in June 2023.
A newer body, the AI Security Institute (recently renamed from the AI Safety Institute), does exist, but it appears to focus on research into AI rather than on oversight and governance.
AI has “power-seeking tendencies”
Saltini is described by the ELN as:
specialising in the impact of AI on nuclear decision-making.
She told the Canary:
the government’s response doesn’t satisfactorily address the core problem of nuclear risks generated by AI.
She said the reassurance in the parliamentary response that human political control would be maintained “rests on the familiar promise of keeping a human in the loop”, but added:
this approach is dangerously simplistic.
A critical part of nuclear weapons development and maintenance is deciding how visible various parts of the weapons systems should be to adversaries, because that visibility dictates how other states react to the actions of nuclear-armed countries.
Saltini said:
the commitment to human oversight […] mask[s] critical vulnerabilities.
As nuclear arsenals modernise under intense geopolitical pressure, integrating AI into nuclear decision-making carries a very real risk of unintended escalation.
Not every nuclear state has made an explicit commitment to human oversight, and even if they had, there is no straightforward way to verify these promises, leaving room for dangerous misinterpretations or misunderstandings of countries’ intentions.
She explained that “AI tools are not perfect and have significant limitations for high-stakes domains” such as nuclear weapons, continuing:
They are prone to ‘hallucinations,’ where false information is generated with high confidence, and their opaque ‘black box’ nature means that even when a human is in the loop, the underlying processes can be too complex to fully understand.
This is further compounded by cyber vulnerabilities and our inability to align AI outputs with human goals and values, potentially deviating from strategic objectives.
She went on to hypothesise that introducing AI into nuclear weapons command and control systems could precipitate a situation like the one that leads to the apocalypse in the Terminator franchise. She said:
As these systems gain greater operational agency, they may display power-seeking tendencies, potentially leading to rapid and unintended escalation in high-stakes environments. All of these limitations persist even when states maintain human oversight.
However, she did say that AI could have safer applications in the defence sector:
Generally speaking, when applied narrowly—with built-in redundancies and rigorous safeguards—AI can efficiently synthesise large volumes of data in a timely manner, support wargaming scenarios, and enhance training.
In the nuclear weapons sector specifically, she said AI could “optimise logistics by streamlining maintenance schedules for nuclear assets and enhancing overall system efficiency, augmenting human capabilities and improving performance, rather than automating decisions”.
The answer on AI in nuclear weapons is not reassuring
The Campaign for Nuclear Disarmament (CND) said it strongly opposes the introduction of AI into systems related to nuclear weapons, as well as nuclear weapons themselves.
Reflecting on the minister’s response, CND General Secretary Sophie Bolt said:
Their answer is not particularly reassuring.
Perhaps the Prime Minister is the only person who can authorise the use of nuclear weapons, but how much will the decision on what to do depend on information supplied by AI?
Even if the PM has ultimate control, they would probably be ‘advised’ by AI systems that are there to provide possible strategies relevant to the perceived situation.
Research into the risks of AI in sensitive applications is most definitely needed, but in the meantime, it seems that those AI systems already in the system will continue to operate.
Bolt said the focus should be on de-escalation and disarmament, rather than introducing new technologies into nuclear weapons systems. She continued:
It would be easier, cheaper and safer for the government to spend time on negotiating nuclear arms reduction and eventual disarmament rather than trying to take part in a race to achieve some high tech goal that, even if achievable, will only be superseded by newer, more elaborate systems.
What is needed is a break in this technological anti-weapon – weapon cycle and a move to serious, in good faith, disarmament negotiations as required by our obligations under the NPT.
Featured image via the Canary