The State of AI: How War Will Be Changed Forever
By Helen Warrell and James O’Donnell | November 17, 2025
In a recent collaborative exploration between the Financial Times and MIT Technology Review, reporters Helen Warrell and James O’Donnell delve into the profound and complex ways artificial intelligence (AI) is transforming the nature of warfare. Their discussion highlights the promises, perils, ethical dilemmas, and financial incentives surrounding AI’s escalating role in military operations.
AI and the Future Battlefield: A Dystopian Scenario
Helen Warrell paints a sobering picture of a potential near-future conflict, envisioning a July 2027 scenario in which China edges toward invading Taiwan. Autonomous drones with AI-guided targeting systems overwhelm Taiwan's air defenses, AI-generated cyberattacks knock out critical energy and communications infrastructure, and AI-driven disinformation campaigns blunt global opposition to the aggression through large-scale social media manipulation.
This dystopian depiction reflects broader anxieties about deploying AI in warfare: military commanders hope the technology will enable faster, more precise combat operations, but it may also fuel uncontrollable escalation and create ethical blind spots. Before his death in 2023, former US Secretary of State Henry Kissinger warned of the catastrophic consequences that AI-driven conflict could bring.
Ethical Quandaries and Calls for Regulation
Amid these risks, an emerging consensus calls for strict controls on AI's military uses, particularly autonomous lethal weapons. UN Secretary-General António Guterres has called for a ban on fully autonomous weapons systems that can operate without human oversight.
Nevertheless, the rapid pace of AI development complicates governance. As Harvard's Belfer Center notes, many projections about fully autonomous weapons systems may be overoptimistic: deploying such systems in complex, unpredictable battlefield environments remains fraught with technical, ethical, and legal hurdles.
Practical Military Uses of AI Today
Currently, AI’s military applications primarily involve enhanced planning and logistics, cyber warfare—particularly sabotage, espionage, and information operations—and AI-assisted weapons targeting, as observed in the conflicts in Ukraine and Gaza.
Kyiv's forces use AI-enabled drones that can evade electronic jamming, while the Israel Defense Forces have deployed "Lavender," an AI-assisted decision-support tool that reportedly identified some 37,000 potential targets in Gaza. Such systems risk perpetuating biases present in their training data, but human soldiers carry biases of their own, fueling debate over whether AI's "statistical" judgments might sometimes prove more balanced than human ones.
The Debate Over Autonomy and Control
Some experts argue that AI should augment rather than replace human decision-making in warfare. Anthony King, Director of the Strategy and Security Institute at Exeter University, suggests that complete automation in war is an illusion and that AI primarily enhances military intelligence and insight.
Keith Dear, a former UK military officer and now a strategist at Cassi AI, argues that existing legal frameworks suffice to govern AI weapons: human commanders must remain responsible for any failures. This view challenges the notion that AI's evolving capabilities demand new, purpose-built regulation.
Changing Attitudes in the Tech Industry
James O'Donnell traces a marked shift in AI companies' stance on military work. In early 2024, firms like OpenAI still explicitly prohibited the use of their technologies in warfare. By year's end, however, OpenAI had partnered with defense company Anduril to deploy AI systems designed to neutralize drone attacks, a significant embrace of battlefield applications.
The shift is driven partly by hype, which portrays AI as a route to more accurate, efficient, and less fallible warfare, and partly by money: the Pentagon's deep pockets and Europe's growing defense budgets are drawing venture capital into defense-AI startups, with funding reportedly already double 2024's total.
Skepticism and Calls for Responsibility
Despite the enthusiasm, many experts urge caution: greater precision does not necessarily mean fewer casualties. The drone campaigns of the Afghanistan era showed how cheaper, easier remote strikes expanded destruction rather than reducing it.
Critics such as Missy Cummings, a former Navy fighter pilot and engineering professor, caution against overreliance on AI models, particularly large language models, which are prone to significant errors when applied in critical military contexts. Because AI outputs often rest on thousands of complex inputs, meaningful human checks and oversight become difficult to sustain.
O'Donnell advocates heightened skepticism amid strong industry pressure to field AI in warfare rapidly, warning against succumbing to inflated promises that lack rigorous validation.
Balancing Innovation, Ethics, and Oversight
Helen Warrell stresses the need for balanced scrutiny: it is vital to question the safety and governance of AI in warfare, but equally important not to lose sight of the technological limitations of this nascent, fast-growing defense sector. The speed and secrecy surrounding the AI arms race risk sidelining necessary public debate and regulatory oversight.
Further Reading and Resources
- Michael C. Horowitz, Director of the Perry World House at the University of Pennsylvania, underscores the imperative of responsibility in military AI development in his Financial Times op-ed.
- The Financial Times tech podcast explores insights from Israel’s defense tech ecosystem and implications for the future of warfare.
- MIT Technology Review’s investigative reports examine OpenAI’s pivot toward battlefield AI applications and detail how U.S. soldiers employ generative AI to process extensive open-source intelligence.
Conclusion
AI is reshaping warfare in profound ways, introducing new capabilities alongside unprecedented challenges. As militaries integrate AI for planning, cyber operations, and targeting, the world must grapple with urgent ethical questions and governance challenges. Vigilant scrutiny, balanced skepticism, and robust regulatory frameworks will be essential to ensure that AI technologies serve to enhance security without exacerbating the destructive potential of modern conflict.
This article is part of The State of AI, a joint series by Financial Times and MIT Technology Review examining the societal impact of artificial intelligence.