
The technology of war: AI ‘new frontline’ of modern warfare, experts say

GPS disruptions, autonomous drones, and when AI decides who lives and dies – the evolving face of modern warfare

As nations invest massively in AI-enabled warfare, navigating this new frontier will be essential for stability and human control over future conflicts. Image: Shutterstock

From navigating fighter jets to hunting targets on the battlefield, artificial intelligence is reshaping military planning and strategy at an accelerating pace across the globe.

“We are approaching a ‘threshold’ where AI will take over entire aspects of warfare,” said Abishur Prakash, founder of Geopolitical Business Inc. and a geopolitical strategist who has written extensively on the topic. “Whether this is carrying out missions or defending areas, AI will become the new ‘frontline’ of how nations approach their defense strategies.”

The falling costs and growing accessibility of advanced technologies like AI, drones, and cyber capabilities are spreading potentially destabilising tools to state and non-state actors alike.

“Technology is lowering the barrier to entry to higher capability in war,” said Peter Singer, American political scientist, author, professor at Arizona State University, and 21st-century warfare specialist. “Many of the most game-changing technologies, like AI and drones and cyber, are accessible to all actors.”

This proliferation is empowering militaries, terror groups, and rogue states in unprecedented ways that could spark unforeseen escalations, according to Prakash. Already, fully autonomous drones deployed by warring factions in Libya have been programmed to identify and “hunt down” targets without human oversight, he said, citing a recent UN report.

The Ukraine conflict is offering the latest preview of AI and autonomous weapons systems being deployed in large numbers to disrupt infrastructure and increase battlefield lethality.

Ukrainian forces are using “AI drones that have terrain and geography loaded onto a chip from the get-go, to attack Russian infrastructure autonomously,” explained Prakash.

Singer added that what is occurring in Ukraine with drones and robotics “are just a taste of what is to come, as we see more and more [of these] used in scales of not tens but hundreds and then thousands.”

Neither side is exercising restraint. Russia has been employing Iranian “kamikaze” Shahed drones against Ukrainian cities, while Ukrainian programmers have crowdsourced an AI system called “Saker Scout” designed to hunt Russian armour without direct human input once activated.

“Such next uses of AI raise potential for abuse and war crimes,” cautioned Singer. “But the key is that they don’t change human responsibility. Even if a machine flies itself, a human still ordered it into action.”

Ukrainian forces are using AI drones that have terrain and geography loaded onto a chip from the get-go, to attack Russian infrastructure autonomously. Image: Reuters

When AI decides who lives or dies

Controlling these systems as they grow in complexity and autonomy is emerging as a paramount challenge for militaries and regulators worldwide. While AI brings potential advantages like faster battlefield responsiveness and high-volume data processing, it also creates risks of unintended escalation by machines unable to weigh geopolitical context.

“The question is not whether AI will reach a point where it is advanced enough to take human life, but rather, whether governments (or rogue groups) give AI the right to do this,” said Prakash.

Dr. Yannick Veilleux-Lepage, a political scientist and assistant professor at the Royal Military College of Canada, pointed to the growing use of AI in functions like intelligence analysis, logistics, threat identification, and defensive capabilities like cybersecurity.

However, automated “kill chains” that could enable AI to select and engage targets raise profound ethical risks if humans are removed from the loop.

“If you start thinking about like kill chains…there’s ethical considerations to be asked about removing the human equation within this,” said Veilleux-Lepage.

Militaries such as that of the United States remain committed, for now, to keeping humans firmly “in the loop” when it comes to taking human life. But the degree of human control over autonomous systems could become strained as AI capabilities rapidly advance.

Prakash introduced the concept of a “51/49 ratio”: as long as humans maintain 51 percent control over weapons, the risks remain manageable. “But the moment this is inverted, and AI has more control over weapons than humans, the world enters a twilight zone, where outbreaks of war and hostilities could occur at any moment.”

Automated “kill chains” that could enable AI to select and engage targets raise profound ethical risks if humans are removed from the loop. Image: Shutterstock

GPS spoofing, jamming incidents

Regardless of the human role, militaries’ growing reliance on AI systems that fuse vast data sources is also introducing new vulnerabilities that adversaries are actively exploiting. High-profile incidents of GPS jamming and spoofing have spiked this year, particularly in the Mediterranean and over key cities like Cairo and Beirut, according to reports.

“This is becoming a massive challenge and danger, especially for commercial airlines,” said Prakash. “Countries are jamming GPS systems, in order to confuse planes.”

However, Singer noted the threat extends beyond military systems. “The bigger danger is how much of the civilian world also now depends on [GPS], which makes Russia’s recent interference with it across Northern Europe so dangerous.”

As GPS denial grows more commonplace, AI could be called upon to compensate by navigating vehicles autonomously – further reducing human control. Veilleux-Lepage highlighted a “democratisation” dynamic in which actors rapidly adopt innovative technologies initially developed by major powers.

“While the innovators are pushing out these new technologies, they are not able to keep them to themselves,” he said. “As their usefulness is demonstrated, other actors are able to adopt this technology.”

Escalation dangers

Beyond jamming threats, experts warn that an AI-enabled battlefield could increase the risk of rapid escalations and conflicts sparked by machines making calculations beyond human control.

“Just as on our highways, so too in our wars, many of our laws and codes will have to be updated for the new questions that come from these new technologies,” said Singer.

All three experts raised concerns that AI systems optimised for speed and combat capability may not be designed with de-escalation pathways and geopolitical context in mind. Giving these systems more autonomy increases the odds of miscalculations between human-led forces on opposing sides.

An AI-enabled battlefield could increase the risk of rapid escalations and conflicts sparked by machines making calculations beyond human control. Image: Shutterstock

“What if AI decides to attack another soldier, sparking a conflict? Or if AI goes too far, and is disproportional in how it retaliates, taking tensions to new heights?” said Prakash. “Governments want to ensure humans are in charge of geopolitics, not AI.”

As nations like the US, China, and Russia accelerate AI integration across their military capabilities, the interplay between machine autonomy and geopolitical stability will only intensify. Experts argue international cooperation and smarter human failsafes will be essential to navigating this new frontier of warfare.

“Technology has never on its own made the world more peaceful,” warned Singer. The biggest danger for military strategists and decision-makers, he said, is “sticking to ‘old think’ and missing how much change is going on around them.”

A double-edged sword?

While AI introduces new perils, experts also point to the potential positives of the technology in augmenting military capabilities. Veilleux-Lepage noted that AI can “enhance battlefield effectiveness” and “optimise” logistical challenges.

“AI could be used in training to give soldiers individualised instructions and create more realistic simulations and exercises,” he added. “It could optimise logistics like deploying forces, providing food, shelter, and resources.”

AI systems could rapidly process intelligence data flows from signals, radar, and other sources that traditionally require significant manpower. Computer vision and pattern recognition could enable faster threat identification and targeting.

“We could make drones or aircraft able to perceive threats faster and react quicker than human reaction times,” Veilleux-Lepage said. Automated diagnostics could also provide lifesaving treatment more rapidly to wounded soldiers.


While not dismissing the ethical risks of ceding too much control, Prakash argued AI capabilities can be immensely valuable if regulated with proper human oversight. “As long as humans maintain 51 percent control over weapons, the risks remain manageable,” he reiterated.

Singer highlighted how swarming unmanned systems could revolutionise combat mass and coordination. “What we are seeing in Ukraine and Gaza are just a taste of what is to come, as we see more drones used in scales of not tens but hundreds and then thousands.”

As nations invest massively in AI-enabled warfare, navigating this new frontier will be essential for stability and human control over future conflicts.

Tala Michel Issa

Tala Michel Issa is the Chief Reporter at Arabian Business and Producer/Presenter of the AB Majlis podcast. Her interviews feature global figures including former Nissan Chairman Carlos Ghosn, Mindvalley's...
