
Feras A Batarseh

June 20th, 2025

A warning about the risks of drones and ChatGPT


Estimated reading time: 5 minutes


The use of war drones and large language models like ChatGPT has grown explosively over the past few months. Both technologies are based on artificial intelligence, yet surprisingly few people are raising privacy and cybersecurity objections. Feras A Batarseh writes that the overuse, underdefinition and underregulation of AI will lead to unexpected and devastating outcomes.


Countless AI applications exist in almost every domain of our lives, from healthcare to energy and education. But at this point in 2025, the two most visible and prominent "mainstream" applications have been large language model (LLM) chatbots and war drones.

We can say that ChatGPT and other chatbots democratised AI usage, though many may beg to differ. And in the Middle East (the Levant) and Eastern Europe (Russia/Ukraine), war drones are playing a critical role in military and surveillance operations.

In light of this, I set out to consider the following question: is this what AI has been developed for, all along?

Alan Turing’s objections to AI

In his 1950 paper, Alan Turing presented nine objections to machine intelligence: arguments against the idea of thinking machines, including theological and ethical considerations and the argument from consciousness.

That was 75 years ago. Turing was right! However, he missed a few foundational components such as explainability (the need to understand what is behind AI decisions), regulatory aspects, privacy issues, and the need for global collaboration to address cross-border security threats.

Unsupervised algorithms

Also outside Turing's model is the big data revolution, which many deem the prime reason for the rekindling of artificial intelligence after the AI winter. Many recent successful learning algorithms are supervised and data hungry, requiring large amounts of data, labelling and extensive pre- and post-processing. However, LeCun, Bengio, and Hinton predict that unsupervised approaches, models that require much less data processing and no labelling, will reign in the future.
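
As a rough illustration of that distinction (a minimal sketch assuming scikit-learn and its toy digits dataset, not any system discussed in this post), the supervised model below is handed the correct label for every training example, while the unsupervised model must find structure in the raw data on its own:

```python
# Minimal sketch (assumed scikit-learn toy data, purely illustrative)
# contrasting supervised learning, which needs labels, with unsupervised
# learning, which does not.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: the model is given the correct label for every training image.
clf = LogisticRegression(max_iter=5000)
clf.fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: the model sees only the raw pixels and must group them itself.
km = KMeans(n_clusters=10, n_init=10, random_state=0)
km.fit(X_train)  # note: no labels are passed
print("first cluster assignments:", km.predict(X_test)[:10])
```

The unsupervised model needs no labels at all, which is precisely what makes such approaches far less costly to prepare data for.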

These emerging generative, computer vision and deep learning approaches are the backbone of LLM chatbots and war drones.

Drone experiments cost lives

Beyond conventional training and transfer learning (when knowledge learned in one situation is applied to a different problem), real-world contexts account for most of the accuracy testing of war drones. Ad-hoc testing yields minuscule accuracy improvements, while real-world contexts are both more beneficial and more challenging for AI models. This experimental deployment of drones in wars is surely causing many misclassifications that lead to destructive operations, potentially harming innocent lives.
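
For readers unfamiliar with the term, the sketch below is a minimal, hypothetical illustration of transfer learning (assuming PyTorch and torchvision, with random stand-in data; it is not drawn from any real drone system): a backbone pretrained on a large generic image dataset is frozen, and only a small new classification head is trained for the new task.

```python
# Hypothetical transfer-learning sketch: reuse features learned on one
# dataset (ImageNet) for a different, two-class task.
import torch
import torch.nn as nn
from torchvision import models

# Start from a backbone pretrained on a large, generic image dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers so only the new task-specific head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer for the new problem (two illustrative classes).
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on random stand-in data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```

The point is that most of the model's "knowledge" is reused rather than relearned, which is why such systems can be redeployed quickly, and also why their accuracy in a new context is never guaranteed without extensive testing.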

Let's assume that these models are provided with more data and are "perfectly" trained (something generally not possible, for many statistical reasons) and that they perform tasks seamlessly and within the allowed constraints. Even if these assumptions were true, preventive measures would still, at a minimum, need to be in place to secure those machines against adversarial players and cyber breaches.

Cybersecurity in drones

Do we have guards against unwarranted control of these deadly machines?

Is cybersecurity embedded in these systems?

At this point, the security measures protecting war drones are still minimal and insufficient, and the legal frameworks are crummy. Think of existing defence measures in many countries around the world, including the newly proposed Golden Dome missile defence system and Space Force satellite activities in the US. Cybersecurity is generally an afterthought in these unconventional contexts.

The more complex the drone-driven defence systems we construct, the more opportunities for offence we create. Laws governing drone manufacturing, deployment and trade are needed to enforce more testing, reduce misclassifications and limit use in sensitive war scenarios, mainly to save innocent human lives.

Blindly feeding our data

With large language models, the masses are racing to use chatbots for their convenience and productivity gains. In the process, users are constantly (and somewhat blindly) handing over private data and information they cannot get back: once shared, it cannot be removed.

Governments and policymakers are doing very little to change the status quo, or at least are not moving quickly. Multiple new "AI policies and acts" exist, but most of these proposed laws are very high level and fail to reflect the real technical complexities of AI.

To maintain and retain our digital identities in the virtual metaverse, LLM chatbot users should make the case for owning their own data (via better legal frameworks) and for less privacy invasion.

The security risk of outsourcing AI

Scams and cross-border cybersecurity incidents, such as the recent one against Coinbase, are now much more frequent, and the privacy challenge has become more obvious. We must start bringing AI development and testing back to more reliable channels, including more localised cloud infrastructure within protected jurisdictions or localities. Outsourcing such services is no longer worth it.

Other than a few initiatives, the US has no comprehensive federal legislation regulating AI. Instead, state-level regulation is becoming increasingly common, with most states enacting laws to address various aspects of AI development and use. The ambiguity in these legal attempts has led to several AI-related court cases, primarily involving copyright infringement, bias, AI-generated content and privacy. Several lawsuits have been filed against most Big Tech companies.

As governments and industry race towards "more AI", privacy and security issues will be exacerbated by the convergence of AI with more powerful hardware and quantum computing. Without extensive validation, the current state of overuse, underdefinition and underregulation of AI will lead to unexpected and devastating outcomes.

I am certain that many AI scientists and thinkers around the world would agree that this is not what AI has been developed for all along.

You may also be interested in this blog post co-authored by Feras Batarseh and Abhinav Khumar (2020):

The use of robots and artificial intelligence in war




  • Author’s disclaimer: No LLM chatbot (ChatGPT or any other) was used in any form for this article.
  • The author thanks E. Donald Elliott (Yale University and Scalia Law School at GMU), for his inputs and valuable discussions.
  • This blog post represents the views of its author(s), not the position of LSE Business Review or the London School of Economics and Political Science.
  • Featured images of war drones and LLM chatbots provided by Shutterstock.

About the author

Feras A Batarseh

Feras A Batarseh (PhD) is a Professor with the Department of Biological Systems Engineering (BSE) at Virginia Tech (VT). He holds courtesy appointments with the Commonwealth Cyber Initiative (CCI) and the Department of Electrical and Computer Engineering (ECE) at VT, and is the Director of the A3 (AI Assurance and Applications) Lab. Dr Batarseh's research spans the areas of trustworthy AI, cyberbiosecurity, intelligent water systems, and data-driven public policy. More information: ai.bse.vt.edu E-mail: batarseh@vt.edu

Posted In: Management | Technology
