The use of robots and artificial intelligence in war

Abhinav Kumar and Feras A. Batarseh

February 17th, 2020

The nature of war has changed significantly since the end of World War 2. Battle lines have become opaque; an attack can come from terrorists who blend in easily with civilians, drones that are undetectable to the eye, or ballistic missiles launched from 500 miles away. Coping with the increased lethality of war requires a vast budget to maintain an active-duty army. Simply recruiting a soldier costs $15,000; treating an injured US soldier costs about $2 million a year (Bilmes, 2014). We aim to determine whether it is possible to reduce those human costs through the deployment of computational agents and artificial intelligence (AI).

Robots that can understand their context and deal with novelty in an open world could be a viable solution. In the next big war, AI will be king! Robots that are quicker, stronger, and more accurate will determine who the victor is. Such wars, however, could be unquestionably and unprecedentedly destructive, perhaps to the extent of a global technological meltdown. For that reason, such advanced technologies should not fall into the hands of the immoral or the malevolent.

The use of robotics (or other forms of intelligent agents) in warfare dates back to WW2. Many of these early robots (such as the US's 'Aphrodite' drones or the Soviets' teletanks) were either ineffective or only useful for specialised operations. The use of robotics for military operations took off in the 1990s, when the MQ-1 Predator drone was adopted by the Central Intelligence Agency (CIA). Instead of being painstakingly controlled via short-range radio signals, drones "could be controlled by satellite from any command centre to bridge intelligence gaps" (Gotera, 2003). Although there has been considerable progress in making robots intelligent, autonomous robots still "lack the flexibility to react appropriately in unforeseen situations" (Nitsch, 2013). If autonomous robots are used on the battlefield, soldiers will acquire more responsibilities, not fewer. Soldiers will be expected to perform normal military tasks and use robotic assets as per mission requirements (Barnes et al., 2009).

Studies show that a soldier's performance on the battlefield worsens when a second, robotic task is added to their workload. In a simulation experiment, Chen and Joyner tested whether gunners were able to maintain local security while using a semi-autonomous unmanned ground vehicle (UGV) (Chen et al., 2009). The results indicated that "a gunner's target detection performance degraded significantly when he or she had to concurrently monitor, manage, or tele-operate a UGV compared to the baseline (gunnery task only)" (Barnes et al., 2009). In a related study, researchers tested the ability of participants to identify equipment and personnel while operating a UGV, running their experiments in a 1/35-scale model of an Iraqi city. The results similarly indicated that "having an additional semiautonomous robot did not offer any advantages for participants. In some cases, it added to their difficulties".

These types of studies indicate that having context (known in the military as situational awareness, or SA) is essential for intelligent computational agents. Defining context for an agent, however, is far more challenging than automating a manual or repetitive task. One of the biggest challenges in developing validated and verified SA is defining objects that are unfamiliar. Often, military operators identify an improvised explosive device (or other danger) only because something doesn't look right. Operators rely on previous experience to spot subtle cues like inconsistent soil texture or a missing vehicle. If an agent can similarly assist soldiers in identifying dangers via contextual data, robotic agents will be far more effective on the battlefield. Below, we present the soldier variables that serve as conflict contextual constructs, namely: paralinguistic, demographic, visual, and physiological variables.

Paralinguistic variables: Military operations typically occur under noisy and stressful circumstances, and speech recognition is highly inaccurate under these conditions. Studies consistently show speech recognition accuracy dropping by 50-60% for speech that is emotional (Rajasekaran et al., 1986). Focusing on acoustic (or paralinguistic) features, however, reduces complexities that commonly plague linguistic features, such as speakers using different languages and dialects, varying accents, and different speech rates. Some studies indicate that acoustic variables like speech intensity, pitch, speaking rate, and spectral features (e.g. Mel-frequency features) are effective at recognising emotions from speech, even under noisy conditions (Tawari et al., 2010).
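To make this concrete, below is a minimal Python sketch of extracting such acoustic features with the open-source librosa library. The feature set (intensity, pitch, a speaking-rate proxy, and MFCCs) is our illustrative choice, and the downstream emotion classifier is omitted; this is not the pipeline used in the cited studies.

```python
# A minimal sketch of paralinguistic feature extraction with librosa. The
# feature set mirrors the variables above; the emotion classifier is omitted.
import numpy as np
import librosa

def acoustic_features(wav_path: str, sr: int = 16000) -> np.ndarray:
    y, sr = librosa.load(wav_path, sr=sr)
    # Intensity: frame-wise root-mean-square energy.
    rms = librosa.feature.rms(y=y)[0]
    # Pitch: fundamental frequency estimated with YIN.
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)
    # Rough speaking-rate proxy: detected onset events per second.
    onsets = librosa.onset.onset_detect(y=y, sr=sr)
    rate = len(onsets) / (len(y) / sr)
    # Spectral shape: 13 Mel-frequency cepstral coefficients (MFCCs).
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([
        [rms.mean(), rms.std(), f0.mean(), f0.std(), rate],
        mfcc.mean(axis=1),
        mfcc.std(axis=1),
    ])
```

Because these statistics describe how something is said rather than what is said, the same feature vector can be computed regardless of the speaker's language or dialect.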

Demographic variables: Research has shown that acceptance of robots as partners is enhanced when robots possess human-like characteristics (Kiesler et al., 2008). Examples of soldiers ascribing feelings like empathy to their robotic tools have already been documented. It is therefore critical to understand how robotic agents will interact with human teammates in future human-agent teams. Using metrics like personality type, age, and gender, agents could help determine whether a soldier is overreacting to a situation or playing it too safe. Research into using personality tests, demographics, geographical data, and psychographics for context-based purposes has recently been gaining traction (Batarseh et al., 2018). In a study one of us conducted (Batarseh et al., 2018), we used context-aware user interfaces to build an intelligent emergency application. The study used the Myers-Briggs Type Indicator to 'know' who the user is and which aspects identify their personality. Individuals with a 'judger' personality were provided with a more structured format for the app; 'perceivers' were presented with a 'dynamic' format, and so on (Batarseh et al., 2017). In a military setting, an agent could similarly use personality metrics to personalise its interaction with soldiers.
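As an illustration, here is a hypothetical Python sketch of this kind of personality-driven adaptation. The profile fields, the judger/perceiver rule, and the risk thresholds are all assumptions made for demonstration, not the design of the emergency application described above.

```python
# A hypothetical sketch of personality-driven interaction, loosely modelled
# on the judger/perceiver adaptation described above. The profile fields,
# the MBTI rule, and the thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SoldierProfile:
    mbti: str             # e.g. "ISTJ"; last letter: J = judger, P = perceiver
    risk_baseline: float  # expected risk tolerance in [0, 1] (assumed metric)

def interface_format(profile: SoldierProfile) -> str:
    # Judgers get a structured layout; perceivers get a dynamic one.
    return "structured" if profile.mbti[-1].upper() == "J" else "dynamic"

def assess_reaction(profile: SoldierProfile, observed_risk: float) -> str:
    # Compare observed behaviour against the soldier's personal baseline.
    if observed_risk > profile.risk_baseline + 0.3:
        return "possible overreaction"
    if observed_risk < profile.risk_baseline - 0.3:
        return "possibly playing it too safe"
    return "within expected range"
```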

Visual variables: In real-world scenarios, object information is usually degraded by peripheral vision, distance, shadows, and illumination. Within the computer vision field, a dominant approach to scene recognition is object-based (Siagian et al., 2005). This approach works well indoors, but outdoors "spatial simplifications such as flat walls and narrow corridors cannot be assumed" (Siagian et al., 2005). A context-based approach could instead analyse the scene as a whole and capture a sense of its gist. Rather than focusing exclusively on objects, classifying image data as scenes averages out random noise in isolated regions. The structure of an area could be estimated from global image features, rather than relying exclusively on local image structures. Not only would this approach be more computationally efficient, but in certain studies it has also proven more accurate (Siagian et al., 2005). For various reasons, analysing image data as scenes is more applicable in military settings than the object-based approach. On the battlefield, an agent's video feed will not always be stable. Agents will have to parse through jittery image data (resulting from explosions, gunfire, etc.) and identify the context. Additionally, agents will be expected to quickly identify a scene despite low spatial frequency.
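A toy Python sketch of the idea follows: instead of detecting objects, the whole frame is summarised by coarse, grid-level statistics. This is a simplified stand-in for gist-style global features, not the descriptor used by Siagian et al.

```python
# A toy global ("gist"-style) scene descriptor: the whole frame is reduced
# to coarse grid statistics rather than object detections. This is a
# simplified stand-in, not the descriptor of Siagian et al.
import numpy as np
from PIL import Image

def gist_descriptor(image_path: str, grid: int = 4) -> np.ndarray:
    img = np.asarray(Image.open(image_path).convert("L"),
                     dtype=np.float32) / 255.0
    h, w = img.shape
    features = []
    for i in range(grid):
        for j in range(grid):
            cell = img[i * h // grid:(i + 1) * h // grid,
                       j * w // grid:(j + 1) * w // grid]
            # Per-cell mean and spread summarise coarse scene structure and
            # average out local noise (jitter, isolated clutter).
            features.extend([cell.mean(), cell.std()])
    return np.array(features)  # 2 * grid**2 global features
```

Because the descriptor pools over large regions, it tolerates the jitter and low spatial frequency described above better than a detector that depends on sharp local detail.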

Physiological variables: An agent can rely on soldiers' emotions to determine whether a situation requires intervention. If a squad of soldiers is clearly distressed, the agent needs to act. Mean heart rate, heart rate variability, voice pitch, skin conductance, skin temperature, and respiratory rate are commonly monitored physiological markers of stress. Most studies attempt to predict the mental stress of working professionals using heart rate data (Wijsman et al., 2011); few have focused on predicting physical and emotional stress in combat or physically demanding settings (Kutilek et al., 2017). Effectively predicting and monitoring a soldier's stress levels is important for building a context that is relevant to the soldier's health.
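Below is a minimal Python sketch of stress screening from heart-rate data, assuming a stream of RR intervals (milliseconds) from a wearable sensor. The metrics (mean heart rate, SDNN, RMSSD) are standard heart-rate-variability measures, but the cut-off values are illustrative placeholders, not validated clinical thresholds.

```python
# A minimal sketch of stress screening from RR intervals (ms) captured by a
# wearable sensor. The metrics are standard HRV measures; the cut-offs are
# illustrative placeholders, not validated clinical thresholds.
import numpy as np

def hrv_metrics(rr_ms: np.ndarray) -> dict:
    diffs = np.diff(rr_ms)
    return {
        "mean_hr_bpm": 60000.0 / rr_ms.mean(),            # mean heart rate
        "sdnn_ms": rr_ms.std(ddof=1),                     # overall variability
        "rmssd_ms": float(np.sqrt(np.mean(diffs ** 2))),  # short-term variability
    }

def stress_flag(rr_ms: np.ndarray) -> bool:
    m = hrv_metrics(rr_ms)
    # Acute stress typically raises heart rate and suppresses RMSSD.
    return m["mean_hr_bpm"] > 100 and m["rmssd_ms"] < 20
```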

The context-driven intelligent agent (CIA) is a method that unifies all four soldier variables, allowing agents to understand their environment and reduce uncertainty. Context is the mathematical aggregation of these pillars (figure 1).

Figure 1. A theoretical agent reacting to uncertainty in a soldier-agent mission
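As a sketch of this aggregation, assume each pillar produces a normalised score in [0, 1] and the agent intervenes when a weighted combination crosses a threshold. The equal weights and the threshold below are our assumptions; figure 1 does not specify them.

```python
# A sketch of context as the aggregation of the four pillars. Each pillar
# reports a normalised score in [0, 1]; the equal weights and intervention
# threshold are assumptions, not values specified by figure 1.
from typing import Dict, Optional

PILLARS = ("paralinguistic", "demographic", "visual", "physiological")

def context_score(scores: Dict[str, float],
                  weights: Optional[Dict[str, float]] = None) -> float:
    weights = weights or {p: 1.0 / len(PILLARS) for p in PILLARS}
    return sum(weights[p] * scores[p] for p in PILLARS)

def should_intervene(scores: Dict[str, float], threshold: float = 0.6) -> bool:
    # The agent acts once the aggregated contextual signal crosses the bar.
    return context_score(scores) >= threshold
```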

To demonstrate how demographic variables could be used for contextual purposes, we collected a dataset on veteran health conditions from www.data.gov (made available by the Department of Veterans Affairs). The dataset contains multiple health and demographic features of Veterans Health Administration (VHA) patients in 2013, including data on health issues by race (Asian, black, Hispanic, white, Native American, and Pacific Islander). Within each race, the counts of soldiers with general health issues are: Asian (980,433), black (21,870,275), Hispanic (8,506,126), white (98,758,347), Native American (742,994), and Pacific Islander (868,330). The top three conditions veterans suffer from are hypertension, lipid disorders, and diabetes (figure 2). While genetics and age are contributing factors, lifestyle and diet are strongly correlated with whether a person will suffer from these conditions. Soldiers often "cope with the stress of battle by smoking, drinking, and eating unhealthy" (Deal, 2011). Hypertension negatively affects military readiness (Department of Defense, 2017). Agent intervention during stressful combat settings could significantly reduce the medical effort required to keep a soldier healthy.

Figure 2. Top medical issues veterans suffer from by race
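A minimal pandas sketch of the aggregation behind figure 2 is given below. The file name and the column names ("Race", "Condition", "Patients") are assumptions about the extracted data.gov table, not the published schema.

```python
# A minimal pandas sketch of the aggregation behind figure 2. The file and
# column names ("Race", "Condition", "Patients") are assumptions about the
# extracted data.gov table, not the published schema.
import pandas as pd

vha = pd.read_csv("vha_health_by_race_2013.csv")  # hypothetical file name

# Patients with any recorded health issue, per race.
by_race = vha.groupby("Race")["Patients"].sum()

# The three most prevalent conditions across all veterans.
top_conditions = vha.groupby("Condition")["Patients"].sum().nlargest(3)

print(by_race, top_conditions, sep="\n\n")
```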

Although veterans of all races suffer from various medical conditions, not all groups suffer from them uniformly. In 2013, black veterans made up 11% of the veteran population, yet they accounted for 20% of all PTSD cases amongst veterans. Hispanics accounted for 8% of all PTSD cases yet represented only 6.3% of all veterans in 2013. The heat-map (figure 3) also reveals differing percentages of medical conditions by race. Accounting for possible biases in healthcare outcomes should be an important consideration for agents, especially as the percentage of minorities in the military increases in the future. Physiological variables (such as stress and hypertension) affect sensor readings and the other contextual constructs that create CIA outputs. For example, infrared sensors can detect high blood pressure; so if a soldier has hypertension, the agent can factor that into the context.
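Continuing with the same assumed columns, a sketch of how a heat-map like figure 3 could be produced (each condition as a percentage of its race's patient total) follows.

```python
# A sketch of a figure-3-style heat-map: each condition's patient count as a
# percentage of its race's total. File and column names are assumptions.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

vha = pd.read_csv("vha_health_by_race_2013.csv")  # hypothetical file name

counts = vha.pivot_table(index="Condition", columns="Race",
                         values="Patients", aggfunc="sum")
pct = 100 * counts / counts.sum(axis=0)  # normalise within each race column

sns.heatmap(pct, annot=True, fmt=".1f", cmap="viridis")
plt.title("Medical conditions by race (% of each race's VHA patients)")
plt.tight_layout()
plt.show()
```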

Traditionally, the creation of intelligent agents has focused on incorporating autonomous features without contextual intelligence. This has resulted in robots with sophisticated features (e.g. facial recognition) but limited practical usage. Despite such advanced autonomous features, military forces "still perceive of robots as RC [remote-control] cars" (Singer et al., 2009). This opinion is not without cause: in many studies, participants who controlled robots under automated navigation exhibited significantly poorer SA than participants who used manual control.

Figure 3. Heat-map of soldiers’ health percentages

 

By presenting this article, we hope to advance the case for integrating contextual intelligence into military agents. The presented method captures patterns of context and can lead to improved soldier health. The holistic goal is to empower the soldier and increase the safety of military missions that eradicate evil around the world. It is almost certain that the wars of the future (and WW3) will be fought using AI, but that might also mean that WW4 will be fought with sticks and stones.

Authors’ note: For further reading on contextual data analytics, open data, and AI applications, please refer to the book Data Democracy.

♣♣♣

Notes:

  • This blog post expresses the views of its author(s), not the position of LSE Business Review or the London School of Economics.
  • Featured image by Defence-Imagery, under a Pixabay licence
  • When you leave a comment, you’re agreeing to our Comment Policy


About the author

Abhinav Kumar

Abhinav Kumar is an M.Sc. student of computer science at George Mason University (GMU), Fairfax, Virginia. His research spans the areas of robotics, deep learning, and data systems. Abhinav has developed multiple data-driven systems and published several academic papers. He is an active member of Turing Research at GMU.

Feras A. Batarseh

Feras A. Batarseh is a research assistant professor with the College of Science at George Mason University (GMU), Fairfax, VA. His research spans the areas of data science, artificial intelligence, and context-aware software systems. Dr Batarseh obtained his PhD and MSc in computer engineering from the University of Central Florida (UCF) (2007, 2011) and a graduate certificate in project leadership from Cornell University (2016). He has taught data science at multiple universities, including George Washington University, Georgetown University, and the University of Maryland, Baltimore County. His research has been published in various prestigious journals and at international conferences. Additionally, Dr Batarseh has published and edited several book chapters. He is the author and editor of Federal Data Science and Data Democracy, both published by Elsevier's Academic Press. For more on the Turing research group, refer to this page: http://turing.cos.gmu.edu/. For more on Dr Batarseh, refer to his page: http://www.ferasbatarseh.com/
