Content provided by The Army Mad Scientist Initiative. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by The Army Mad Scientist Initiative or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
116. Do Androids Dream of Electric War: The Reality of Autonomous Weapons with Dr. Mark Bailey


“I think there’s a moral question that one has to ask in general about whether it’s appropriate for a machine to make a decision as to whether or not a human ought to live or die.”

[Editor’s Note: As observed in TRADOC Pamphlet 525-92, The Operational Environment 2024-2034: Large-Scale Combat Operations:

“The increase in the production, employment, and success of uncrewed systems means the Army can expect to encounter these systems across the breadth and depth of LSCO.”

Contemporary conflicts in Ukraine and the Middle East have witnessed the burgeoning use of autonomous weapons — empowering lesser states (e.g., Ukraine) and non-state actors (e.g., the Houthi Movement in Yemen) to conduct asymmetric strikes against nations with more robust military capabilities (i.e., Russia and Israel, respectively). These capabilities are transforming warfighting in both the air/land and land/sea littorals, eroding and possibly negating traditional concepts of air and naval superiority. The battlefield successes achieved with these autonomous technologies have led to their rapid proliferation around the globe, with Transnational Criminal Organizations (TCOs) like the Jalisco New Generation Cartel (CJNG) effectively employing armed Unmanned Aerial Vehicles (UAVs) against their criminal competitors and the Mexican authorities alike.

In the ongoing race to develop more effective (read lethal) combat systems capable of overcoming kinetic and electromagnetic countermeasures, some nations are integrating Artificial Intelligence (AI) and Machine Vision (MV) with Lethal Autonomous Weapons Systems (LAWS) — in essence removing human operators from within or on the OODA loop. U.S. policy on LAWS is documented in DoD Directive 3000.09, Autonomy in Weapon Systems, which includes the following statement:

“Autonomous and semi-autonomous weapon systems will be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

Per the U.S. Congress’s Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems:

“U.S. policy does not prohibit the development or employment of LAWS. Although the United States is not known to currently have LAWS in its inventory, some senior military and defense leaders have stated that the United States may be compelled to develop LAWS if U.S. competitors choose to do so. At the same time, a growing number of states and nongovernmental organizations are appealing to the international community for regulation of or a ban on LAWS due to ethical concerns.”

Today’s episode of The Convergence podcast features Dr. Mark Bailey, Department Chair, Cyber Intelligence and Data Science, National Intelligence University, exploring the tension between the rapid convergence of AI and battlefield autonomy and our national values requiring transparency and oversight in our use of lethal force. With this tension comes an associated asymmetry in ethics — our adversaries are racing ahead with their plans to harness the power of AI on the battlefield. Military thinkers within the People’s Liberation Army (PLA) embrace its prospects as a leapfrog technology that could allow China to skip technological development stages and rapidly overmatch U.S. capabilities. Russia’s Vladimir Putin proclaimed, “Artificial intelligence is the future not only of Russia but of all of mankind… Whoever becomes the leader in this sphere will become the ruler of the world.” Read on to learn more about the implications of LAWS in the Operational Environment!]

Dr. Mark Bailey writes about the intersection between artificial intelligence, complexity, and national security. He is an associate professor at the National Intelligence University, where he is the Department Chair for Cyber Intelligence and Data Science, as well as the Director of the Biological and Computational Intelligence Center. His work has appeared in publications such as the journal Futures, Nautilus, and Homeland Security Today, and he was named to Homeland’s 50 Trailblazers of 2023. Previously, he worked as a data scientist on several AI programs in the U.S. Department of Defense and the Intelligence Community. He is also an Officer in the U.S. Army Reserve.

In our latest episode of The Convergence podcast, Army Mad Scientist sat down with Dr. Bailey to discuss his thoughts on AI and autonomous weapons, how their rise is impacting the U.S. Army, and how our adversaries may be poised to use them against us. The following bullet points highlight key insights from our conversation:

  • Contemporary AI systems that rely on vast numbers of parameters present a challenge when attempting to decipher the “thought process” behind their predictions – the “black box” problem. The military acquisition system is predicated on the notion that technology will perform reliably and predictably across different operational environments. If AI systems cannot be aligned to human expectations, they become much more difficult to control – a critical requirement for military applications such as LAWS.
  • When addressing the link between AI and lethal weapons, a larger discussion becomes whether it’s appropriate for a machine to decide whether a human ought to live or die. Removing this aspect of humanity or cost of war makes way for a much more brutal battlefield.
  • Overwhelmingly, the speed at which technology is developed far outpaces our ability to reflect on its appropriate use. Updating the military’s acquisition process to respect the uncertainty around AI systems will support an improved defense innovation structure that can successfully leverage these emerging technologies. Our acquisition process must account for aspects of AI, such as explainability and alignment, to ensure its application is effective and suitable.
  • According to the AI community, artificial general intelligence (AGI) – AI that is cognitively equivalent to a human in all areas – is on the horizon, leading to artificial superintelligence (ASI) – AI that far exceeds human capability. The line between weak AI systems that do one thing well and this more general type of intelligence will continue to blur as Large Language Models (LLMs), which fall in the middle, improve significantly.
  • Adversaries will likely view the morality of using LAWS differently than the U.S. — we must be prepared for this asymmetry of ethics in planning to achieve strategic outcomes. Ideally, a global consensus on the appropriate use of AI in military applications, similar to that governing nuclear weapons, will act as a deterrent. Ironically, nuclear components are far easier to interdict than democratized AI technology, creating an additional challenge for the global community.
  • The integration of AI into lethal military applications, such as autonomous weapons, is happening now. The Operational Environment is becoming much more dangerous, creating a challenge the U.S. Army must grapple with, while remaining true to our national values and ethical standards.

Stay tuned to the Mad Scientist Laboratory for our next insightful episode of The Convergence on 11 September 2025, when we sit down with Luke Miller, Director of the College of William and Mary’s Wargaming Lab, to discuss the university’s ongoing wargaming projects with the DoD, his thoughts on wargame design and education in the military, and the future of wargaming.

If you enjoyed this post, check out the TRADOC Pamphlet 525-92, The Operational Environment 2024-2034: Large-Scale Combat Operations

Explore the TRADOC G-2’s Operational Environment Enterprise web page, brimming with authoritative information on the Operational Environment and how our adversaries fight, including:

Our China Landing Zone, full of information regarding our pacing challenge, including ATP 7-100.3, Chinese Tactics, How China Fights in Large-Scale Combat Operations, BiteSize China weekly topics, and the People’s Liberation Army Ground Forces Quick Reference Guide.

Our Russia Landing Zone, including the BiteSize Russia weekly topics. If you have a CAC, you’ll be especially interested in reviewing our weekly RUS-UKR Conflict Running Estimates and associated Narratives, capturing what we learned about the contemporary Russian way of war in Ukraine over the past two years and the ramifications for U.S. Army modernization across DOTMLPF-P.

Our Iran Landing Zone, including the Iran Quick Reference Guide and the Iran Passive Defense Manual (both require a CAC to access).

Our North Korea Landing Zone, including Resources for Studying North Korea, Instruments of Chinese Military Influence in North Korea, and Instruments of Russian Military Influence in North Korea.

Our Irregular Threats Landing Zone, including TC 7-100.3, Irregular Opposing Forces, and ATP 3-37.2, Antiterrorism (requires a CAC to access).

Our Running Estimates SharePoint site (also requires a CAC to access) — documenting what we’re learning about the evolving OE. Contains our monthly OE Running Estimates, associated Narratives, and the quarterly OE Assessment TRADOC Intelligence Posts (TIPs).

Then review the following related TRADOC G-2 and Mad Scientist Laboratory content:

Adaptation… Ukraine Conflict’s UAV Evolution, by Colin Christopher

Thoughts on AI and Ethics… from the Chaplain Corps, by Dr. Nathan White

On the Ground and In the Air in Ukraine, and associated podcast with Wolfgang Hagarty

Insights from Ukraine on the Operational Environment and the Changing Character of Warfare

Learning from LSCO: Applying Lessons to Irregular Conflict, by Ian Sullivan and Kate Kilgore

Asymmetric Warfare across Multiple Domains, by Ethan Sah

Integrating Artificial Intelligence into Military Operations, by Dr. James Mancillas

“Own the Night,” as well as Former Deputy Secretary of Defense and proclaimed Mad Scientist Mr. Bob Work’s presentation from the Disruption and the Future Operational Environment Conference on AI and Future Warfare: The Rise of the Robots (and Army Futures Command), and his Modern War Institute podcast assessing the future battlefield.

Unmanned Capabilities in Today’s Battlespace

Revolutionizing 21st Century Warfighting: UAVs and C-UAS

Death From Above! The Evolution of sUAS Technology and associated podcast, with COL Bill Edwards (USA-Ret.)

The Operational Environment’s Increased Lethality

Top Attack: Lessons Learned from the Second Nagorno-Karabakh War and associated podcast, with proclaimed Mad Scientist COL John Antal (USA-Ret.)

Jomini’s Revenge: Mass Strikes Back! by proclaimed Mad Scientist Zachery Tyson Brown

Insights from the Robotics and Autonomy Series of Virtual Events, as well as all of the associated webinar content (presenter biographies, slide decks, and notes) and associated videos

Through Soldiers’ Eyes: The Future of Ground Combat and its associated podcast

“Intelligentization” and a Chinese Vision of Future War

The PLA and UAVs – Automating the Battlefield and Enhancing Training

A Chinese Perspective on Future Urban Unmanned Operations

China: “New Concepts” in Unmanned Combat and Cyber and Electronic Warfare

The PLA: Close Combat in the Information Age and the “Blade of Victory”

“Once More unto The Breach Dear Friends”: From English Longbows to Azerbaijani Drones, Army Modernization STILL Means More than Materiel, by Ian Sullivan.

Rapid Adaptation

Turkey and the TB-2: A Rising Drone Superpower and its associated podcast, with Karen Kaya

Disclaimer: The views expressed in this blog post do not necessarily reflect those of the U.S. Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).
