Bullshit & Botshit: Digital Sycophancy & Analogue Deference in Defence

The recently published Strategic Defence Review (SDR)1 and National Security Strategy (NSS)2 both place the accelerated development and adoption of automation and Artificial Intelligence (AI) at the heart of their bold new vision for Defence.
I've written elsewhere3 about the broader ethical implications,4 but here I want to turn attention to the 'so what?' and, particularly, the 'now what?' Specifically, I'd like to explore a question the SDR itself raises, of "Artificial Intelligence and autonomy reach[ing] the necessary levels of capability and trust" (emphasis added). What do we actually mean by this, what is the risk, and how might we go about addressing it?
The proliferation of AI, particularly Large Language Models (LLMs), promises a revolution in efficiency and analytical capability.5 For Defence, the allure of leveraging AI to accelerate the 'OODA loop' (Observe, Orient, Decide, Act) and maintain decision advantage is undeniable.
Yet, as the use of these tools becomes more widespread, a peculiar and potentially hazardous flaw is becoming increasingly apparent: their propensity to 'hallucinate' - to generate plausible, confident, yet entirely fabricated and, importantly, false information.6 The resulting 'botshit'7 presents a novel technical, and ethical, challenge.
It also finds a powerful and troubling analogue in a problem that has long plagued hierarchical organisations, and with which UK Defence has particularly wrestled: the human tendency for subordinates to tell their superiors what they believe those superiors want to hear.8 Of particular concern in this context, the latter does not necessarily trouble itself with whether the report is true, merely that it is what is felt to be required; such 'bullshit'9 10 is thus subtly but importantly different from 'lying', and therefore seemingly more akin to its digital cousin.
I argue, however, that while 'botshit' and 'bullshit' produce deceptively similar outputs - confidently delivered, seemingly authoritative falsehoods that arise not from aversion to the truth but from (relative) indifference to it, and that may corrupt judgement - their underlying causes, and therefore their respective treatments, are fundamentally different. Indeed, this distinction was demonstrated with startling clarity during the research for this very paper.
Mistaking one for the other, and thereby applying the wrong corrective measures, poses a significant threat to strategic thinking and direction, and to Operational Effectiveness. By understanding the distinct origins of machine-generated 'botshit' and human-generated 'bullshit', we can develop more robust and effective approaches to the envisioned future of hybrid human-machine decision-making.
A familiar flaw: human deference and organisational culture
The Chilcot Inquiry11 served as a stark reminder of how easily institutional culture can undermine sound policy.
In his introductory statement to the report,12 Sir John Chilcot noted that "policy […] was made on the basis of flawed intelligence and assessments", but, more to the point, that "judgements […] were presented with a certainty that was not justified" and that "they were not challenged, and they should have been." He further emphasised "the importance of […] discussion which encourages frank and informed debate and challenge" and that "above all, the lesson is that all aspects […] need to be calculated, debated and challenged with the utmost rigour." He was saying, very clearly and repeatedly, that this was not simply a failure of intelligence collection or strategic calculation; it was a failure of culture.
The decision-making process exposed an environment where prevailing assumptions went untested and the conviction of senior leaders created a gravitational pull, warping the information presented to them to fit a desired narrative.
Chilcot highlights how an environment in which decisions are based on eminence (also eloquence and vehemence) rather than evidence13 encourages t...