Content provided by Jacob Ward. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Jacob Ward or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

AI Isn’t Just a Money Risk Anymore — It’s Bigger than That



For most of modern history, regulation in Western democracies has focused on two kinds of harm: people dying and people losing money. But with AI, that’s beginning to change.

This week, the headlines point toward a new understanding that more is at stake than our physical health and our wallets: governments are starting to treat our psychological relationship with technology as a real risk. Not a side effect, not a moral panic, not a punchline to jokes about frivolous lawsuits. Increasingly, I’m seeing lawmakers understand that it’s a core threat.

There is, for instance, the extraordinary speech from the new head of MI6, Britain’s foreign intelligence service. Instead of focusing only on missiles, spies, or nation-state enemies, she warned that AI and hyper-personalized technologies are rewriting the nature of conflict itself — blurring peace and war, state action and private influence, reality and manipulation. When the person responsible for assessing existential threats starts talking about perception and persuasion, the problem has moved from academic hand-wringing to recognized danger.

Then there’s the growing evidence that militant groups are using AI to recruit, radicalize, and persuade — often more effectively than humans can. Researchers have now shown that AI-generated political messaging can outperform human persuasion. That matters, because most of us still believe we’re immune to manipulation. We’re not. Our brains are programmable, and AI is getting very good at writing the instructions.

That same playbook is showing up in the behavior of our own government. Federal agencies are now mimicking the president’s incendiary online style, deploying AI-generated images and rage-bait tactics that look disturbingly similar to extremist propaganda. It’s no coincidence that Oxford University Press crowned “rage bait” its word of the year. Outrage is no longer a side effect of the internet — it’s a design strategy.

What’s different now is the regulatory response. A coalition of 42 U.S. attorneys general has formally warned AI companies about psychologically harmful interactions, including emotional dependency and delusional attachment to chatbots and “companions.” This isn’t about fraud or physical injury. It’s about damage to people’s inner lives — something American law has traditionally been reluctant to touch.

At the same time, the Trump administration is trying to strip states of their power to regulate AI at all, even as states are the only ones meaningfully responding to these risks. That tension — between lived harm and promised utopia — is going to define the next few years.

We can all feel that something is wrong. Not just economically, but cognitively. Trust, truth, childhood development, shared reality — all of it feels under pressure. The question now is whether regulation catches up before those harms harden into the new normal.

Mentioned in This Article:

Britain caught in ‘space between peace and war’, says new head of MI6 | UK security and counter-terrorism | The Guardian

https://www.theguardian.com/uk-news/2025/dec/15/britain-caught-in-space-between-peace-and-war-new-head-of-mi6-warns

Islamic State group and other extremists are turning to AI | AP News

https://apnews.com/article/islamic-state-group-artificial-intelligence-deepfakes-ba201d23b91dbab95f6a8e7ad8b778d5

‘Virality, rumors and lies’: US federal agencies mimic Trump on social media | Donald Trump | The Guardian

https://www.theguardian.com/us-news/2025/dec/15/trump-agencies-style-social-media

US state attorneys-general demand better AI safeguards

https://www.ft.com/content/4f3161cc-b97a-496e-b74e-4d6d2467d59c
