Ep. 132: How much should we worry about the invasiveness of team support AI?

41:22

In this episode, David and Drew discuss a research paper from Germany on AI-powered team support and monitoring systems. The findings reveal that invasiveness drives negative reactions more than the stated purpose of monitoring, with participants showing skepticism about AI's ability to accurately measure teamwork quality. The hosts emphasize that even well-intentioned monitoring systems introduce psychosocial hazards and stress, so organizations must carefully balance potential benefits against worker well-being when implementing AI-powered team support systems.

Discussion Points:

  • (00:00) AI team monitoring invasiveness, AI workplace surveillance
  • (03:16) Research paper introduction and authors from Germany
  • (04:48) Previous research on electronic monitoring and employee reactions
  • (08:49) Study one methodology with invasiveness levels and video scenarios
  • (12:12) Study two simulation with teams designing fitness trackers
  • (17:46) Study one findings on invasiveness and fairness perceptions
  • (24:07) Study two results and participant skepticism about monitoring accuracy
  • (32:48) Practical takeaways on invasiveness levels and stakeholder management
  • (40:42) Final recommendations on balancing benefits and invasiveness
  • Like and follow, send us your comments and suggestions for future show topics!

Quotes:

Drew Rae: "The moment you divide it up and you just try to analyze the human behavior or analyze the automation, you lose the understanding of where the safety is coming from and what's necessary for it to be safe."

David Provan: "We actually don't think about that automation in the context of the overall system and all of the interfaces. We look at AI as AI when deploying and introducing it, but we don't do any kind of comprehensive analysis of what all of the flow-on implications, interfaces, and potentially unintended consequences will be for the system, not necessarily just the technology or automation itself."

Drew Rae: "People are going to have reactions. And those reactions are gonna have a big impact on their willingness for you to do this in the first place… You can't just force it onto them… All the things that you're trying to improve might actually get worse because of the monitoring."

David Provan: "But I think this paper makes a really good argument, which is that our automated systems should actually be far more flexible than that. So I might be able to adjust its functioning. If I know enough about how it's functioning and why it's functioning, and I realize that the automation can't understand context and situation, then I should be able to make adjustments."

Drew Rae: "Most people don't mind if their car is giving them feedback on their driving, but most people don't like it if their car is phoning home to their boss with information about their driving."

Resources:

Intelligent automated systems to support human teamwork: perceived invasiveness impacts team members’ psychological reactions negatively
