Content provided by Reg/Tech Lab - HKU-SCF FinTech Academy - Asia Global Institute - HKU-edX Professional Certificate in FinTech. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Reg/Tech Lab - HKU-SCF FinTech Academy - Asia Global Institute - HKU-edX Professional Certificate in FinTech or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Ep 69 - Human Intelligence vs. Machine Judgment

Duration: 59:42
 

Episode #69 with Nigel Morris-Cotterill and Patrick Dransfield 🎧

In this two-part episode of Regulatory Ramblings, host Ajay Shamdasani is joined by two seasoned professionals who examine artificial intelligence from very different, yet deeply complementary angles: cultural, philosophical, and ethical on one hand; legal, compliance, and technical on the other. The result is a wide-ranging, thought-provoking conversation about the role of human intelligence in an increasingly automated world—and the dangers of outsourcing critical decisions to machines.

In the first segment, Patrick Dransfield—a legal marketing expert, author, and co-founder of the Managing Partners Club—discusses his essay Watched Over by Machines of Loving Grace, a title borrowed from a Richard Brautigan poem. Patrick, who holds a master’s degree in Chinese history, politics, and anthropology from SOAS (University of London) and a joint honours degree in English and History of Art from the University of Leeds, invites listeners to consider not only what AI is, but what it means to be human in a time of rapid technological change. Drawing on cultural history, classical Chinese philosophy, and his own professional observations, he contrasts Eastern and Western perspectives on the self, society, and intelligence. He explores the fundamental importance of human skills—such as relationship-building and generosity—in legal practice and business development, and how AI cannot replicate or replace these core human capacities.

Patrick argues that while the West often approaches AI with a moral and even quasi-religious fear of transgression—concerned with issues like sentience and ethical boundaries—China’s philosophical traditions tend to frame AI as a pragmatic tool, leading to more open development approaches, exemplified by open-source platforms such as DeepSeek. He also critiques the prevailing “billable hour” model in law, suggesting that younger professionals will struggle most as automation reshapes entry-level tasks. Ultimately, Patrick makes a strong case for reviving and redefining human intelligence as the foundation upon which any meaningful use of AI must be built.

In the second segment, Nigel Morris-Cotterill—a veteran solicitor turned financial crime and compliance expert—discusses his provocative article, Computers Are Mechanized Psychopaths. He explains why this title is not just attention-grabbing, but literally accurate: computers, by their very architecture, lack empathy, nuance, and the capacity for moral reasoning. Yet society is increasingly empowering them to make life-altering decisions—about financial transactions, legal violations, online speech, and more.

Nigel warns against the blind trust placed in algorithms, which are often built by developers with limited contextual awareness or cultural sensitivity. He critiques the myth of “machine learning,” arguing that what’s being sold as intelligence is often just a large-scale execution of yes/no decision trees. He shares examples of how poorly applied compliance systems can lead to innocent people being debanked or flagged as suspicious based on flawed logic—without human intervention to correct these mistakes. His call to action is clear: AI should never be allowed to make unreviewed, consequential decisions about people’s lives.

Together, these two interviews offer a sobering but insightful view into the current state of AI and its intersection with law, culture, and ethics. While Dransfield emphasizes the need to understand ourselves before we build better machines, Morris-Cotterill reminds us that those machines—no matter how sophisticated—must always remain subordinate to human judgment.

HKU FinTech is the leading fintech research and education hub in Asia. Learn more at www.hkufintech.com.


Chapters

1. Ep 69 - Human Intelligence vs. Machine Judgment (00:00:00)

2. Patrick Dransfield on AI, Culture, and Human Intelligence (00:01:45)

3. Why Write About AI: Ancient Wisdom Meets Digital Futures (00:03:04)

4. Reskilling and Human Relevance in an AI Age (00:06:05)

5. What the West Gets Wrong About Intelligence (00:09:53)

6. Will AI Break the Law Firm Business Model? (00:14:23)

7. Recapping the Central Thesis (00:16:44)

8. The Self in the Age of AI: East-West Contrasts (00:17:59)

9. Why China Isn’t Panicking About AI (00:20:34)

10. Nigel Morris-Cotterill on Mechanized Psychopaths and AI Fallacies (00:24:49)

11. Why AI Systems Are “Mechanized Psychopaths” (00:26:32)

12. Computers Can’t Empathize—And That’s the Problem (00:27:33)

13. Woke Code and Binary Judgment (00:30:07)

14. Algorithmic Tyranny: When AI Flags Idiots and Meatballs as Hate (00:33:15)

15. Financial Systems, Suspicion, and Machine Error (00:43:46)

16. Empathy, Algorithms, and the Limits of Logic (00:46:34)

17. Machine Learning, Illusions, and Systemic Risk (00:51:59)

18. When Code Misjudges: From Faulty Flags to Real-World Harm (00:56:30)

19. False Narratives, Real Consequences (00:58:03)

78 episodes
