
Stephen Auger Podcasts

Gen X Amplified with Adrion Porter

Adrion Porter | Founder, Mid-Career Mastery

Monthly
 
Gen X Amplified is the premier podcast dedicated to the powerful generation between the boomers and millennials. The podcast features valuable insights and conversations with Gen X leaders, professionals, and entrepreneurs including: Jon Fortt, Lindsey Pollak, Tara Jaye Frank, Carmine Gallo, Ada Calhoun, Melissa Proctor, and many others.
 
Some AIs learn by becoming expert judges, calculating a score for every possible clinical decision before making a move. We explain value-based methods, the 'AI Critic,' and why they excel at multiple-choice medicine but falter when the decisions are infinitely complex. #HealthAI #DigitalHealth #ArtificialIntelligence #ReinforcementLearning #DeepLe…
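To make the value-based idea concrete, here is a minimal sketch (the states, actions, and scores are invented for illustration, not taken from the episode): the "critic" assigns a value to every candidate action and the agent simply acts on the highest score.

```python
# Toy sketch of a value-based ("critic") approach: score every candidate
# action for the current state, then act greedily on those scores.
# States, actions, and numbers below are hypothetical.

q_values = {
    # state -> {action: estimated long-term value}
    "stable_patient": {"observe": 0.8, "start_drug_A": 0.3, "refer": 0.1},
    "deteriorating":  {"observe": 0.1, "start_drug_A": 0.7, "refer": 0.9},
}

def choose_action(state: str) -> str:
    """Pick the action with the highest estimated value (the 'judge' step)."""
    scores = q_values[state]
    return max(scores, key=scores.get)

print(choose_action("deteriorating"))  # -> "refer"
```

This only works because the action set is small and enumerable, which is exactly the "multiple-choice medicine" limitation the episode discusses.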
 
Epic recently unveiled Comet, a new AI model trained on 118 million patient records to predict future health events. The scale is unprecedented, and its initial ability to outperform specialised models is a huge leap forward for clinical AI. But what is it really learning from our messy, real-world data? In today's episode, we break down why C…
 
You can't teach an AI complex medicine by throwing it in at the deep end. Curriculum learning applies the principles of medical school to AI, training models on simple tasks before moving to complex ones. Find out why this matters for building safe and effective clinical AI. #HealthAI #DigitalHealth #ArtificialIntelligence #MachineLearning #Curricu…
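A minimal sketch of the curriculum idea, assuming a hypothetical difficulty score attached to each training case (the cases and scores below are invented):

```python
# Curriculum learning in miniature: order training cases from easy to hard
# and feed them to the model in that order.

training_cases = [
    {"case": "obvious pneumothorax",        "difficulty": 0.2},
    {"case": "subtle interstitial change",  "difficulty": 0.9},
    {"case": "classic lobar pneumonia",     "difficulty": 0.3},
    {"case": "early heart failure",         "difficulty": 0.7},
]

# The curriculum = sort by difficulty; easy cases come before hard ones.
curriculum = sorted(training_cases, key=lambda c: c["difficulty"])

for stage, case in enumerate(curriculum, start=1):
    # a real pipeline would call model.fit(...) here
    print(f"stage {stage}: training on {case['case']}")
```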
 
An AI, like a clinician, faces a constant choice: stick with the proven treatment or explore a novel approach? In this episode, we break down the 'exploration-exploitation' dilemma, a core concept in AI that has major implications for how we design and trust medical AI systems. #HealthAI #DigitalHealth #ArtificialIntelligence #ReinforcementLearning…
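As an illustration of the dilemma, here is the classic epsilon-greedy rule in a few lines of Python; the treatments and benefit estimates are invented for the example:

```python
import random

# Epsilon-greedy: a standard way to balance exploring novel options
# with exploiting the current best one.

estimated_benefit = {"proven_treatment": 0.70, "novel_treatment": 0.55}
epsilon = 0.1  # explore 10% of the time

def pick_treatment() -> str:
    if random.random() < epsilon:
        return random.choice(list(estimated_benefit))          # explore
    return max(estimated_benefit, key=estimated_benefit.get)   # exploit

print(pick_treatment())
```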
 
Paper: Scaling Large Language Models for Next-Generation Single-Cell Analysis by Rizvi et al.
Paper link: https://www.biorxiv.org/content/10.1101/2025.04.14.648850v2
This week we're covering recent research presenting C2S-Scale, a new model from researchers at Yale and Google that teaches Large Language Models the "language of the cell." By translat…
 
How does an AI like ChatGPT learn to be so helpful? The answer is "Reinforcement Learning," a powerful method of learning through trial-and-error, rewards, and punishments. In this special extended episode, we break down how reinforcement learning works and explain RLHF, the key technique used to train the language models that are transforming our …
 
We are rolling out powerful AI tools in hospitals and clinics at a breathtaking pace. But are they helping, or are they causing harm? A new report from the JAMA Summit on Artificial Intelligence reveals a key gap in our ability to answer that question. Featuring a stark warning from former FDA Commissioner Robert Califf, this episode breaks down wh…
 
Why build an AI model from scratch when you can give it a head start? And why waste expert time on easy cases? This episode explores two powerful strategies for efficient AI development. Discover how Transfer Learning gives your model a foundation of pre-existing knowledge and how Active Learning creates a smart feedback loop where the AI asks for …
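A small sketch of the active-learning selection step described above, with stubbed model probabilities (in a real system these would come from a model, often one given its head start via transfer learning):

```python
# Active learning in miniature: the model flags the cases it is least
# certain about, and only those go to the expert for labelling.
# The scan IDs and probabilities are invented.

unlabelled = {
    "scan_001": 0.97,  # model's predicted probability of disease
    "scan_002": 0.52,  # very uncertain
    "scan_003": 0.08,
    "scan_004": 0.46,  # very uncertain
}

def uncertainty(p: float) -> float:
    """0 = confident call, 1 = maximally uncertain (p near 0.5)."""
    return 1.0 - abs(p - 0.5) * 2

# Ask the expert to label only the k most uncertain cases.
k = 2
to_review = sorted(unlabelled, key=lambda s: uncertainty(unlabelled[s]), reverse=True)[:k]
print(to_review)  # -> ['scan_002', 'scan_004']
```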
 
How can an AI learn to read a medical scan without a perfect, expert-labeled dataset? In the real world, data is messy. This episode dives into three ingenious techniques (semi-supervised, self-supervised, and weak supervision) that allow AI to learn from a little bit of expert guidance, teach itself from unlabeled data, or make sense of noisy, imp…
 
The New England Journal of Medicine just featured an AI, 'Dr. CaBot,' as a guest expert in its legendary diagnostic challenge. This AI can not only find the right diagnosis but can reason and tell a compelling clinical story, sometimes more convincingly than human doctors. But does this mean Dr. AI is ready for the ward? We explore the gap between …
 
What if an AI could find patterns in patient data that we've never seen before? That's the power of "unsupervised learning", a type of AI that learns without an answer key. In this episode, we explain how this method works, and why it's a powerful tool for discovering new patient subtypes and advancing personalised medicine. #UnsupervisedLearning #…
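A minimal sketch of the idea using k-means clustering from scikit-learn; the patient features are invented and the choice of two clusters is arbitrary:

```python
import numpy as np
from sklearn.cluster import KMeans

# Unsupervised learning: group patients by their features alone,
# with no diagnostic labels ("no answer key").

# Columns: [age, systolic BP, HbA1c] -- made-up values
patients = np.array([
    [34, 118, 5.2],
    [36, 121, 5.4],
    [67, 155, 8.1],
    [71, 160, 7.9],
])

# Ask for two groups; the algorithm finds the structure on its own.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(patients)
print(labels)  # e.g. [0 0 1 1]: two candidate patient subtypes
```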
 
Top AI models are acing medical benchmarks, but are they actually ready for the clinic? A groundbreaking study reveals that impressive scores can hide a dangerous lack of real-world robustness. In this episode, we break down the ingenious "stress tests" that expose how AI can succeed on an exam for all the wrong reasons—from guessing answers withou…
 
An AI model doesn't just learn on its own; it follows a protocol. The settings of that protocol, like the "learning rate", are called hyperparameters. In this episode, we explain what these crucial settings are, why they are the 'art' of AI development, and how they help you judge the quality of a research paper. #Hyperparameters #AIinHealthcare #M…
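To make the distinction concrete, here is a tiny sketch separating hyperparameters (chosen by the developer before training) from a learned parameter (adjusted during training); the values are illustrative, not recommendations:

```python
# Hyperparameters: the settings of the training protocol.
hyperparameters = {
    "learning_rate": 0.001,  # how big each update step is
    "batch_size": 32,        # how many cases are seen per update
    "epochs": 20,            # how many passes over the training data
}

# A learned parameter and its current error signal (made-up numbers).
weight, gradient = 0.40, 2.5

# The learning rate directly scales every update the model makes:
weight -= hyperparameters["learning_rate"] * gradient
print(weight)  # 0.3975
```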
 
Imagine trying to find the lowest point in a valley while blindfolded. How would you do it? The same way an AI finds the best answer: one step at a time, always moving downhill. This process is called "gradient descent," and it's one of the engines that powers machine learning. In this episode, we explain how it works, what the "learning rate" is, …
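Here is the whole idea in a few lines of Python, minimising a one-dimensional "valley" f(x) = (x - 3)^2 whose lowest point sits at x = 3:

```python
# Minimal gradient descent: each step moves downhill along the slope,
# scaled by the learning rate.

def gradient(x: float) -> float:
    return 2 * (x - 3)            # derivative of (x - 3)^2

x = 0.0                           # start blindfolded, far from the bottom
learning_rate = 0.1

for step in range(50):
    x -= learning_rate * gradient(x)   # take one downhill step

print(round(x, 3))                # ~3.0: the bottom of the valley
```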
 
The UK has just launched a star-studded National Commission to rewrite the rulebook for AI in the NHS. The goal: faster, safer innovation for patients. It could be a powerful accelerator, provided it avoids becoming another talking shop lost in bureaucracy. #HealthAI #AIinHealthcare #DigitalHealth #NHS #HealthTech #Regulation #MHRA…
 
How does an AI model quantify a mistake? It uses a "loss function" – a scorecard that penalises different types of errors. In this episode, we explain what a loss function is, why it's not a one-size-fits-all tool, and how it reveals the true clinical priorities of any AI model. A crucial concept for critically appraising new research. #LossFunctio…
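A toy loss function that makes the point: by weighting a missed diagnosis far more heavily than a false alarm, the scorecard encodes a clinical priority. The weights below are illustrative only, not clinically validated:

```python
# Asymmetric loss: a false negative costs much more than a false positive.
FN_COST = 10.0   # missed disease
FP_COST = 1.0    # unnecessary follow-up

def loss(y_true: int, y_pred: int) -> float:
    if y_true == 1 and y_pred == 0:
        return FN_COST
    if y_true == 0 and y_pred == 1:
        return FP_COST
    return 0.0

predictions = [(1, 0), (0, 1), (1, 1), (0, 0)]  # (truth, prediction) pairs
print(sum(loss(t, p) for t, p in predictions))  # 11.0: dominated by the miss
```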
 
How does an AI model actually learn to spot disease on a scan? It all comes down to one fundamental goal: minimising error. In this episode, we kick off a new set of episodes on the mechanics of machine learning by explaining this core principle with a simple clinical analogy that will change how you look at AI. Understanding this is the first step…
 
Shmatko, A., Jung, A.W., Gaurav, K. et al. Learning the natural history of human disease with generative transformers. Nature (2025).
Link to paper: https://www.nature.com/articles/s41586-025-09529-3
What if an AI could forecast your health like the weather? A groundbreaking new model called Delphi-2M, published in Nature, claims to do just that — …
 
We break down a landmark UCL study on the NHS's £21m programme to deploy AI in chest diagnostics. The researchers uncover the real reasons for significant delays, moving beyond the technology to the critical, real-world barriers: staff capacity, fragmented IT infrastructure, and complex governance. Find out why dedicated project management is the secret to su…
 
Ever been handed a patient's CT scan on a CD-ROM and wondered why medical systems struggle to communicate? The problem is that they need to speak the same language. This episode decodes the three essential standards of medical data exchange. We break down DICOM (the "courier package" for images), HL7 (the "digital fax machine" for classic hospital …
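To show why the shared language matters, here is a simplified, invented HL7 v2 fragment parsed with nothing but string splitting; real messages are longer, but the field-by-field structure is the point:

```python
# A simplified, invented HL7 v2 message: plain delimited text that only
# systems agreeing on the segment and field layout can interpret.

hl7_message = "\r".join([
    "MSH|^~\\&|LAB|GENHOSP|EHR|GENHOSP|202501011200||ORU^R01|0001|P|2.3",
    "PID|1||12345^^^GENHOSP^MR||DOE^JANE",
    "OBX|1|NM|K^Potassium||6.2|mmol/L|3.5-5.3|H",
])

for segment in hl7_message.split("\r"):
    fields = segment.split("|")
    if fields[0] == "OBX":
        # OBX layout: ...|identifier|sub-ID|value|units|reference range|flag
        test = fields[3].split("^")[1]
        print(f"{test}: {fields[5]} {fields[6]} (flag {fields[8]})")
        # -> Potassium: 6.2 mmol/L (flag H)
```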
 
How do we know if a medical AI has truly learned to spot disease, or just memorised the answers to its practice questions? The same way we evaluate a trainee: with a final, unseen exam. This crucial process involves splitting data into three sets: training data (the textbook), validation data (the mock exam), and test data (the final exam). In this…
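A minimal sketch of the three-way split using scikit-learn on dummy data (the sizes and random seeds are arbitrary):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(-1, 1)   # 100 dummy cases
y = np.random.randint(0, 2, 100)    # dummy labels

# Hold out 20% as the final, unseen exam (test set).
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Split the remainder into the textbook (training) and the mock exam (validation).
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 60 20 20
```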
 
How do you teach an AI to read a chest X-ray? The same way a consultant teaches a resident doctor on a ward round: you point, you trace, and you provide the correct answer. This is data annotation, the meticulous, human-led process of "teaching" an algorithm by labelling thousands of examples. In this episode of The Health AI Brief, we explain why …
 
The headlines were everywhere: a revolutionary AI stethoscope that could more than double the detection of heart failure in GP clinics. The reported results from the TRICORDER trial sound transformative. But what happens when you look beyond the press release? Was it truly the AI that improved diagnosis, or did the trial simply prompt more testing?…
 
A patient's record is a chaotic mix of notes, lab test results, and codes. We can navigate the mess, but how can an AI? The answer lies in data cleaning and preprocessing – the most critical, yet unglamorous, step in building medical AI. This episode of The Health AI Brief explains why this process is like meticulously preparing ingredients for a c…
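A small, invented example of that ingredient preparation using pandas: harmonising categories, filling a missing lab value, and converting mixed units before any model sees the data:

```python
import pandas as pd

# Made-up records with the usual mess: inconsistent coding, a missing
# value, and mixed units in the same column.
records = pd.DataFrame({
    "sex":       ["M", "male", "F", None],
    "potassium": [4.1, 6.2, None, 3.9],          # mmol/L
    "weight":    [82.0, 176.0, 70.0, 65.0],      # mixed kg / lb entries
})

records["sex"] = records["sex"].str.upper().str[0]                         # "male" -> "M"
records["potassium"] = records["potassium"].fillna(records["potassium"].median())
records.loc[records["weight"] > 150, "weight"] /= 2.205                    # lb -> kg

print(records)
```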
 
A critically high potassium result arrives for a patient who looks completely well. Your first instinct isn't to treat, but to question the sample. Should we be just as sceptical of the data behind medical AI? This episode of The Health AI Brief dives into the most fundamental rule of artificial intelligence: junk in, junk out. Dr. Stephen uses the…
 
Some estimates indicate up to 80% of clinical data is "unstructured" narrative. It’s messy, complex, and where the real patient story lives. This episode explains how AI is finally unlocking this treasure trove of information and what it means for your daily practice. #HealthTech #ArtificialIntelligence #ClinicalPractice #MedicalInnovation #EHR #Pa…
 
Link to the preprint discussed: https://arxiv.org/pdf/2505.10251
Link to the project with explanations: https://h-surgical-robot-transformer.github.io/
A surgical robot that corrects its own mistakes sounds like science fiction. New research from Johns Hopkins & Stanford makes it a reality. But is it ready for the operating room? The…
 
AI in medicine has reached a clear tipping point. But what are the specific factors driving this rapid progress? This episode breaks down the three essential pillars: the explosion in clinical data, massive leaps in computation, and recent, powerful breakthroughs in algorithms. We explore how mature algorithms from outside of medicine, particularly…
 
A new study from The Lancet has sent a ripple of anxiety through the clinical AI community. The paper suggests that AI tools designed to help doctors may actually cause their skills to decline over time. But is the evidence as solid as the headlines suggest? Is AI dependency a real threat to patient safety? #HealthAI #ArtificialIntelligence #C…
 
AI that can do, not just tell. We explore AI Agents: systems that go beyond diagnosis to take action. This leap forward promises to tackle our admin overload but brings a new level of clinical risk. Are we ready?
Music generated by Mubert: https://mubert.com/render
AI Agents, Healthcare AI, Clinical Safety, Physician Burnout, Automation, Patient Saf…
 
We see AI that can read an ECG, but we also hear headlines about a future superintelligence. How do these two realities connect? In this episode, we provide an essential reality check. We break down the crucial difference between the AI we have in our clinics today (Narrow AI) and the AI of science fiction (AGI & ASI). Understanding this spectrum i…
 
You’ve seen the headlines about OpenAI’s new model, but much of the coverage is confusing 'open-weights' with 'open-source'. They are not the same, and the distinction is relevant for patient data security and clinical trust. In this episode of The Health AI Brief, we decode some of the jargon. Learn: - The fundamental difference between open-weigh…
 
You hear about new AI models having "billions of parameters." It sounds impossibly complex, but the core idea is surprisingly simple and it's the single most important concept for understanding how an AI actually works. These parameters determine an AI's capabilities, its limitations, and its potential for bias. In this episode of The Health AI Bri…
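A rough illustration of where those counts come from: every connection weight and bias is one adjustable parameter, so even a tiny fully connected network (the layer sizes below are arbitrary) adds up quickly:

```python
# Count the adjustable parameters (weights + biases) of a small
# fully connected network.

layer_sizes = [12, 64, 64, 1]   # input features -> two hidden layers -> output

total = sum(
    n_in * n_out + n_out        # weights plus biases for each layer
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
)
print(total)  # 5057 parameters for even this tiny network
```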
 
How can an AI analyse a CT scan or pathology slide with expert-level accuracy? The answer is Deep Learning—the engine behind the revolution in medical imaging. In this episode, we explain how these 'deep' neural networks teach themselves to see complex patterns, much like our own visual cortex processes information from simple edges to complex obje…
 
Today we're discussing the AI-powered physiotherapy app from Flok Health, which has seen widespread media coverage and has gained both CQC and MHRA approval, promising to slash waiting lists for back pain. The goal is compelling: automate care for straightforward cases to free up human clinicians for complex ones. But what does the evidence really say? …
 
When we say a machine "learns" from data, what's actually happening? What is the engine doing? In this episode, we break down the fundamental building block of modern AI: the Neural Network. We explain it as a logical chain for synthesising information—from raw data like ECG signals in the 'input layer', to abstract concepts like 'high-risk' in the…
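A minimal forward pass through that chain in NumPy, with random (untrained) weights purely for illustration: raw input numbers go in at one end, an abstract risk score comes out at the other.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.array([0.8, 72.0, 1.2])                  # a few ECG-derived features (input layer)

W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # hidden layer: 4 units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer: 1 risk score

hidden = np.maximum(0, x @ W1 + b1)             # ReLU: intermediate, abstract features
risk = 1 / (1 + np.exp(-(hidden @ W2 + b2)))    # sigmoid squashes to 0..1

print(float(risk[0]))                           # an (untrained, meaningless) risk score
```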
 
In our last episode, we described Machine Learning as the engine that powers AI. So, what is the specific output of that engine? What is created when an AI is "trained"? The answer is the "model." This is the end product of the training process—the distilled knowledge, captured in digital form. In this episode, we explain what an AI model is and wh…
 
A Deep Dive into the OpenAI & Penda Health AI Study
In this episode, we provide a critical analysis of the highly publicised paper, "AI-based Clinical Decision Support for Primary Care: A Real-World Study." We go beyond the abstract's impressive claims to dissect the real-world implications:
- The Design: Acknowledging the brilliant workflow integ…
 
In the last episode we called AI the "toolbox." Now, we're looking at the most powerful tool inside: Machine Learning (ML). So, what’s the real difference? Think of it this way: AI is the destination, but Machine Learning is the engine getting us there. In this episode, we break down the simple but powerful "Train, Detect, Predict" framework. This …
 
AI is "better than a radiologist." AI is writing your clinic notes. The term 'AI' is everywhere in medicine, but what does it really mean? It's easy to feel like it's just a confusing buzzword. In Part 1 of our new series, we cut through the hype to give you a clear, simple definition. Think of AI as the broad toolbox aiming to mimic human intellig…
 
On this very special episode of Gen X Amplified, I am chatting with writer, 2x creative entrepreneur, and former multi-agency brand strategist Chauncey Zalkin. As a podcaster, Chauncey is also the host of the groundbreaking podcast Actual People, which is an unfiltered exploration of individual and societal shifts in a world undergoing tremendous c…
 
For this very special episode of Gen X Amplified, I have the pleasure of being joined by Ebony Flake, Business Journalist for ESSENCE. What makes this conversation extra amazing is that Ebony graciously featured me in a story she wrote last year as a full-page profile for their annual Men's print Issue. The story was titled "Gen X Leaders Are Domin…
 
On this episode of Gen X Amplified, I am joined by Stephen Bailey. Stephen is the CEO and co-founder of ExecOnline, a corporate training company that partners with the world's top business schools to offer leadership development on demand to enterprise organizations. ExecOnline is on a mission to connect all leaders to their future potential by dis…
 
On this episode of Gen X Amplified, I am speaking with global leader, speaker, and workplace strategist Daisy Auger-Domínguez, who is the author of the book "Burnt Out to Lit Up: How to Reignite the Joy of Leading People." With years of experience leading global human capital practices at companies like Google, Disney and Vice Media, she equips manager…
 
On this episode of Gen X Amplified, I am welcoming back serial entrepreneur Ian Schafer, repeat guest and friend of the podcast, for another round of "Gen Xceptional" wisdom and inspiration. Ian is the President and Co-Founder of Ensemble, a next-generation branded entertainment studio that he co-founded with acclaimed producer, writer, actress, an…
 
On this episode of Gen X Amplified, we have a very special episode featuring an amazing "Gen Xceptional" chat with Denise Hamilton. Denise is a nationally recognized Diversity & Inclusion leader, and is the Founder and CEO of WatchHerWork, a digital learning platform for professional women. She is also the author of the bestselling book "Indivisibl…
 
On this episode of Gen X Amplified, we are back for another #FabulousOver40 Fireside Chat edition, featuring my conversation with Cate Luzio, who is the Founder and CEO of Luminary, a membership-based global education and professional networking platform for women and their allies. And Cate is also one of the shining stars I previously featured in …
 
On this episode of Gen X Amplified, we have yes…another #FabulousOver40 Fireside Chat edition. And it's with my good friend, the fabulous and phenomenal Tara Jaye Frank. And we are going to be unpacking her latest and greatest book The Waymakers: Clearing the Path to Workplace Equity with Competence and Confidence, which in my opinion is a MUST REA…
 