Frontier Centre Podcasts

AI Safety Newsletter

Center for AI Safety

Monthly
 
Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. This podcast also contains narrations of some of our publications.

About us: The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic p ...
 
Clinical Changemakers

Inspiring Clinicians to Thrive

Monthly
 
Clinicians have trained in the art and science of medicine, and yet feel powerless to make a meaningful impact on the healthcare system. Clinical Changemakers is the podcast looking to bridge this gap by exploring inspiring stories of leadership, innovation and so much more. To learn more and join the conversation, visit: www.clinicalchangemakers.com
 
 
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition we discuss President Trump's executive order …
 
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition we discuss the new AI Dashboard, recent front…
 
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition: A new benchmark measures AI automation; 50,000 people, including top AI scientists, sign an open letter calling for a superintelligence moratorium. Listen to the AI Safety Newsletter for fr…
 
In this edition: A new bill in the Senate would hold AI companies liable for harms their products create; China tightens its export controls on rare earth metals; a definition of AGI. As a reminder, we’re hiring a writer for the newsletter. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. Senate Bill Would Establish Liabili…
 
"AI has the potential to re-ontologize healthcare—to completely redesign what we consider to be a disease, what we consider to be a disability, and how we organise care. But we need to decide what good healthcare actually means before we AI-ify everything." — Dr Jessica Morley In this episode of Clinical Changemakers, Dr Jessica Morley, an AI ethic…
  continue reading
 
In this edition: California's legislature sent SB-53—the ‘Transparency in Frontier Artificial Intelligence Act’—to Governor Newsom's desk. If signed into law, California would become the first US state to regulate catastrophic risk. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. A note from Corin: I’m leaving the AI Safet…
 
"We want a two-way flow of communication so that we have a better understanding through the levels of an organisation up through those levels of what's actually happening and we can make decisions closer to the ground." — Dr. Sharen Paine In this episode of Clinical Changemakers, Dr. Sharen Paine, a systems thinking expert with a doctorate in busin…
  continue reading
 
Also: Meta's Chatbot Policies Prompt Backlash Amid AI Reorganization; China Reverses Course on Nvidia H20 Purchases. In this edition: Big tech launches a $100 million pro-AI super PAC; Meta's chatbot policies prompt congressional scrutiny amid the company's AI reorganization; China reverses course on buying Nvidia H20 chips after comments by Secret…
 
"I think judgment, I've been honing in on that word more frequently recently because I feel like the judgment piece is the piece that feels particularly like human in this decision." — Dr Graham Walker In this episode of Clinical Changemakers, Dr Graham Walker, an ER doctor and AI healthcare leader, discusses his role at Kaiser Permanente and the c…
  continue reading
 
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition: OpenAI releases GPT-5. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. OpenAI Releases GPT-5: Ever since GPT-4's release in March 2023 marked a step-change improvem…
 
Also: ChatGPT Agent and IMO Gold. In this edition: The Trump Administration publishes its AI Action Plan; OpenAI released ChatGPT Agent and announced that an experimental model achieved gold medal-level performance on the 2025 International Mathematical Olympiad. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. The AI Actio…
 
Plus: Meta Superintelligence Labs. Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition: The EU published a General-Purpose AI Code of Practice for AI providers, and Meta is spending billions revamping its superintelligence development efforts…
 
Plus: Judges Split on Whether Training AI on Copyrighted Material is Fair Use. In this edition: The Senate removes a provision from Republicans' “Big Beautiful Bill” aimed at restricting states from regulating AI; two federal judges split on whether training AI on copyrighted books is fair use. Listen to the AI Safety Newsletter for free on Spotify…
 
“You know, what if they were to actually put its [AI] mind to a science of practical compassion for everybody?… if the right machines were to come along and help us do it, that's going to be a fabulous thing.” Dr Richard Lehman is a retired GP from Oxfordshire who had a "ringside seat" to the birth of evidence-based medicine, previously held acade…
 
"When the frontline feels we're actually offering, 'what do you need? What are the resources we can help?' We'll co-create. Of course, they don't have the control of the resources or some decisions, but that's what executives can do." Dr. Raj Srivastava is a pediatrician, health system leader, and implementation science researcher, serving as Chief…
 
In this edition: The New York Legislature passes an act regulating frontier AI—but it may not be signed into law for some time. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. The RAISE Act: New York may soon become the first state to regulate frontier AI systems. On June 12, the state's legislature passed the Responsible A…
 
"At the end of the day, we don't really invest as much into products as we do into people. It's the people behind the products that are going to make the product successful" Dr. Amandeep Hansra is a general practitioner turned health tech entrepreneur, advisor, and investor, founder of the Creative Careers in Medicine community (with over 25,000 me…
  continue reading
 
"The enterprise of medicine has both scientific and moral dimensions, and they're inextricably balanced" Dr Vikas Saini is a Cardiologist and President of the Lown Institute, where he leads a non-partisan think tank advocating bold ideas for a just and caring system for health. With a unique background combining philosophy and medicine, Dr. Saini h…
  continue reading
 
Plus, Opus 4 Demonstrates the Fragility of Voluntary Governance. In this edition: Google released a frontier video generation model at its annual developer conference; Anthropic's Claude Opus 4 demonstrates the danger of relying on voluntary governance. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. Google Releases Veo 3 …
 
"The way to prevent cascades is to keep your eye on the goal, to understand what you're trying to accomplish and not to go down rabbit holes chasing abnormalities, but rather to be focused on the outcome that you're looking for." Dr James W. Mold is a family medicine physician, geriatrician, researcher and academic with a Master of Public Health de…
 
"We need to pay more attention to the networks that operate between people and the networks that operate between organisations.” Professor Ingrid Nembhard is an Organisational Behaviour expert in healthcare systems, based at the Wharton School of the University of Pennsylvania. Her research focuses on how characteristics of health care organisation…
 
Plus, Bills on Whistleblower Protections, Chip Location Verification, and State Preemption. In this edition: The Trump Administration rescinds the Biden-era AI diffusion rule and sells AI chips to the UAE and Saudi Arabia; federal lawmakers propose legislation on AI whistleblowers, location verification for AI chips, and prohibiting states from reg…
 
Plus, AI Safety Collaboration in Singapore. In this edition: OpenAI claims an updated restructure plan would preserve nonprofit control; a global coalition meets in Singapore to propose a research agenda for AI safety. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. OpenAI Updates Restructure Plan: On May 5th, OpenAI announ…
 
"Epic charges researchers to access the data that the same Institute has spent thousands of hours storing. They're not a good custodian of data... they're literally locking your data away and holding it for ransom." — Dr. Sidharth Ramesh Listen now on Apple, Spotify, YouTube or wherever you get your podcasts. Dr Sidharth Ramesh is a medical doctor,…
  continue reading
 
Plus, SafeBench Winners. Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition: Experts and ex-employees urge the Attorneys General of California and Delaware to block OpenAI's for-profit restructure; CAIS announces the winners of its safety be…
 
“The market was flooded with all these new tools and technologies that people were using, but no real evidence base behind them. Now we’re in this space where we need evidence.” — Dr Saira Ghafur. Listen now on Apple, Spotify, YouTube or wherever you get your podcasts. Dr Saira Ghafur is a medical doctor specialising in respiratory medicine, an acade…
 
Plus, AI-Enabled Coups. In this edition: AI now outperforms human experts in specialized virology knowledge in a new benchmark; a new report explores the risk of AI-enabled coups. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. An Expert Virology Benchmark: A team of researchers (primarily from SecureBio and CAIS) has devel…
 
Plus, AI 2027. In this newsletter, we cover the launch of AI Frontiers, a new forum for expert commentary on the future of AI. We also discuss AI 2027, a detailed scenario describing how artificial superintelligence might emerge in just a few years. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. AI Frontiers: Last week, CA…
 
"I think of quality as what is the patient's ‘job to be done’. What are they trying to get out of this encounter or from their help? What are they trying to achieve? Making quality about that." — Dr Raj Behal. Listen now on Apple, Spotify, YouTube or wherever you get your podcasts. Dr. Raj Behal is a medical doctor and Chief Quality Officer at Amaz…
 
Plus, Detecting Misbehavior in Reasoning Models. In this newsletter, we cover AI companies’ responses to the federal government's request for information on the development of an AI Action Plan. We also discuss an OpenAI paper on detecting misbehavior in reasoning models by monitoring their chains of thought. Listen to the AI Safety Newsletter for …
 
 
"The greatest legacy we can leave as this generation of leaders is to make meaningful progress in narrowing those health inequalities between communities." — Prof Bola Owolabi. Episode Overview In this episode, Professor Bola Owolabi, GP and National Director for Healthcare Inequalities at NHS England, shares her insights on health inequalities in …
  continue reading
 
"The health impact of corporate power isn't just about products—it's about how commercial interests shape the entire landscape of health policy and research." — Dr Nason Maani Listen now on Apple, Spotify, YouTube or wherever you get your podcasts. Dr Nason Maani, lecturer in Inequalities and Global Health Policy at the University of Edinburgh, aut…
  continue reading
 
Plus, Measuring AI Honesty. Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this newsletter, we discuss two recent papers: a policy paper on national security strategy, and a technical paper on measuring honesty in AI systems. Listen to the AI Safety …
 
Superintelligence is destabilizing since it threatens other states’ survival—it could be weaponized, or states may lose control of it. Attempts to build superintelligence may face threats by rival states—creating a deterrence regime called Mutual Assured AI Malfunction (MAIM). In this paper, Dan Hendrycks, Eric Schmidt, and Alexandr Wang detail a s…
 
 
"I'd rather have people understand why they should believe something, not just that they should." — Dr Aaron Carroll Listen now on Apple, Spotify, YouTube and or wherever you get your podcasts. Dr Aaron Carroll, pediatrician, professor, president and CEO of Academy Health, and renowned science communicator, discusses the art and science of effectiv…
  continue reading
 
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. In this newsletter, we explore two recent papers from CAIS. We’d also like to highlight that CAIS is hiring for editorial and writin…
 
Plus, State-Sponsored AI Cyberattacks. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. Reasoning Models: DeepSeek-R1 has been one of the most significant model releases since ChatGPT. After its release, DeepSeek's app quickly rose to the top of Apple's most downloaded chart and NVIDIA saw a 17% stock decline. In this st…
 
Plus, Humanity's Last Exam, and the AI Safety, Ethics, and Society Course. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. The Transition: The transition from the Biden to Trump administrations saw a flurry of executive activity on AI policy, with Biden signing several last-minute executive orders and Trump revoking Biden's…
 
As 2024 draws to a close, we want to thank you for your continued support for AI safety and review what we’ve been able to accomplish. In this special-edition newsletter, we highlight some of our most important projects from the year. The mission of the Center for AI Safety is to reduce societal-scale risks from AI. We focus on three pillars of wor…
 
Plus, Chinese researchers used Llama to create a military tool for the PLA, a Google AI system discovered a zero-day cybersecurity vulnerability, and Complex Systems. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. The Trump Circle on AI Safety: The incoming Trump administration is likely to significantly alter the US gover…
 
Plus, AI and Job Displacement, and AI Takes Over the Nobels. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. White House Issues First National Security Memo on AI: On October 24, 2024, the White House issued the first National Security Memorandum (NSM) on Artificial Intelligence, accompanied by a Framework to Advance AI Gov…
 
Plus, OpenAI's o1, and AI Governance Summary. Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. Newsom Vetoes SB 1047: On Sunday, Governor Newsom vetoed California's Senate Bill 1047 …
 
Plus, Ranking Models by Susceptibility to Jailbreaking, and Machine Ethics. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. The Next Generation of Compute Scale: AI development is on the cusp of a dramatic expansion in compute scale. Recent developments across multiple fronts—from chip manufacturing to power infrastructure—…
 
Plus, NVIDIA Delays Chip Production, and Do AI Safety Benchmarks Actually Measure Safety? Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. SB 1047, the Most-Discussed California AI Legislation: California's Senate Bill 1047 has sparked discussion over AI regulation. While state bills often fly under the radar, SB 1047 has g…
 
Plus, Safety Engineering Overview. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. Implications of a Trump administration for AI policy: Trump named Ohio Senator J.D. Vance—an AI regulation skeptic—as his pick for vice president. This choice sheds light on the AI policy landscape under a future Trump administration. In this…
 
Plus, “Circuit Breakers” for AI systems, and updates on China's AI industry. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. Supreme Court Decision Could Limit Federal Ability to Regulate AI: In a recent decision, the Supreme Court overruled the 1984 precedent Chevron v. Natural Resources Defense Council. In this story, we …
 
US Launches Antitrust Investigations: The U.S. Government has launched antitrust investigations into Nvidia, OpenAI, and Microsoft. The U.S. Department of Justice (DOJ) and Federal Trade Commission (FTC) have agreed to investigate potential antitrust violations by the three companies, the New York Times reported. The DOJ will lead the investigation …
 
Voluntary Commitments are Insufficient: AI companies agree to RSPs in Seoul. Following the second AI Global Summit held in Seoul, the UK and Republic of Korea governments announced that 16 major technology organizations, including Amazon, Google, Meta, Microsoft, OpenAI, and xAI, have agreed to a new set of Frontier AI Safety Commitments. Some commit…
 