Frontier Centre Podcasts
The Frontier Centre is an independent Canadian think tank that conducts research to develop effective and meaningful ideas for public policy reform.
Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. This podcast also contains narrations of some of our publications. About us: The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic p ...
Clinicians have trained in the art and science of medicine, and yet feel powerless to make a meaningful impact on the healthcare system. Clinical Changemakers is the podcast looking to bridge this gap by exploring inspiring stories of leadership, innovation and so much more. To learn more and join the conversation, visit: www.clinicalchangemakers.com
AISN #67: Trump’s preemption order, H200s go to China, and new frontier AI from OpenAI and DeepSeek
11:38
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition we discuss President Trump's executive order …
AISN #66: Evaluating Frontier Models, New Gemini and Claude, Preemption is Back
12:27
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition we discuss the new AI Dashboard, recent front…
AISN #65: Measuring Automation and Superintelligence Moratorium Letter
6:29
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition: A new benchmark measures AI automation; 50,000 people, including top AI scientists, sign an open letter calling for a superintelligence moratorium. Listen to the AI Safety Newsletter for fr…
AISN #64: New AGI Definition and Senate Bill Would Establish Liability for AI Harms
10:52
In this edition: A new bill in the Senate would hold AI companies liable for harms their products create; China tightens its export controls on rare earth metals; a definition of AGI. As a reminder, we’re hiring a writer for the newsletter. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. Senate Bill Would Establish Liabili…
The Ethics of AI in Healthcare: Beyond the Stochastic Parrot | Dr. Jessica Morley (Yale Digital Ethics Centre)
1:03:14
"AI has the potential to re-ontologize healthcare—to completely redesign what we consider to be a disease, what we consider to be a disability, and how we organise care. But we need to decide what good healthcare actually means before we AI-ify everything." — Dr Jessica Morley In this episode of Clinical Changemakers, Dr Jessica Morley, an AI ethic…
AISN #63: California’s SB-53 Passes the Legislature
9:11
In this edition: California's legislature sent SB-53—the ‘Transparency in Frontier Artificial Intelligence Act’—to Governor Newsom's desk. If signed into law, California would become the first US state to regulate catastrophic risk. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. A note from Corin: I’m leaving the AI Safet…
How Systems Thinking Can Fix Healthcare's Organizational Chaos | Dr Sharen Paine
1:11:55
"We want a two-way flow of communication so that we have a better understanding through the levels of an organisation of what's actually happening, and we can make decisions closer to the ground." — Dr. Sharen Paine In this episode of Clinical Changemakers, Dr. Sharen Paine, a systems thinking expert with a doctorate in busin…
AISN #62: Big Tech Launches $100 Million pro-AI Super PAC
10:16
Also: Meta's Chatbot Policies Prompt Backlash Amid AI Reorganization; China Reverses Course on Nvidia H20 Purchases. In this edition: Big tech launches a $100 million pro-AI super PAC; Meta's chatbot policies prompt congressional scrutiny amid the company's AI reorganization; China reverses course on buying Nvidia H20 chips after comments by Secret…
AI's Jagged Frontier and Why Human Judgement Still Matters | Dr Graham Walker (Kaiser Permanente)
54:47
"I think judgment, I've been honing in on that word more frequently recently, because I feel like the judgment piece is the piece that feels particularly human in this decision." — Dr Graham Walker In this episode of Clinical Changemakers, Dr Graham Walker, an ER doctor and AI healthcare leader, discusses his role at Kaiser Permanente and the c…
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition: OpenAI releases GPT-5. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. OpenAI Releases GPT-5 Ever since GPT-4's release in March 2023 marked a step-change improvem…
Also: ChatGPT Agent and IMO Gold. In this edition: The Trump Administration publishes its AI Action Plan; OpenAI released ChatGPT Agent and announced that an experimental model achieved gold medal-level performance on the 2025 International Mathematical Olympiad. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. The AI Actio…
AISN #59: EU Publishes General-Purpose AI Code of Practice
9:23
Plus: Meta Superintelligence Labs. Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition: The EU published a General-Purpose AI Code of Practice for AI providers, and Meta is spending billions revamping its superintelligence development efforts…
AISN #58: Senate Removes State AI Regulation Moratorium
9:04
Plus: Judges Split on Whether Training AI on Copyrighted Material is Fair Use. In this edition: The Senate removes a provision from the Republicans' “Big Beautiful Bill” aimed at restricting states from regulating AI; two federal judges split on whether training AI on copyrighted books is fair use. Listen to the AI Safety Newsletter for free on Spotify…
The Rise, Fall, and AI-Powered Rebirth of Evidence-Based Medicine | Dr. Richard Lehman & Dr. Raj Mehta
1:02:47
“You know, what if they were to actually put its [AI] mind to a science of practical compassion for everybody?… if the right machines were to come along and help us do it, that's going to be a fabulous thing.” Dr Richard Lehman is a retired GP from Oxfordshire who had a "ringside seat" to the birth of evidence-based medicine, previously held acade…
Scaling Evidence-Based Medicine Across 630,000 sq Miles | Dr Raj Srivastava (Chief Clinical Programs Officer, Intermountain Health)
43:48
"When the frontline feels we're actually offering, 'What do you need? What are the resources we can help with?', we'll co-create. Of course, they don't have control of the resources or some decisions, but that's what executives can do." Dr. Raj Srivastava is a pediatrician, health system leader, and implementation science researcher, serving as Chief…
In this edition: The New York Legislature passes an act regulating frontier AI—but it may not be signed into law for some time. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. The RAISE Act New York may soon become the first state to regulate frontier AI systems. On June 12, the state's legislature passed the Responsible A…
How One Doctor's Career Pivot Inspired 25,000 Others to Rethink Medicine | Dr. Amandeep Hansra (Founder, Investor & Chief Clinical Adviser)
48:57
"At the end of the day, we don't really invest as much into products as we do into people. It's the people behind the products that are going to make the product successful." Dr. Amandeep Hansra is a general practitioner turned health tech entrepreneur, advisor, and investor, founder of the Creative Careers in Medicine community (with over 25,000 me…
A Philosopher-Physician's Fight To Reclaim Medicine's Soul | Dr. Vikas Saini (President of Lown Institute)
49:26
"The enterprise of medicine has both scientific and moral dimensions, and they're inextricably balanced." Dr Vikas Saini is a Cardiologist and President of the Lown Institute, where he leads a non-partisan think tank advocating bold ideas for a just and caring system for health. With a unique background combining philosophy and medicine, Dr. Saini h…
Plus, Opus 4 Demonstrates the Fragility of Voluntary Governance. In this edition: Google released a frontier video generation model at its annual developer conference; Anthropic's Claude Opus 4 demonstrates the danger of relying on voluntary governance. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. Google Releases Veo 3 …
Goal-Oriented Healthcare: Breaking Free from the Problem-Focused Paradigm | Dr. James Mold (University of Oklahoma)
44:25
"The way to prevent cascades is to keep your eye on the goal, to understand what you're trying to accomplish and not to go down rabbit holes chasing abnormalities, but rather to be focused on the outcome that you're looking for." Dr James W. Mold is a family medicine physician, geriatrician, researcher and academic with a Master of Public Health de…
Networks, Culture & Safety: How to Build Effective Healthcare Organizations | Prof. Ingrid Nembhard (Wharton School)
45:48
"We need to pay more attention to the networks that operate between people and the networks that operate between organisations." Professor Ingrid Nembhard is an Organisational Behaviour expert in healthcare systems, based at the Wharton School of the University of Pennsylvania. Her research focuses on how characteristics of health care organisation…
AISN #55: Trump Administration Rescinds AI Diffusion Rule, Allows Chip Sales to Gulf States
9:18
Plus: Bills on Whistleblower Protections, Chip Location Verification, and State Preemption. In this edition: The Trump Administration rescinds the Biden-era AI diffusion rule and sells AI chips to the UAE and Saudi Arabia; Federal lawmakers propose legislation on AI whistleblowers, location verification for AI chips, and prohibiting states from reg…
Plus, AI Safety Collaboration in Singapore. In this edition: OpenAI claims an updated restructure plan would preserve nonprofit control; A global coalition meets in Singapore to propose a research agenda for AI safety. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. OpenAI Updates Restructure Plan On May 5th, OpenAI announ…
Liberate Health Data and Escape the EHR Trap | Dr Sidharth Ramesh (Medblocks Founder)
45:20
"Epic charges researchers to access the data that the same institute has spent thousands of hours storing. They're not a good custodian of data... they're literally locking your data away and holding it for ransom." — Dr. Sidharth Ramesh Listen now on Apple, Spotify, YouTube or wherever you get your podcasts. Dr Sidharth Ramesh is a medical doctor,…
AISN #53: An Open Letter Attempts to Block OpenAI Restructuring
10:39
Plus: SafeBench Winners. Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition: Experts and ex-employees urge the Attorneys General of California and Delaware to block OpenAI's for-profit restructure; CAIS announces the winners of its safety be…
From Evidence to Exit: Building Credible Health Tech | Dr Saira Ghafur (Co-Founder of Provea Health & Lead for Digital Health at Imperial College)
40:56
“The market was flooded with all these new tools and technologies that people were using, but no real evidence base behind them. Now we’re in this space where we need evidence.” — Dr Saira Ghafur Listen now on Apple, Spotify, YouTube or wherever you get your podcasts. Dr Saira Ghafur is a medical doctor specialising in respiratory medicine, an acade…
Plus, AI-Enabled Coups. In this edition: AI now outperforms human experts in specialized virology knowledge in a new benchmark; A new report explores the risk of AI-enabled coups. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. An Expert Virology Benchmark A team of researchers (primarily from SecureBio and CAIS) has devel…
Plus, AI 2027. In this newsletter, we cover the launch of AI Frontiers, a new forum for expert commentary on the future of AI. We also discuss AI 2027, a detailed scenario describing how artificial superintelligence might emerge in just a few years. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. AI Frontiers Last week, CA…
Customer Obsession in Healthcare: Primary Care Redefined | Dr Raj Behal (Amazon One Medical)
47:40
"I think of quality as what is the patient's ‘job to be done’. What are they trying to get out of this encounter or from their help? What are they trying to achieve? Making quality about that." — Dr Raj Behal. Listen now on Apple, Spotify, YouTube or wherever you get your podcasts. Dr. Raj Behal is a medical doctor and Chief Quality Officer at Amaz…
Plus, Detecting Misbehavior in Reasoning Models. In this newsletter, we cover AI companies’ responses to the federal government's request for information on the development of an AI Action Plan. We also discuss an OpenAI paper on detecting misbehavior in reasoning models by monitoring their chains of thought. Listen to the AI Safety Newsletter for …
The Economic Imperative: Why Health Equity Matters for Everyone | Prof. Bola Owolabi (NHS England Director)
48:34
"The greatest legacy we can leave as this generation of leaders is to make meaningful progress in narrowing those health inequalities between communities." — Prof Bola Owolabi. In this episode, Professor Bola Owolabi, GP and National Director for Healthcare Inequalities at NHS England, shares her insights on health inequalities in …
Money, Power, Health: How Corporations Shape Our Health | Dr. Nason Maani (Commercial Determinants Researcher)
50:01
"The health impact of corporate power isn't just about products—it's about how commercial interests shape the entire landscape of health policy and research." — Dr Nason Maani Listen now on Apple, Spotify, YouTube or wherever you get your podcasts. Dr Nason Maani, lecturer in Inequalities and Global Health Policy at the University of Edinburgh, aut…
Plus, Measuring AI Honesty. Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this newsletter, we discuss two recent papers: a policy paper on national security strategy, and a technical paper on measuring honesty in AI systems. Listen to the AI Safety …
Superintelligence is destabilizing since it threatens other states’ survival—it could be weaponized, or states may lose control of it. Attempts to build superintelligence may face threats by rival states—creating a deterrence regime called Mutual Assured AI Malfunction (MAIM). In this paper, Dan Hendrycks, Eric Schmidt, and Alexandr Wang detail a s…
Science Communication: Why the Basics Matter in an Age of Sound Bites | Dr Aaron Carroll (CEO of Academy Health)
45:15
"I'd rather have people understand why they should believe something, not just that they should." — Dr Aaron Carroll Listen now on Apple, Spotify, YouTube or wherever you get your podcasts. Dr Aaron Carroll, pediatrician, professor, president and CEO of Academy Health, and renowned science communicator, discusses the art and science of effectiv…
AISN #48: Utility Engineering and EnigmaEval
8:56
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. In this newsletter, we explore two recent papers from CAIS. We’d also like to highlight that CAIS is hiring for editorial and writin…
Plus: State-Sponsored AI Cyberattacks. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. Reasoning Models DeepSeek-R1 has been one of the most significant model releases since ChatGPT. After its release, DeepSeek's app quickly rose to the top of Apple's most-downloaded chart and NVIDIA saw a 17% stock decline. In this st…
Plus, Humanity's Last Exam, and the AI Safety, Ethics, and Society Course. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. The Transition The transition from the Biden to Trump administrations saw a flurry of executive activity on AI policy, with Biden signing several last-minute executive orders and Trump revoking Biden's…
AISN #45: Center for AI Safety 2024 Year in Review
11:31
As 2024 draws to a close, we want to thank you for your continued support for AI safety and review what we’ve been able to accomplish. In this special-edition newsletter, we highlight some of our most important projects from the year. The mission of the Center for AI Safety is to reduce societal-scale risks from AI. We focus on three pillars of wor…
Plus, Chinese researchers used Llama to create a military tool for the PLA, a Google AI system discovered a zero-day cybersecurity vulnerability, and Complex Systems. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. The Trump Circle on AI Safety The incoming Trump administration is likely to significantly alter the US gover…
AISN #43: White House Issues First National Security Memo on AI
14:55
Plus: AI and Job Displacement, and AI Takes Over the Nobels. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. White House Issues First National Security Memo on AI On October 24, 2024, the White House issued the first National Security Memorandum (NSM) on Artificial Intelligence, accompanied by a Framework to Advance AI Gov…
Plus, OpenAI's o1, and AI Governance Summary. Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. Newsom Vetoes SB 1047 On Sunday, Governor Newsom vetoed California's Senate Bill 1047 …
AISN #41: The Next Generation of Compute Scale
11:59
Plus: Ranking Models by Susceptibility to Jailbreaking, and Machine Ethics. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. The Next Generation of Compute Scale AI development is on the cusp of a dramatic expansion in compute scale. Recent developments across multiple fronts—from chip manufacturing to power infrastructure—…
Plus: NVIDIA Delays Chip Production, and Do AI Safety Benchmarks Actually Measure Safety? Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. SB 1047, the Most-Discussed California AI Legislation California's Senate Bill 1047 has sparked discussion over AI regulation. While state bills often fly under the radar, SB 1047 has g…
AISN #39: Implications of a Trump Administration for AI Policy
12:00
Plus: Safety Engineering Overview. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. Implications of a Trump administration for AI policy Trump named Ohio Senator J.D. Vance—an AI regulation skeptic—as his pick for vice president. This choice sheds light on the AI policy landscape under a future Trump administration. In this…
AISN #38: Supreme Court Decision Could Limit Federal Ability to Regulate AI
10:31
Plus: “Circuit Breakers” for AI systems, and updates on China's AI industry. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. Supreme Court Decision Could Limit Federal Ability to Regulate AI In a recent decision, the Supreme Court overruled the 1984 precedent Chevron v. Natural Resources Defense Council. In this story, we …
AISN #37: US Launches Antitrust Investigations
11:02
US Launches Antitrust Investigations The U.S. Government has launched antitrust investigations into Nvidia, OpenAI, and Microsoft. The U.S. Department of Justice (DOJ) and Federal Trade Commission (FTC) have agreed to investigate potential antitrust violations by the three companies, the New York Times reported. The DOJ will lead the investigation …
AISN #36: Voluntary Commitments are Insufficient
10:09
Voluntary Commitments are Insufficient AI companies agree to RSPs in Seoul. Following the second global AI summit, held in Seoul, the UK and Republic of Korea governments announced that 16 major technology organizations, including Amazon, Google, Meta, Microsoft, OpenAI, and xAI, have agreed to a new set of Frontier AI Safety Commitments. Some commit…