LessWrong Podcasts

We live in a world where our civilization and daily lives depend upon institutions, infrastructure, and technological substrates that are _complicated_ but not _unknowable_. Join Patrick McKenzie (patio11) as he discusses how decisions, technology, culture, and incentives shape our finance, technology, government, and more, with the people who built (and build) those Complex Systems.
 
Heart of the Matter

Bryan Davis & Jay Kannaiyan

 
Welcome to the Heart of the Matter, a series in which we share conversations with inspiring and interesting people and dive into the core issues or motivations behind their work, their lives, and their worldview. Coming to you from somewhere in the technosphere with your hosts Bryan Davis and Jay Kannaiyan.
 
 
According to the Sonnet 4.5 system card, Sonnet 4.5 is much more likely than Sonnet 4 to mention in its chain-of-thought that it thinks it is being evaluated; this seems to meaningfully cause it to appear to behave better in alignment evaluations. So, Sonnet 4.5's behavioral improvements in these evaluations may partly be driven by growing tendency…
 
Patrick McKenzie (patio11) is joined by Oliver Habryka, who runs Lightcone Infrastructure—the organization behind both the LessWrong forum and the Lighthaven conference venue in Berkeley. They explore how LessWrong became one of the most intellectually consequential forums on the internet, the surprising challenges of running a hotel with fractal g…
 
I am a professor of economics. Throughout my career, I was mostly working on economic growth theory, and this eventually brought me to the topic of transformative AI / AGI / superintelligence. Nowadays my work focuses mostly on the promises and threats of this emerging disruptive technology. Recently, jointly with Klaus Prettner, we’ve written a pa…
 
Patrick McKenzie (patio11) reads his Bits about Money essay on deposit insurance, explaining this critical financial infrastructure, with some thoughts on its performance during 2023. He covers what deposit insurance actually covers (and critically, what it doesn't), how fintech users often misunderstand their exposure to counterparty risk, and the…
 
[Meta: This is Max Harms. I wrote a novel about China and AGI, which comes out today. This essay from my fiction newsletter has been slightly modified for LessWrong.] In the summer of 1983, Ronald Reagan sat down to watch the film War Games, starring Matthew Broderick as a teen hacker. In the movie, Broderick's character accidentally gains access t…
 
Some AI safety problems are legible (obvious or understandable) to company leaders and government policymakers, implying they are unlikely to deploy or allow deployment of an AI while those problems remain open (i.e., appear unsolved according to the information they have access to). But some problems are illegible (obscure or hard to understand, o…
 
1. I have claimed that one of the fundamental questions of rationality is “what am I about to do and what will happen next?” One of the domains I ask this question the most is in social situations. There are a great many skills in the world. If I had the time and resources to do so, I’d want to master all of them. Wilderness survival, automotive re…
 
This is a link post. Eliezer Yudkowsky did not exactly suggest that you should eat bear fat covered with honey and sprinkled with salt flakes. What he actually said was that an alien, looking from the outside at evolution, would predict that you would want to eat bear fat covered with honey and sprinkled with salt flakes. Still, I decided to buy a …
 
As far as I'm aware, Anthropic is the only AI company with official AGI timelines[1]: they expect AGI by early 2027. In their recommendations (from March 2025) to the OSTP for the AI action plan they say: As our CEO Dario Amodei writes in 'Machines of Loving Grace', we expect powerful AI systems will emerge in late 2026 or early 2027. Powerful AI s…
 
This is a link post. New Anthropic research (tweet, blog post, paper): We investigate whether large language models can introspect on their internal states. It is difficult to answer this question through conversation alone, as genuine introspection cannot be distinguished from confabulations. Here, we address this challenge by injecting representa…
 
This is a link post. You have things you want to do, but there's just never time. Maybe you want to find someone to have kids with, or maybe you want to spend more or higher-quality time with the family you already have. Maybe it's a work project. Maybe you have a musical instrument or some sports equipment gathering dust in a closet, or there's so…
 
Crosspost from my blog. Synopsis When we share words with each other, we don't only care about the words themselves. We care also—even primarily—about the mental elements of the human mind/agency that produced the words. What we want to engage with is those mental elements. As of 2025, LLM text does not have those elements behind it. Therefore LLM …
 
An Overture Famously, trans people tend not to have great introspective clarity into their own motivations for transition. Intuitively, they tend to be quite aware of what they do and don't like about inhabiting their chosen bodies and gender roles. But when it comes to explaining the origins and intensity of those preferences, they almost universa…
 
TL;DR: AI progress and the recognition of associated risks are painful to think about. This cognitive dissonance acts as fertile ground in the memetic landscape, a high-energy state that will be exploited by novel ideologies. We can anticipate cultural evolution will find viable successionist ideologies: memeplexes that resolve this tension by fram…
 
This is the latest in a series of essays on AI Scaling. You can find the others on my site. Summary: RL-training for LLMs scales surprisingly poorly. Most of its gains are from allowing LLMs to productively use longer chains of thought, allowing them to think longer about a problem. There is some improvement for a fixed length of answer, but not en…
 
I've created a highly specific and actionable privacy guide, sorted by importance and venturing several layers deep into the privacy iceberg. I start with the basics (password manager) but also cover the obscure (dodging the millions of Bluetooth tracking beacons which extend from stores to traffic lights; anti-stingray settings; flashing GrapheneO…
 
In this episode, Patrick McKenzie reads his essay about the financial infrastructure that makes buying windows painless. When a window installer can originate, underwrite, and fund a $25,000 loan in 15 minutes before leaving your house, it's because four parties—window companies, facilitating platforms, specialized banks, and capital providers—have…
 
There is a very famous essay titled ‘Reality has a surprising amount of detail’. The thesis of the article is that reality is filled, just filled, with an incomprehensible amount of materially important information, far more than most people would naively expect. Some of this detail is inherent in the physical structure of the universe, and the res…
 
We talk with Max Harms on the air for the first time since 2017! He’s got a new book coming out (pre-order your copy here or at Amazon) and we spend about the first half talking about If Anyone Builds It, Everyone Dies. LINKS Max’s first book, Crystal Society Eneasz’s audiobook of about the first two thirds of the first book And the official audiob…
 
There's a strong argument that humans should stop trying to build more capable AI systems, or at least slow down progress. The risks are plausibly large but unclear, and we’d prefer not to die. But the roadmaps of the companies pursuing these systems envision increasingly agentic AI systems taking over the key tasks of researching and building supe…
 
(23K words; best considered as nonfiction with a fictional-dialogue frame, not a proper short story.) Prologue: Klurl and Trapaucius were members of the machine race. And no ordinary citizens they, but Constructors: licensed, bonded, and insured; proven, experienced, and reputed. Together Klurl and Trapaucius had collaborated on such famed artifice…
 
If you want to understand a country, you should pick a similar country that you are already familiar with, research the differences between the two and there you go, you are now an expert. But this approach doesn’t quite work for the European Union. You might start, for instance, by comparing it to the United States, assuming that EU member countri…
 
I recently visited my girlfriend's parents in India. Here is what that experience taught me: Yudkowsky has this facebook post where he makes some inferences about the economy after noticing two taxis stayed in the same place while he got his groceries. I had a few similar experiences while I was in India, though sadly I don't remember them in enoug…
 
This is a link post. Written in my personal capacity. Thanks to many people for conversations and comments. Written in less than 24 hours; sorry for any sloppiness. It's an uncanny, weird coincidence that the two biggest legislative champions for AI safety in the entire country announced their bids for Congress just two days apart. But here we are.…
 
In recent years, I've found that people who self-identify as members of the AI safety community have increasingly split into two camps: Camp A) "Race to superintelligence safely": People in this group typically argue that "superintelligence is inevitable because of X", and it's therefore better that their in-group (their company or country) build i…
 
There's an argument I sometimes hear against existential risks, or any other putative change that some are worried about, that goes something like this: 'We've seen time after time that some people will be afraid of any change. They'll say things like "TV will destroy people's ability to read", "coffee shops will destroy the social order", "machines…
 
Patrick McKenzie (@patio11) shares his remarks to the Bank of England on critical vulnerabilities in financial infrastructure. Drawing from the July 2024 CrowdStrike outage which brought down teller systems at major US banks, Patrick discusses how regulatory guidance inadvertently created dangerous software monocultures. He also examines the stable…
 
People don't explore enough. They rely on cached thoughts and actions to get through their day. Unfortunately, this doesn't lead to them making progress on their problems. The solution is simple. Just do one new thing a day to solve one of your problems. Intellectually, I've always known that annoying, persistent problems often require just 5 secon…
 