Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.
LessWrong Podcasts
A conversational podcast for aspiring rationalists.
We live in a world where our civilization and daily lives depend upon institutions, infrastructure, and technological substrates that are _complicated_ but not _unknowable_. Join Patrick McKenzie (patio11) as he discusses how decisions, technology, culture, and incentives shape our finance, technology, government, and more, with the people who built (and build) those Complex Systems.
Welcome to the Heart of the Matter, a series in which we share conversations with inspiring and interesting people and dive into the core issues or motivations behind their work, their lives, and their worldview. Coming to you from somewhere in the technosphere with your hosts Bryan Davis and Jay Kannaiyan.
“EU explained in 10 minutes” by Martin Sustrik
16:47
If you want to understand a country, you should pick a similar country that you are already familiar with, research the differences between the two and there you go, you are now an expert. But this approach doesn’t quite work for the European Union. You might start, for instance, by comparing it to the United States, assuming that EU member countri…
Bits and bricks: Oliver Habryka on LessWrong, Lighthaven, and community infrastructure
1:14:44
Patrick McKenzie (patio11) is joined by Oliver Habryka, who runs Lightcone Infrastructure—the organization behind both the LessWrong forum and the Lighthaven conference venue in Berkeley. They explore how LessWrong became one of the most intellectually consequential forums on the internet, the surprising challenges of running a hotel with fractal g…
I recently visited my girlfriend's parents in India. Here is what that experience taught me: Yudkowsky has this Facebook post where he makes some inferences about the economy after noticing two taxis stayed in the same place while he got his groceries. I had a few similar experiences while I was in India, though sadly I don't remember them in enoug…
[Linkpost] “Consider donating to AI safety champion Scott Wiener” by Eric Neyman
2:35
This is a link post. Written in my personal capacity. Thanks to many people for conversations and comments. Written in less than 24 hours; sorry for any sloppiness. It's an uncanny, weird coincidence that the two biggest legislative champions for AI safety in the entire country announced their bids for Congress just two days apart. But here we are.…
“Which side of the AI safety community are you in?” by Max Tegmark
4:18
In recent years, I’ve found that people who self-identify as members of the AI safety community have increasingly split into two camps: Camp A) “Race to superintelligence safely”: People in this group typically argue that “superintelligence is inevitable because of X”, and it's therefore better that their in-group (their company or country) build i…
There's an argument I sometimes hear against existential risks, or any other putative change that some are worried about, that goes something like this: 'We've seen time after time that some people will be afraid of any change. They'll say things like "TV will destroy people's ability to read", "coffee shops will destroy the social order", "machines…
Talking to the Bank of England about systemic risk and systems engineering
1:31:37
Patrick McKenzie (@patio11) shares his remarks to the Bank of England on critical vulnerabilities in financial infrastructure. Drawing from the July 2024 CrowdStrike outage which brought down teller systems at major US banks, Patrick discusses how regulatory guidance inadvertently created dangerous software monocultures. He also examines the stable…
“Do One New Thing A Day To Solve Your Problems” by Algon
3:21
People don't explore enough. They rely on cached thoughts and actions to get through their day. Unfortunately, this doesn't lead to them making progress on their problems. The solution is simple. Just do one new thing a day to solve one of your problems. Intellectually, I've always known that annoying, persistent problems often require just 5 secon…
“Humanity Learned Almost Nothing From COVID-19” by niplav
8:45
Summary: Looking over humanity's response to the COVID-19 pandemic, almost six years later, reveals that we've forgotten to fulfill our intent at preparing for the next pandemic. I rant. Content warning: A single carefully placed slur. If we want to create a world free of pandemics and other biological catastrophes, the time to act is now. —US White H…
“Consider donating to Alex Bores, author of the RAISE Act” by Eric Neyman
50:28
Written by Eric Neyman, in my personal capacity. The views expressed here are my own. Thanks to Zach Stein-Perlman, Jesse Richardson, and many others for comments. Over the last several years, I’ve written a bunch of posts about politics and political donations. In this post, I’ll tell you about one of the best donation opportunities that I’ve ever…
Here's a story I've heard a couple of times. A youngish person is looking for some solutions to their depression, chronic pain, ennui or some other cognitive flaw. They're open to new experiences and see a meditator gushing about how amazing meditation is for joy, removing suffering, clearing one's mind, improving focus etc. They invite the young p…
"I heard Chen started distilling the day after he was born. He's only four years old, if you can believe it. He's written 18 novels. His first words were, "I'm so here for it!" Adrian said. He's my little brother. Mom was busy in her world model. She says her character is like a "villainess" or something - I kinda worry it's a sex thing. It's for s…
“The ‘Length’ of ‘Horizons’” by Adam Scholl
14:15
Current AI models are strange. They can speak—often coherently, sometimes even eloquently—which is wild. They can predict the structure of proteins, beat the best humans at many games, recall more facts in most domains than human experts; yet they also struggle to perform simple tasks, like using computer cursors, maintaining basic logical consiste…
Narrative, mastery, and character bleed in games, with Ricki Heicklen
1:31:19
Patrick McKenzie (patio11) is joined again by Ricki Heicklen to discuss Metagame 2025, a conference where 250 attendees were divided into Purple and Orange teams competing for territories across campus. Patrick built a complete roguelike RPG in 25 days using LLMs, discovering that providing minimal world-building context transformed generic fantasy…
Jay talks with us about finding Alpha – returns above the base rate – in everyday life (and what this means). LINKS Optimize Everything, Jay’s substack Jay on Twitter Arbor Trading Bootcamp Kelsey’s argument that We Need To Be Able To Sue AI Companies 00:00:05 – Alpha with Jay 01:28:53 – Guild of the Rose 01:31:00 – Miscellanea 01:40:58 – Thank th…
About half a year ago, I decided to try to stop insulting myself for two weeks. No more self-deprecating humour, calling myself a fool, or thinking I'm pathetic. Why? Because it felt vaguely corrosive. Let me tell you how it went. Spoiler: it went well. The first thing I noticed was how often I caught myself about to insult myself. It happened like mu…
“If Anyone Builds It Everyone Dies, a semi-outsider review” by dvd
26:01
About me and this review: I don’t identify as a member of the rationalist community, and I haven’t thought much about AI risk. I read AstralCodexTen and used to read Zvi Mowshowitz before he switched his blog to covering AI. Thus, I’ve long had a peripheral familiarity with LessWrong. I picked up IABIED in response to Scott Alexander's review, and …
“The Most Common Bad Argument In These Parts” by J Bostock
8:11
I've noticed an antipattern. It's definitely on the dark pareto-frontier of "bad argument" and "I see it all the time amongst smart people". I'm confident it's the worst, common argument I see amongst rationalists and EAs. I don't normally crosspost to the EA forum, but I'm doing it now. I call it Exhaustive Free Association. Exhaustive Free Associ…
“Towards a Typology of Strange LLM Chains-of-Thought” by 1a3orn
17:34
Intro: LLMs being trained with RLVR (Reinforcement Learning from Verifiable Rewards) start off with a 'chain-of-thought' (CoT) in whatever language the LLM was originally trained on. But after a long period of training, the CoT sometimes starts to look very weird; to resemble no human language; or even to grow completely unintelligible. Why might th…
“I take antidepressants. You’re welcome” by Elizabeth
6:09
It's amazing how much smarter everyone else gets when I take antidepressants. It makes sense that the drugs work on other people, because there's nothing in me to fix. I am a perfect and wise arbiter of not only my own behavior but everyone else's, which is a heavy burden because some of ya’ll are terrible at life. You date the wrong people. You ta…
“Inoculation prompting: Instructing models to misbehave at train-time can improve run-time behavior” by Sam Marks
4:06
This is a link post for two papers that came out today: Inoculation Prompting: Eliciting traits from LLMs during training can suppress them at test-time (Tan et al.) Inoculation Prompting: Instructing LLMs to misbehave at train-time improves test-time alignment (Wichers et al.) These papers both study the following idea[1]: preventing a model from …
“Hospitalization: A Review” by Logan Riggs
18:52
I woke up Friday morning w/ a very sore left shoulder. I tried stretching it, but my left chest hurt too. Isn't pain on one side a sign of a heart attack? Chest pain, arm/shoulder pain, and my breathing is pretty shallow now that I think about it, but I don't think I'm having a heart attack because that'd be terribly inconvenient. But it'd also be …
Sahil has been up to things. Unfortunately, I've seen people put effort into trying to understand and still bounce off. I recently talked to someone who tried to understand Sahil's project(s) several times and still failed. They asked me for my take, and they thought my explanation was far easier to understand (even if they still disagreed with it …
Of course, you must understand, I couldn't be bothered to act. I know weepers still pretend to try, but I wasn't a weeper, at least not then. It isn't even dangerous, the teeth only sharp to its target. But it would not have been right, you know? That's the way things are now. You ignore the screams. You put on a podcast: two guys talking, two guys…
“A non-review of ‘If Anyone Builds It, Everyone Dies’” by boazbarak
6:37
“Notes on fatalities from AI takeover” by ryan_greenblatt
15:46
Suppose misaligned AIs take over. What fraction of people will die? I'll discuss my thoughts on this question and my basic framework for thinking about it. These are some pretty low-effort notes, the topic is very speculative, and I don't get into all the specifics, so be warned. I don't think moderate disagreements here are very action-guiding or …
“Nice-ish, smooth takeoff (with imperfect safeguards) probably kills most ‘classic humans’ in a few decades.” by Raemon
21:59
I wrote my recent Accelerando post to mostly stand on its own as a takeoff scenario. But, the reason it's on my mind is that, if I imagine being very optimistic about how a smooth AI takeoff goes, but where an early step wasn't "fully solve the unbounded alignment problem, and then end up with extremely robust safeguards[1]"... ...then my current …