Content provided by Brian Jenney. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Brian Jenney or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
#276 - Claude Code Failed: What Anthropic’s Postmortem Means for Developers

16:18
 
Ever had your AI pair programmer stop helping and start breaking everything? I did—and this time, the data proves it wasn’t just me.

Claude fell off.

TypeScript that wouldn’t compile, migrations stuck in loops, refactors that went completely sideways. Turns out Anthropic’s own postmortem revealed three separate bugs causing degraded output: a context window routing error, output corruption, and an XLA compiler miscompilation.

Let's dive in.

Send us a text

Shameless Plugs

🧑‍💻 Join Parsity - Become a full stack AI developer in 6-9 months.

✉️ Got a question you want answered on the pod? Drop it here

Zubin's LinkedIn (ex-lawyer, former Googler, Brian-look-a-like)


Chapters

1. Setting The Stage: Claude Felt Off (00:00:00)

2. Real-World Failures And Frustration (00:01:03)

3. Postmortem Overview And Trust (00:03:43)

4. Rumors vs Reality Of Downgrades (00:04:34)

5. The Three Bugs Explained Simply (00:05:33)

6. Non‑Determinism And Sampling Pitfalls (00:09:17)

7. Black Boxes And Developer Risk (00:11:02)

8. Practical Guardrails For Using AI (00:12:45)

9. Culture, Context, And Code Quality (00:15:06)

284 episodes

