The “Nano Banana” Moment, GPT-5 Reality Check & How to Win with AI | Runpoint Ep. 7
Matthew Hall and Sam Gaddis break down Google’s new image model (“Nano Banana”) with real tests (thumbnails, interior wallpaper, character persistence), give a no-BS GPT-5 reality check vs Claude Code, and unpack MIT’s State of AI in Business 2025—including the viral “95% of AI projects fail” stat. We cut through the hype and share a practical framework to land in the winning 5%: build small/fast, keep an expert-in-the-loop, measure outcomes, and forward-deploy an “AI nerd” to sit with your operators. We also talk browser agents (Claude for Chrome), throttling/caps, OpenAI’s CLI, and two personal builds (an E*TRADE API portfolio snapshot and a fantasy-draft helper).
Chapters
00:00 Intro
00:32 Google’s “Nano Banana” image model—why it feels like a Photoshop killer
02:40 Real tests: thumbnails, character persistence, interior wallpapering
05:58 GPT-5 hype vs reality; coding speed vs chat experience
10:52 OpenAI Codex & CLI vs Claude Code (features, trade-offs)
12:26 Anthropic caps/throttling—what changed and why it matters
14:22 Browser agents (Claude for Chrome): promise vs practical limits
20:01 MIT report: “95% fail” explained—what the data actually says
27:55 Adoption ≠ transformation; back-office beats front-office (for now)
34:21 The winning playbook: build small/fast, expert-in-the-loop, “shadow AI,” forward-deploy talent
43:02 What we’re excited about: E*TRADE API snapshot, fantasy draft tool, Nano Banana
45:38 Wrap
Key takeaways
Build small, ship fast, iterate.
Prefer expert-in-the-loop over fully autonomous (for ROI today).
Back-office automations quietly print value.
Measure quality & cycle-time, not just topline ROI.
Tags
#AI #GPT5 #Claude #GoogleAI #Automation #EnterpriseAI #RunpointPodcast