Is AI a Bubble? Gary Marcus on GPT-5 Hype and the Future of AI
Has OpenAI made the WRONG bet?
Gary Marcus argues that OpenAI is taking the wrong approach, and that the AI bubble is real. We walk through why GPT-5 underwhelmed, where the scaling paradigm breaks down, and what might actually get us to AGI.
Gary explains:
– Why the economics don’t add up for current AI
– Why GPT-5 isn’t as good as expected
– The core LLM limitations and why the scaling paradigm fails
– Why AI won’t take your job in the near term
– A practical path to AGI (hybrid / neuro-symbolic, world models)
We also debate whether investors are over- or under-valuing AI, what productivity gains are real, and how long it will take before AI truly replaces jobs.
References discussed in this episode:
– Gary Marcus, The Algebraic Mind: Integrating Connectionism and Cognitive Science (2001)
– Gary Marcus, “Deep Learning Is Hitting a Wall” (Nautilus, 2022)
– Gary Marcus, The Next Decade in AI (arXiv, 2020)
– Mike Dash, Tulipomania (1999)
Do you think Gary is right—or are we just getting started? Drop one strong piece of evidence either way (links welcome). We’ll pin the best reply. Don’t forget to like and subscribe for more unfiltered conversations on AI, tech, and society.
Chapters
00:00 – Is AI in a bubble?
00:30 – AI hype vs. reality
01:10 – GPT-5 launch: disappointment or progress?
03:50 – Can AI be both a revolution and a bubble?
05:40 – Productivity gains and investment hype
08:00 – Gary Marcus joins the conversation
09:30 – Why Gary calls himself a skeptic
11:15 – GPT-5 and the limits of scaling
14:00 – Financial reality of large language models
17:20 – “Deep Learning Is Hitting a Wall”
19:00 – Why hallucinations won’t go away
21:00 – Neuro-symbolic AI explained
24:00 – Building world models for AI
27:00 – Are AI valuations sustainable?
29:30 – Lessons from Tulipomania
31:30 – Will AI take all our jobs?
36:00 – What comes next for AI research
38:30 – Final thoughts
#OpenAI #GaryMarcus #ai #GPT5 #scaling #podcast