Content provided by Magnus Hedemark. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Magnus Hedemark or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

The Big Ideas So Far: AI, Consciousness, and Transformation at NYC's Deepest Tech Meetup

Duration: 45:57

Show Notes: The Big Ideas So Far - AI, Consciousness, and Transformation

Episode Overview

A deep dive into a remarkable synthesis from the New York Artificial Intelligence Meetup Group, where months of philosophical discussions about AI, consciousness, and human transformation came together in a single evening. This retrospective traces the big patterns emerging as we navigate unprecedented technological change.

Key Themes Explored

The Manifest vs Scientific Image Problem

  • How humans naturally perceive reality vs. how science reveals it works
  • Wilfrid Sellars' foundational framework from 1962
  • Why we struggle to understand AI systems through our everyday cognitive frameworks
  • The "rocks and clocks in a box" mental model vs. electromagnetic fields and curved spacetime

Evolution, Change, and Inflection Points

  • Stephen Jay Gould's punctuated equilibrium theory
  • Rapid bursts of change vs. long periods of stability
  • Are we approaching a similar inflection point with AI?
  • Ancient wisdom traditions that emerged during the Axial Age (800-200 BCE)

Beauty, Compression, and Machine Creativity

  • Jürgen Schmidhuber's compression progress theory of aesthetics
  • Why we find certain patterns beautiful (optimal compression ratios)
  • Could AI systems develop genuine aesthetic sense?
  • The difference between iconic, indexical, and symbolic signs
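Schmidhuber's claim is that beauty tracks compression *progress*: a pattern feels interesting while the observer's internal compressor is still improving on it. As a loose, static stand-in for that idea (an illustrative sketch, not anything presented in the episode), a general-purpose compressor can at least show how much structure a pattern contains. This uses Python's `zlib` to compare a regular pattern with pseudo-random noise:

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size; lower = more structure found."""
    return len(zlib.compress(data, 9)) / len(data)

# A highly regular pattern compresses extremely well...
pattern = b"ABAB" * 256

# ...while pseudo-random bytes barely compress at all
# (zlib may even add a little overhead).
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(1024))

print(compression_ratio(pattern))  # well below 1.0
print(compression_ratio(noise))    # near (or slightly above) 1.0
```

On this crude proxy, "beautiful" regular patterns sit at low ratios and pure noise at high ones; the theory's actual quantity is the rate of improvement of that ratio as the observer learns.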

What Makes Something "Alive"?

  • Assembly Theory: measuring complexity by causal history
  • Lee Cronin and Sara Walker's approach to detecting life
  • Terrence Deacon's three levels: homeodynamic, morphodynamic, teleodynamic
  • Why biological intelligence integrates design, computation, and manufacturing seamlessly
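Assembly Theory scores an object by the minimum number of join operations needed to build it from basic parts, reusing anything already built; a high assembly index found in high copy number is the proposed signature of life. The real theory concerns molecules, but a toy string version (a brute-force sketch for illustration, not Cronin and Walker's actual algorithm) makes the "complexity as causal history" idea concrete:

```python
from itertools import product

def assembly_index(target: str) -> int:
    """Minimum number of joins to build `target` from its single
    characters, reusing previously built parts (breadth-first search
    over join sequences; only practical for short strings)."""
    if len(target) <= 1:
        return 0                         # basic building blocks are free
    frontier = {frozenset(target)}       # start from the unique characters
    joins = 0
    while frontier:
        joins += 1
        nxt = set()
        for pool in frontier:
            for x, y in product(pool, repeat=2):
                part = x + y
                if part == target:
                    return joins
                if part in target:       # prune parts that can't appear
                    nxt.add(pool | {part})
        frontier = nxt
    raise ValueError("target cannot be assembled")

# Reuse makes "ABAB" cheaper to build than the same-length "ABCD":
print(assembly_index("ABAB"))  # 2  (A+B -> AB, then AB+AB)
print(assembly_index("ABCD"))  # 3  (no reusable parts)
```

The point of the toy: objects rich in reusable substructure have short construction histories, while objects that need many distinct steps carry more causal history in their assembly index.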

AI Risk Through a New Lens

  • "Terminator vs. Tinkerbell AI" framework
  • Optimization pressure and alignment challenges
  • The Physical Church-Turing Thesis and substrate independence
  • Why efficiency vs. capability matters for AGI development

Collective Intelligence and Scale Blindness

  • Michael Levin's bioelectric field research
  • Xenobots and non-traditional forms of agency
  • Intelligence operating from cellular to planetary scales
  • How we miss intelligence that doesn't look human-like

Notable Figures Referenced

  • Wilfrid Sellars - Philosopher, "manifest vs scientific image"
  • Stephen Jay Gould - Paleontologist, punctuated equilibrium
  • Jürgen Schmidhuber - AI researcher, compression theory of beauty
  • Charles Sanders Peirce - Philosopher, semiotics theory
  • Lee Cronin & Sara Walker - Assembly Theory developers
  • Terrence Deacon - Anthropologist, teleodynamics
  • Michael Levin - Developmental biologist, bioelectric fields
  • Kenneth O. Stanley - AI researcher, fractured representations
  • Neil Gershenfeld - MIT physicist, fab labs

Technical Concepts Worth Unpacking

  • Context window problems in current AI
  • Fractured Entangled Representation Hypothesis
  • The ARC-AGI benchmark and o3's reported $15-20K per-problem cost
  • The autogen as minimal self-reproducing system
  • Bioelectric gradients overriding genetic programming

Philosophical Connections

  • Marcus Aurelius and Buddhist convergence on impermanence
  • Ship of Theseus paradox in the context of AI development
  • The role of tools in human cognitive evolution
  • Scale blindness and recognizing non-human intelligence

Questions for Discussion

  • Are we living through our own "punctuation" moment in history?
  • What happens when AI systems start optimizing for their own compression progress?
  • How do we align systems whose internal representations we can't decompose?
  • Could collective intelligence be the next frontier beyond individual AGI?

Community Context

This synthesis came from the New York Artificial Intelligence Meetup Group's special retrospective session, hosted by Tone Fonseca. The event brought together months of deep discussions into a cohesive framework for understanding our current moment of technological transformation.

For the full article and additional context, visit magnus919.com
