Thomas Krendl Gilbert Podcasts

An exciting week for Tom, who tells Nate about his company Hortus AI, whose mission is to make AI accountable to local communities. We cover a lot of classic Retort themes as Tom makes a case for what's missing from AI development, and how models could be more healthily integrated into everyday people's lives. Press release: https://hortus.ai/wp-co…
 
Tom and Nate sit down for a classic discussion of the role of AI in the modern philosophy of science. Much of this discussion is based on Thomas Samuel Kuhn's influential book The Structure of Scientific Revolutions. We ask -- is AI a science in the Kuhnian sense? Will the "paradigm" worldview apply to other sciences post-AI? How will scientific i…
 
We're back! Tom and Nate catch up after the Thanksgiving holiday. Our main question was -- what were the biggest AI stories of the year? We touch on the core themes of the show: infrastructure, AI realities, and antitrust. The power buildout to scale out AI is going to have very real long-term impacts. Some links this week: * Ben Thompson's, Th…
 
Tom and Nate catch up on the happenings in AI. Of course, we're focused on the biggest awards available to us as esteemed scientists (or something close enough) -- the Nobel Prizes! What does it mean in the trajectory of AI for Hinton and Hassabis to carry added scientific weight? Honestly, it feels like a sinking ship. Some links: * Schmidhuber tweet…
 
Tom and Nate catch up on recent events (before the OpenAI o1 release) and opportunities in transparency/policy. We recap the legendary scam of Matt from the IT department, why disclosing the outcomes of a process is not enough, and more. This is a great episode on understanding why the process a technology was birthed from is just as important as the outco…
 
Tom and Nate catch up on core themes of AI after a somewhat unintended summer break. We discuss the moral groundings and philosophy of what we're building, our travels, The Anxious Generation, AGI obsessions, an update on AI Ethics vs. AI Safety, and plenty more in between. As always, contact us at mail@retortai.com. Some links we mention in the epi…
 
Tom and Nate catch up on the rapidly evolving (and political) space of AI regulation. We cover CA SB 1047, recent policing of data scraping, presidential appointees, antitrust intention vs. implementation, FLOP thresholds, and everything else touching the future of large ML models. Nate's internet cut out, so this episode ends a little abruptly. Re…
 
Tom and Nate revisit one of their old ideas -- AI through the lens of public health infrastructure, and especially alignment. Sorry about Tom's glitchy audio; I figured out after the fact that he was talking into the microphone at the wrong angle. Regardless, here are some links for this week. Links: - Data foundry for AI https://scale.com/blog/…
 
Tom and Nate caught up last week (sorry for the editing delay) on the two big views of the AI future: Apple Intelligence and Situational Awareness (nationalistic AI doom prevention). One of our best episodes; here are the links: * The Kekulé Problem https://en.wikipedia.org/wiki/The_Kekul%C3%A9_Problem * Truth and Method https://en.wikipedia.org/wi…
 
Tom and Nate catch up on many recent AI policy happenings. California's "anti open source" 1047 bill, the Senate AI roadmap, Google's search snafu, OpenAI's normal nonsense, and reader feedback! A bit of a mailbag. Enjoy. 00:00 Murky waters in AI policy 00:33 The Senate AI Roadmap 05:14 The Executive Branch Takes the Lead 08:33 California's Sena…
 
Tom and Nate discuss two major OpenAI happenings in the last week: the popular one, the chat assistant, and what it reveals about OpenAI's worldview. We pair this with discussion of OpenAI's new Model Spec, which details their RLHF goals: https://cdn.openai.com/spec/model-spec-2024-05-08.html This is a monumental week for AI. The product transition…
 
Tom and Nate discuss the shifting power landscape in AI. They try to discern what is special about Silicon Valley's grasp on the ecosystem and what other types of power (e.g. those in New York and Washington DC) will do to mobilize their influence. Here's the one Tweet we referenced on the FAccT community: https://twitter.com/KLdivergence/status/16…
 
Tom and Nate cover the state of the industry after Llama 3. Is Zuck the best storyteller in AI? Is he the best CEO? Are CEOs doing anything other than buying compute? We cover what it means to be successful at the highest level this week. Links: Dwarkesh interview with Zuck https://www.dwarkeshpatel.com/p/mark-zuckerberg Capuchin monkey https://en.…
 
Tom and Nate catch up after a few weeks off the pod. We discuss what it means for the pace and size of open models to get bigger and bigger. In some ways, this disillusionment is a great way to zoom out to the big picture. These models are coming. These models are getting cheaper. We need to think about risks and infrastructure more than open vs.…
 
Tom and Nate catch up on the ridiculousness of Nvidia GTC, the lack of trust in AI, and some important taxonomies and politics around governing AI. Safety institutes, reward model benchmarks, Nathan's bad joke delivery, and all the normal good stuff in this episode! Yes, we're also sick of the Taylor Swift jokes, but they get the clicks. The Taylor mom…
 
Tom and Nate sit down to discuss Claude 3 and some updates on what it means to be open. Not surprisingly, we get into debating some different views. We cover Dune 2's impact on AI and have a brief giveaway at the end. Cheers! More at retortai.com. Contact us at mail at domain. Some topics: - The pace of progress in AI and whether it feels meaningfu…
 
This week Tom and Nate cover all the big topics from the big-picture lens. Sora, Gemini 1.5's context length, Gemini's bias backlash, Gemma open models -- it was a busy week in AI. We come to the conclusion that we can no longer trust a lot of these big companies to do much. We are the gladiators playing to the crowd of AI. This was a great one, I'm pro…
 
A metaphor episode! We are trying to figure out how much the Waymo incident is or is not about AI. We bring back our Berkeley roots and talk about traditions in the Bay around distributed technology. Scooters and robots are not safe in this episode, sadly. Here's the link to the Verge piece Tom read from: https://www.theverge.com/2024/2/11/24069251/way…
 
... and you should too. We catch up this week on all things Apple Vision Pro and how these devices will intersect with AI. It really turned more into a commentary on the future of society, and how various technologies may or may not tap into our subconscious. The only link we've got for you is DeepDream: https://en.wikipedia.org/wiki/DeepDream This…
 
Wow, one of our favorites. This week Tom and Nate have a lot to cover. We cover AI2's new OPEN large language models (OLMo) and all that means, the alchemical model-merging craze powering waifu factories, model weight leaks from Mistral, the calling card for our loyal fans, and more topics. We have a lot of links you'll enjoy as you go through i…
 
We recovered this episode from the depths of lost podcast recordings! We carry on, and Tom tells the story of his wonderful sociology-turned-AI Ph.D. at Berkeley. This comes with plenty of great commentary on the current state of the field and striving for impact. We cover the riverbank of Vienna, the heart of the sperm whale, and deep life lessons. …
 
This week Tom and Nate catch up on two everlasting themes of ML: compute and evaluation. We chat about AI2, Zuck's GPUs, evaluation as procurement, NIST comments, neglecting reward models, and plenty of other topics. We're on the tracks for 2024 and waiting for some things to happen. Links for what we covered this week: Zuck interview on The Verge …
 
We're excited to bring you something special today! Our first crossover episode brings some fresh energy to the podcast. Tom and Nate are joined by Jordan Schneider of ChinaTalk (a popular Substack-based publication covering all things China: https://www.chinatalk.media/). We cover lots of great ground here, from the economics of Hirschman to the c…
 
Tom and Nate are ready to kick off the year, but not too ready! There's a ton to be excited about this year, but we're already worried about some parts of it. In this episode, we'll teach you how to be mindful of the so-called "other side of ML". Some links: - Link to NYT lawsuit techdirt article https://www.techdirt.com/2023/12/28/the-ny-times-lawsu…
 
The end of the year is upon us! Tom and Nate bring a reflective mood to the podcast along with some surprises that may be a delight. Here are some links for the loyal fans: * RAND + executive order piece: https://www.politico.com/news/2023/12/15/billionaire-backed-think-tank-played-key-role-in-bidens-ai-order-00132128 * Sam Altman's blog post we we…
 
No stone is left unturned on this episode. As the end of the year approaches, Tom and Nate check in on all the vibes of the machine learning world: torrents, faked demos, alchemy, weightlifting, actual science, and blogs are all not safe in this episode. Some links for your weekend: - AI Alliance: https://thealliance.ai/ - Evaluation gaming on Inte…
 
In this episode, Tom gives us a lesson on all things feedback, mostly where our scientific framings of it came from. Together, we link this to RLHF, our previous work in RL, and how we were thinking about agentic ML systems before it was cool. Join us for another great blast from the past on The Retort! We've also brought you video this week! Thi…
 
We break down all the recent events of AI, and live-react to some of the news about OpenAI's new super-method, codenamed Q*. From CEOs to rogue AIs, no one can be trusted in today's episode. Some links to relevant content on Interconnects: * Discussing how OpenAI's blunders open the doors for openness. * Detailing what Q* probably is. This is a pu…
 
We cover all things OpenAI as they embrace their role as a consumer technology company with their first developer keynote. Lots of links: Dev day keynote https://www.youtube.com/watch?v=U9mJuUkhUzk Some papers we cover: Multinational AGI consortium (by non-technical folks) https://arxiv.org/abs/2310.09217 Frontier model risk paper that DC loves htt…
 
We discuss all the big regulation steps in AI this week, from the Biden Administration's Executive Order to the UK AI Safety Summit. Links: the Executive Order, the Mozilla open letter, the Slaughterbots video, and the UK AI Safety Summit graph/meme. This is a public episode. If you would like to discuss this with other subscribers or get access to b…
 
This week, we dunk on the Foundation Model Transparency Index from Stanford's Center for Research on Foundation Models. Yes, the title is inspired by Taylor. Some links: The Index itself. And Nathan's critique. Anthropic's Collective Constitutional AI work, coverage in New York Times. New paper motivating transparency for reward models in RLHF. Jitend…
 
Tom and Nate sit down to discuss Marc Andreessen's Techno-Optimist Manifesto, a third wave of AI mindsets that squarely takes on both the AI Safety and AI Ethics communities. Some links: * An example of the Shoggoth Monster we referenced. Thanks for listening! This is a public episode. If you would like to discuss this with other subscribers or get acc…
 
This week, Tom and Nate discuss some of the core and intriguing dynamics of AI. We discuss the history of the rationality movement and where Harry Potter fan fiction fits in, whether AI will ever not feel hypey, the do's and don'ts of Sam Altman, and other topics. (Editor's note: sorry for some small issues in Nate's audio. That will be fixed in the next …
 
This is a big one. Getting going on whether LLMs should be more open or more closed. We cover everything: OpenAI, scaling, openness for openness' sake (relative to OpenAI), actual arguments for open-source values in LLMs, AI as infrastructure, LLMs as platforms, what this means we need, and other topics. Lots of related links this time from Nathan. Most…
 
Tom and Nate discuss a few core topics of the show. First, we touch base on the core of the podcast -- the difference between empirical science, alchemy, and magic. Next, we explain some of our deeper understandings of AI safety as a field, which leads into a discussion of what RLHF means. Lots of links to share this time: Tom's coverage on al…
 
Tom and Nate discuss some of the public institutions that form the bedrock of society -- education and roads -- and how AI is poised to shake them up. Some related reading on Interconnects, specifically about Tesla's system design and the self-driving roll-out in San Francisco. This is a public episode. If you would like to discuss this with other …
 
Tom and Nate discuss some of the most dominant metaphors in machine learning these days -- alchemy and deep learning's roots, the Oppenheimer film and a modern "Manhattan Project for AI", and of course, a sprinkle of AGI. Some related reading on Interconnects: https://www.interconnects.ai/p/ai-research-tensions-oppenheimer Thanks for listening! Rea…