Seth Litt Podcasts

The CEO and president of a 3PL discuss the everyday grind, hustle, and business strategy of building a million-dollar business in the logistics sector. We are still in our 20s and thought it would be cool to share some insight into our world and hopefully help some people out along the way!
 
Geoffrey Litt is a design engineer at Notion working on malleable software: computing environments where anyone can adapt their software to fit their needs and their lives. Before joining Notion, he was a researcher at the independent lab Ink & Switch, where he explored the future of computing. He did his PhD at MIT on programming interfaces. Mos…
 
Welcome back to Generally Intelligent! We’re excited to relaunch our podcast—still featuring thoughtful conversations on building AI, but now with an expanded lens on its economic, societal, political, and human impacts. Matt Boulos leads policy and safety at Imbue, where he shapes the responsible development of AI coding tools that make software c…
 
Rylan Schaeffer is a PhD student at Stanford studying the engineering, science, and mathematics of intelligence. He authored the paper “Are Emergent Abilities of Large Language Models a Mirage?”, as well as other interesting refutations in the field that we’ll talk about today. He previously interned at Meta on the Llama team, and at Google DeepMind…
 
Ari Morcos is the CEO of DatologyAI, which makes training deep learning models more performant and efficient by intervening on training data. He was at FAIR and DeepMind before that, where he worked on a variety of topics, including how training data leads to useful representations, the lottery ticket hypothesis, and self-supervised learning. His work …
 
Percy Liang is an associate professor of computer science and statistics at Stanford. These days, he’s interested in understanding how foundation models work, how to make them more efficient, modular, and robust, and how they shift the way people interact with AI—although he was working on language models long before foundation models appeared…
 
Seth Lazar is a professor of philosophy at the Australian National University, where he leads the Machine Intelligence and Normative Theory (MINT) Lab. His unique perspective bridges moral and political philosophy with AI, introducing much-needed rigor to the question of what will make for a good and just AI future. Generally Intelligent is a podca…
 
Tri Dao is a PhD student at Stanford, co-advised by Stefano Ermon and Chris Ré. He’ll be joining Princeton as an assistant professor next year. He works at the intersection of machine learning and systems, currently focused on efficient training and long-range context. About Generally Intelligent We started Generally Intelligent because we believe …
 
Jamie Simon is a 4th-year Ph.D. student at UC Berkeley advised by Mike DeWeese, and also a Research Fellow with us at Generally Intelligent. He uses tools from theoretical physics to build a fundamental understanding of deep neural networks so they can be designed from first principles. In this episode, we discuss reverse engineering kernels, the con…
 
Bill Thompson is a cognitive scientist and an assistant professor at UC Berkeley. He runs an experimental cognition laboratory where he and his students conduct research on human language and cognition using large-scale behavioral experiments, computational modeling, and machine learning. In this episode, we explore the impact of cultural evolution…
 
Ben Eysenbach is a PhD student at CMU and a student researcher at Google Brain. He is co-advised by Sergey Levine and Ruslan Salakhutdinov, and his research focuses on developing RL algorithms that achieve state-of-the-art performance while being simpler, more scalable, and more robust. Recent problems he’s tackled include long-horizon reasoning, exploration…
 
Jim Fan is a research scientist at NVIDIA and got his PhD at Stanford under Fei-Fei Li. Jim is interested in building generally capable autonomous agents, and he recently published MineDojo, a massively multiscale benchmarking suite built on Minecraft, which was an Outstanding Paper at NeurIPS. In this episode, we discuss the foundation models for …
 
Sergey Levine, an assistant professor of EECS at UC Berkeley, is one of the pioneers of modern deep reinforcement learning. His research focuses on developing general-purpose algorithms for autonomous agents to learn how to solve any task. In this episode, we talk about the bottlenecks to generalization in reinforcement learning, why simulation is …
 
Noam Brown is a research scientist at FAIR. During his Ph.D. at CMU, he made the first AI to defeat top humans in No-Limit Texas Hold’em poker. More recently, he was part of the team that built CICERO, which achieved human-level performance in Diplomacy. In this episode, we extensively discuss ideas underlying both projects, the power of spending c…
 
Sugandha Sharma is a Ph.D. candidate at MIT advised by Prof. Ila Fiete and Prof. Josh Tenenbaum. She explores the computational and theoretical principles underlying higher cognition in the brain by constructing neuro-inspired models and mathematical tools to discover how the brain navigates the world, or how to construct memory mechanisms that don…
 
Nicklas Hansen is a Ph.D. student at UC San Diego advised by Prof. Xiaolong Wang and Prof. Hao Su. He is also a student researcher at Meta AI. Nicklas’ research interests involve developing machine learning systems, specifically neural agents, that have the ability to learn, generalize, and adapt over their lifetime. In this episode, we talk about lo…
 
Jack Parker-Holder recently joined DeepMind after his Ph.D. with Stephen Roberts at Oxford. Jack is interested in using reinforcement learning to train generally capable agents, especially via an open-ended learning process where environments can adapt to constantly challenge the agent's capabilities. Before doing his Ph.D., Jack worked for 7 years…
 
Celeste Kidd is a professor of psychology at UC Berkeley. Her lab studies the processes involved in knowledge acquisition; essentially, how we form our beliefs over time and what allows us to select a subset of all the information we encounter in the world to form those beliefs. In this episode, we chat about attention and curiosity, beliefs and ex…
 
Archit Sharma is a Ph.D. student at Stanford advised by Chelsea Finn. His recent work is focused on autonomous deep reinforcement learning—that is, getting real world robots to learn to deal with unseen situations without human interventions. Prior to this, he was an AI resident at Google Brain and he interned with Yoshua Bengio at Mila. In this ep…
 
Chelsea Finn is an assistant professor at Stanford and part of the Google Brain team. She's interested in the capability of robots and other agents to develop broadly intelligent behavior through learning and interaction at scale. In this episode, we chat about some of the biggest bottlenecks in RL and robotics—including distribution shifts, Sim2Real…
 
Hattie Zhou is a Ph.D. student at Mila working with Hugo Larochelle and Aaron Courville. Her research focuses on understanding how and why neural networks work, starting with deconstructing why lottery tickets work and most recently exploring how forgetting may be fundamental to learning. Prior to Mila, she was a data scientist at Uber and did rese…
 
Minqi Jiang is a Ph.D. student at UCL and FAIR, advised by Tim Rocktäschel and Edward Grefenstette. Minqi is interested in how simulators can enable AI agents to learn useful behaviors that generalize to new settings. He is especially focused on problems at the intersection of generalization, human-AI coordination, and open-ended systems. In this e…
 
Oleh Rybkin is a Ph.D. student at the University of Pennsylvania and a student researcher at Google. He is advised by Kostas Daniilidis and Sergey Levine. Oleh's research focus is on reinforcement learning, particularly unsupervised and model-based RL in the visual domain. In this episode, we discuss agents that explore and plan (and do yoga), how …
 
Andrew Lampinen is a Research Scientist at DeepMind. He previously completed his Ph.D. in cognitive psychology at Stanford. In this episode, we discuss generalization and transfer learning, how to think about language and symbols, what AI can learn from psychology (and vice versa), mental time travel, and the need for more human-like tasks.
 
Martín Arjovsky did his Ph.D. at NYU with Léon Bottou. Some of his well-known works include the Wasserstein GAN and a paradigm called Invariant Risk Minimization. In this episode, we discuss out-of-distribution generalization, geometric information theory, and the importance of good benchmarks.
 
Yash Sharma is a Ph.D. student at the International Max Planck Research School for Intelligent Systems. He previously studied electrical engineering at Cooper Union and has spent time at Borealis AI and IBM Research. Yash’s early work was on adversarial examples and his current research interests span a variety of topics in representation disentang…
 
Jonathan Frankle is finishing his PhD at MIT, advised by Michael Carbin. His main research interest is using experimental methods to understand the behavior of neural networks. His current work focuses on finding sparse, trainable neural networks. Highlights from our conversation: 🕸 "Why is sparsity everywhere? This i…
 
Jacob Steinhardt is an assistant professor at UC Berkeley. His main research interest is designing machine learning systems that are reliable and aligned with human values. Some of his specific research directions include robustness, reward specification and reward hacking, as well as scalable alignment. Highlights: 📜…
 
Vincent Sitzmann is a postdoc at MIT. His work is on neural scene representations in computer vision. Ultimately, he wants to make representations that AI agents can use to solve the same visual tasks humans solve regularly, but that are currently impossible for AI. Highlights from our conversation: 👁 “Vision is about…
 
Dylan Hadfield-Menell recently finished his PhD at UC Berkeley and is starting as an assistant professor at MIT. He works on the problem of designing AI algorithms that pursue the intended goal of their users, designers, and society in general. This is known as the value alignment problem. Highlights from our conversation…
 
Drew Linsley is a Paul J. Salem senior research associate at Brown, advised by Thomas Serre. He is working on building computational models of the visual system that serve the dual purpose of (1) explaining biological function and (2) extending artificial vision. Highlights from our conversation: 🧠 Building neural-inspired inductive…
 
Giancarlo Kerg is a PhD student at Mila, supervised by Yoshua Bengio and Guillaume Lajoie. He is working on out-of-distribution generalization and modularity in memory-augmented neural networks. Highlights from our conversation: 🧮 Pure math foundations as an approach to progress and structural understanding in deep learning research…
 
Yujia Huang is a PhD student at Caltech, working at the intersection of deep learning and neuroscience. She worked on optics and biophotonics before venturing into machine learning. Now, she hopes to design “less artificial” artificial intelligence. Highlights from our conversation: 🏗 How recurrent generative feedback, a neuro-inspired de…
 
Julian Chibane is a PhD student in the Real Virtual Humans group at the Max Planck Institute for Informatics in Germany. His recent work centers on implicit functions for 3D reconstruction. Highlights from our conversation: 🖼 How, surprisingly, the IF-Net architecture learned reasonable representations of humans & objects with…
 
Katja Schwarz came to machine learning from physics and is now working on 3D geometric scene understanding at the Max Planck Institute for Intelligent Systems. Her most recent work, “Generative Radiance Fields for 3D-Aware Image Synthesis,” revealed that radiance fields are a powerful representation for generative image synthesis, leading to 3D c…
 
Joel Lehman was previously a founding member at Uber AI Labs and assistant professor at the IT University of Copenhagen. He's now a research scientist at OpenAI, where he focuses on open-endedness, reinforcement learning, and AI safety. Joel’s PhD dissertation introduced the novelty search algorithm. That work inspired him to write the popular scie…
 
Cinjon Resnick was formerly at Google Brain and is now doing his PhD at NYU. We talk about why he believes scene understanding is critical to out-of-distribution generalization, and how his theses have evolved since he started his PhD. Some topics we cover: how Cinjon started his research by trying to grow a baby through language and games, before…
 
Sarah Jane Hong is the co-founder of Latent Space, a startup building the first fully AI-rendered 3D engine in order to democratize creativity. We touch on what it was like taking classes under Geoff Hinton in 2013, the trouble with using natural language prompts to render a scene, why a model’s ability to scale is more important than getting state…
 