
Logan Molnar Podcasts

show episodes
 
No Sleep Sleepover

Logan Molnar & Matt Hribar

Monthly
 
Remember those sleepovers where you stayed up chatting, gossiping, playing games and doing fun things? It's like that, in podcast form! Join Logan Molnar and Matt Hribar as they have a good old random time in this new podcast by Starvolt Studios.
 
Judgement Day

with Matt Hribar, Logan Molnar and Raina Beutel

Monthly
 
We take spicy Reddit posts and cast our judgements on situations, stories and experiences. It's time for judgement! In this podcast mini-series, our panel of hosts analyzes posts from the social media app Reddit. Season one explores posts from Reddit's "Am I The Asshole" thread, where individuals post their situations and ask whether they were the asshole. Season two is a holiday special of "Am I The Asshole" thread topics. For more programming, check out www.starvoltstudio ...
 
Mystery AI Hype Theater 3000

Emily M. Bender and Alex Hanna

Monthly+
 
Artificial Intelligence has too much hype. In this podcast, linguist Emily M. Bender and sociologist Alex Hanna break down the AI hype, separating fact from fiction and science from bloviation. They're joined by special guests and talk about everything from machine consciousness to science fiction, political economy, and art made by machines.
 
show series
 
Trump’s “AI Action Plan” is his latest attempt to turn AI hype into official national policy. Kate Brennan and Sarah Myers West, of the AI Now Institute, join us to dig through this pile of deregulatory gifts to Big Tech. Dr. Sarah Myers West is co-executive director of the AI Now Institute, and a former senior advisor on AI for the FTC. Dr. Kate B…
 
After many months of making fun of the term "vibe coding," Emily and Alex tackle the LLMs-as-coders fad head-on, with help from security researcher Susanna Cox. From one person's screed that proclaims everyone not on the vibe-coding bandwagon to be crazy, to the grandiose claim that LLMs could be the "opposable thumb" of the entire world of computi…
 
The chatbot boosters are looking for educators to play brand ambassador for more intrusion of so-called "AI" into the classroom. From the American Federation of Teachers' new partnership with OpenAI and Microsoft for a "national academy for AI instruction" to yet more articles extolling the alleged time-saving and future-proofing virtues of LLM-pow…
 
It's finally here! The AI Con: How to Fight Big Tech's Hype and Create the Future We Want hit the shelves in May. In this special bonus episode, Alex and Emily speak to tech journalist Vauhini Vara at one of the book's online launch events, where they covered the misleading nature of the term "artificial intelligence," why the use of tools like Cha…
 
Because Sam Altman hates opening his laptop, OpenAI is merging with iPhone guy Jony Ive's design firm in the name of some mysterious new ChatGPT-enabled consumer products: Alex and Emily go full Mystery Science Theater and dissect the announcement video. Plus how tech billionaires like Sam Altman mythologize San Francisco while their money makes it…
 
This week, Alex and Emily talk with anthropologist and immigration lawyer Petra Molnar about the dehumanizing hype of border-enforcement tech. From hoovering up data to hunt anyone of ambiguous citizenship status, to running surveillance of physical borders themselves, "AI" tech is everywhere in the enforcement of national borders. And as companies…
 
Emily and Alex pore through an elaborate science fiction scenario about the "inevitability" of Artificial General Intelligence or AGI by the year 2027 - which rests atop a foundation of TESCREAL nonsense, and Sinophobia to boot. References: AI 2027 Fresh AI Hell: AI persona bots for undercover cops Palantir heart eyes Keir Starmer Anti-vaxxers are …
 
It's been four months since we last cleared the backlog of Fresh AI Hell, and the bullshit is coming in almost too fast to keep up with. But between a page full of awkward unicorns and a seeming slowdown in data center demand, Alex and Emily have more good news than usual to accompany this round of catharsis. AI Hell: LLM processing like human language p…
 
After "AI" stopped meaning anything, the hype salesmen moved on to "AI" "agents", those allegedly indefatigable assistants, allegedly capable of operating your software for you -- whether you need to make a restaurant reservation, book a flight, or book a flight to a restaurant reservation. Hugging Face's Margaret Mitchell joins Emily and Alex to h…
 
Measuring your talk time? Counting your filler words? What about "analyzing" your "emotions"? Companies that push LLM technology to surveil and summarize video meetings are increasingly offering to (purportedly) analyze your participation and assign your speech some metrics, all in the name of "productivity". Sociolinguist Nicole Holliday joins Ale…
 
Emily and Alex read a terrible book so you don't have to! Come for a quick overview of LinkedIn co-founder and venture capitalist Reid Hoffman's opus of magical thinking, 'Superagency: What could possibly go right with our AI future' -- stay for the ridicule as praxis. Plus, why even this tortuous read offers a bit of comfort about the desperate st…
 
In the weeks since January 20, the US information ecosystem has been unraveling fast. (We're looking at you Denali, Gulf of Mexico, and every holiday celebrating people of color and queer people that used to be on Google Calendar.) As the country's unelected South African tech billionaire continues to run previously secure government data through h…
 
Sam Altman thinks fusion - particularly a company he's personally invested in - can provide the energy we "need" to develop AGI. Meanwhile, what if we just...put data centers on the Moon to save energy? Alex, Emily, and guest Tamara Kneese pour cold water on Silicon Valley's various unhinged, technosolutionist ideas about energy and the environment…
 
In January, the United Kingdom's new Labour Party prime minister, Keir Starmer, announced a new initiative to go all in on AI in the hopes of big economic returns, with a promise to “mainline” it into the country’s veins: everything from offering public data to private companies, to potentially fast-tracking miniature nuclear power plants to supply…
 
Not only is OpenAI's new o3 model allegedly breaking records for how close an LLM can get to the mythical "human-like thinking" of AGI, but Sam Altman has some, uh, reflections for us as he marks two years since the official launch of ChatGPT. Emily and Alex kick off the new year unraveling these truly fantastical stories. References: OpenAI o3 Bre…
 
It’s been a long year in the AI hype mines. And no matter how many claims Emily and Alex debunk, there's always a backlog of Fresh AI Hell. This week, another whirlwind attempt to clear it, with plenty of palate cleansers along the way. Fresh AI Hell: Part I: Education Medical residency assignments "AI generated" UCLA course "Could ChatGPT get an e…
 
Once upon a time, artificial general intelligence was the only business plan OpenAI seemed to have. Tech journalist Brian Merchant joins Emily and Alex for a time warp to the beginning of the current wave of AI hype, nearly a decade ago. And it sure seemed like Elon Musk, Sam Altman, and company were luring investor dollars to their newly-formed ve…
 
From Bill Gates to Mark Zuckerberg, billionaires with no education expertise keep using their big names and big dollars to hype LLMs for classrooms. Promising ‘comprehensive AI tutors', or just ‘educator-informed’ tools to address understaffed classrooms, this hype is just another round of Silicon Valley pointing to real problems -- under-supported…
 
The company behind ChatGPT is back with a bombastic claim that their new o1 model is capable of so-called "complex reasoning." Ever-faithful, Alex and Emily tear it apart. Plus the flaws in a tech publication's new 'AI hype index,' and some palate-cleansing new regulation against data-scraping worker surveillance. References: OpenAI: Learning to rea…
 
Technology journalist Paris Marx joins Alex and Emily for a conversation about the environmental harms of the giant data centers and other water- and energy-hungry infrastructure at the heart of LLMs and other generative tools like ChatGPT -- and why the hand-wavy assurances of CEOs that 'AI will fix global warming' are just magical thinking, ignor…
 
Can “AI” do your science for you? Should it be your co-author? Or, as one company asks, boldly and breathlessly, “Can we automate the entire process of research itself?” Major scientific journals have banned the use of tools like ChatGPT in the writing of research papers. But people keep trying to make “AI Scientists” a thing. Just ask your chatbot…
 
Did your summer feel like an unending barrage of terrible ideas for how to use “AI”? You’re not alone. It's time for Emily and Alex to clear out the poison, purge some backlog, and take another journey through AI hell -- from surveillance of emotions, to continued hype in education and art. Fresh AI Hell: Synthetic data for Hollywood test screening…
 
Dr. Clara Berridge joins Alex and Emily to talk about the many 'uses' for generative AI in elder care -- from "companionship," to "coaching" like medication reminders and other encouragements toward healthier (and, for insurers, cost-saving) behavior. But these technologies also come with questionable data practices and privacy violations. And as p…
 
The Washington Post is going all in on AI -- surely this won't be a repeat of any past, disastrous newsroom pivots! 404 Media journalist Samantha Cole joins to talk journalism, LLMs, and why synthetic text is the antithesis of good reporting. References: The Washington Post Tells Staff It’s Pivoting to AI: "AI everywhere in our newsroom." Response:…
 
Could this meeting have been an e-mail that you didn't even have to read? Emily and Alex are tearing into the lofty ambitions of Zoom CEO Eric Yuan, who claims the future is an LLM-powered 'digital twin' that can attend meetings in your stead, make decisions for you, and even be tuned to different parameters with just the click of a button. Referenc…
 
We regret to report that companies are still trying to make generative AI that can 'transform' healthcare -- but without investing in the wellbeing of healthcare workers or other aspects of actual patient care. Registered nurse and nursing care advocate Michelle Mahon joins Emily and Alex to explain why generative AI falls far, far short of the wor…
 
When is a research paper not a research paper? When a big tech company uses a preprint server as a means to dodge peer review -- in this case, of their wild speculations on the 'dangerous capabilities' of large language models. Ali Alkhatib joins Emily to explain why a recent Google DeepMind document about the hunt for evidence that LLMs might inte…
 
You've already heard about the rock-prescribing, glue pizza-suggesting hazards of Google's AI overviews. But the problems with the internet's most-used search engine go way back. UCLA scholar and "Algorithms of Oppression" author Safiya Noble joins Alex and Emily in a conversation about how Google has long been breaking our information ecosystem in…
 
The politicians are at it again: Senate Majority Leader Chuck Schumer's series of industry-centric forums last year have birthed a "roadmap" for future legislation. Emily and Alex take a deep dive on this report, and conclude that the time spent writing it could have instead been spent...making useful laws. References: Driving US Innovation in Arti…
 
Will the LLMs somehow become so advanced that they learn to lie to us in order to achieve their own ends? It's the stuff of science fiction, and in science fiction these claims should remain. Emily and guest host Margaret Mitchell, machine learning researcher and chief ethics scientist at HuggingFace, break down why 'AI deception' is firmly a featu…
 
AI Hell froze over this winter and now a flood of meltwater threatens to drown Alex and Emily. Armed with raincoats and a hastily-written sea shanty*, they tour the realms, from spills of synthetic information to the special corner reserved for ShotSpotter. *Lyrics & video on PeerTube. Surveillance: Public kiosks slurp phone data Workplace surv…
 
Will AI someday do all our scientific research for us? Not likely. Drs. Molly Crockett and Lisa Messeri join for a takedown of the hype of "self-driving labs" and why such misrepresentations also harm the humans who are vital to scientific research. Dr. Molly Crockett is an associate professor of psychology at Princeton University. Dr. Lisa Messeri…
 
Dr. Timnit Gebru guest-hosts with Alex in a deep dive into Marc Andreessen's 2023 manifesto, which argues, loftily, in favor of maximizing the use of 'AI' in all possible spheres of life. Timnit Gebru is the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR). Prior to that she was fired by Google, wh…
 
Award-winning AI journalist Karen Hao joins Alex and Emily to talk about why LLMs can't possibly replace the work of reporters -- and why the hype is damaging to already-struggling and necessary publications. References: Adweek: Google Is Paying Publishers to Test an Unreleased Gen AI Platform The Quint: AI Invents Quote From Real Person in Article…
 
Alex and Emily put on their social scientist hats and take on the churn of research papers suggesting that LLMs could be used to replace human labor in social science research -- or even human subjects. Why these writings are essentially calls to fabricate data. References: PNAS: ChatGPT outperforms crowd workers for text-annotation tasks Beware th…
 
Science fiction authors and all-around tech thinkers Annalee Newitz and Charlie Jane Anders join this week to talk about Isaac Asimov's oft-cited and equally often misunderstood laws of robotics, as debuted in his short story collection, 'I, Robot.' Meanwhile, both global and US military institutions are declaring interest in 'ethical' frameworks f…
 
Just Tech Fellow Dr. Chris Gilliard aka "Hypervisible" joins Emily and Alex to talk about the wave of universities adopting AI-driven educational technologies, and the lack of protections they offer students in terms of data privacy or even emotional safety. References: Inside Higher Ed: Arizona State Joins ChatGPT in First Higher Ed Partnership AS…
 
Is ChatGPT really going to take your job? Emily and Alex unpack two hype-tastic papers that make implausible claims about the number of workforce tasks LLMs might make cheaper, faster or easier. And why bad methodology may still trick companies into trying to replace human workers with mathy-math. Visit us on PeerTube for the video of this conversa…
 
New year, same Bullshit Mountain. Alex and Emily are joined by feminist technosolutionism critics Eleanor Drage and Kerry McInerney to tear down the ways AI is proposed as a solution to structural inequality, including racism, ableism, and sexism -- and why this hype can occlude the need for more meaningful changes in institutions. Dr. Eleanor Drag…
 
AI Hell has frozen over for a single hour. Alex and Emily visit all seven circles in a tour of the worst in bite-sized BS. References: Pentagon moving toward letting AI weapons autonomously kill humans NYC Mayor uses AI to make robocalls in languages he doesn’t speak University of Michigan investing in OpenAI Tesla: claims of “full self-driving” ar…
 
Congress spent 2023 busy with hearings to investigate the capabilities, risks and potential uses of large language models and other 'artificial intelligence' systems. Alex and Emily, plus journalist Justin Hendrix, talk about the limitations of these hearings, the alarmist fixation on so-called 'p(doom)' and overdue laws on data privacy. Justin Hen…
 
Researchers Sarah West and Andreas Liesenfeld join Alex and Emily to examine what software companies really mean when they say their work is 'open source,' and call for greater transparency. This episode was recorded on November 20, 2023. Dr. Sarah West is the managing director of the AI Now Institute. Her award-winning research and writing blends …
 
Emily and Alex time travel back to a conference of men who gathered at Dartmouth College in the summer of 1956 to examine problems relating to computation and "thinking machines," an event commonly mythologized as the founding of the field of artificial intelligence. But our crack team of AI hype detectives is on the case with a close reading of th…
 
Drs. Emma Strubell and Sasha Luccioni join Emily and Alex for an environment-focused hour of AI hype. How much carbon does a single use of ChatGPT emit? What about the water or energy consumption of manufacturing the graphics processing units that train various large language models? Why even catastrophic estimates from well-meaning researchers may…
 
Emily and Alex read through Google vice president Blaise Aguera y Arcas' recent proclamation that "artificial general intelligence is already here." Why this claim is a maze of hype and moving goalposts. References: Noema Magazine: "Artificial General Intelligence Is Already Here." "AI and the Everything in the Whole Wide World Benchmark" "Targetin…
 
Spooky, Scary, CRAZY? It's time for judgement! In this podcast mini-series, hosts analyze posts from the social media app Reddit. Season one explores posts from Reddit's "Am I The Asshole" thread, where individuals post their situations and ask whether they were the asshole. With Raina Beutel and Logan Molnar. Season two focuses excl…
 
It's time for judgement! In this podcast mini-series, hosts analyze posts from the social media app Reddit. Season one explores posts from Reddit's "Am I The Asshole" thread, where individuals post their situations and ask whether they were the asshole. With Raina Beutel and Logan Molnar. Season three focuses exclusivel…
 
Emily and Alex are joined by Stanford PhD student Haley Lepp to examine the increasing hype around LLMs in education spaces - whether they're pitched as ways to reduce teacher workloads, increase accessibility, or simply "democratize learning and knowing" in the Global South. Plus a double dose of devaluing educator expertise and fatalism about t…
 
Alex and Emily are taking another stab at Google and other companies' aspirations to be part of the healthcare system - this time with the expertise of Stanford incoming assistant professor of dermatology and biomedical data science Roxana Daneshjou. A look at the gap between medical licensing examination questions and real life, and the inherently…
 
Emily and Alex tackle the White House hype about the 'voluntary commitments' of companies to limit the harms of their large language models: but only some large language models, and only some, over-hyped kinds of harms. Plus a full portion of Fresh Hell...and a little bit of good news. References: White House press release on voluntary commitments …
 
Copyright 2025 | Privacy Policy | Terms of Service