
Machine Learning Street Talk Podcasts

Machine Learning Street Talk (MLST)

Weekly
 
Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D (https://www.linkedin.com/in/ecsquizor/) and features regular ...
 
The Talking Tuesdays Podcast covers quantitative topics, focused mainly on quantitative finance, data science, machine learning, career development, and technical subjects. Join me for insight from a risk management professional on how the industry works and how to break in!
 
Wise Money Tools

Dan Thompson

Monthly
 
Learn everything you want to know about Infinite Banking, Leveraging Life Insurance, Real Estate, Bitcoin, Bitcoin mining and ways to skyrocket your wealth. Dan has been in the financial planning business for almost 40 years. Starting out as a stockbroker, Dan learned quickly that the Wall Street methods simply do not work. Join Dan on his journey to finding the most predictable, wealth-building investments that Wall Street won't tell you about. Don't miss the Wise Money Tools Podcast--with ...
 
 
Deep dive with Dan Hendrycks, a leading AI safety researcher and co-author of the "Superintelligence Strategy" paper with former Google CEO Eric Schmidt and Scale AI CEO Alexandr Wang. *** SPONSOR MESSAGES Gemini CLI is an open-source AI agent that brings the power of Gemini directly into your terminal - https://github.com/google-gemini/gemini-cli …
 
Send us a text Roman Bansal is the founder of NanoConda. We discuss growing up in Russia, the joy of reading books, and how NanoConda can help you set up software, API, hardware, and colocation for HFT (high frequency trading) for smaller firms. We also discuss why Dallas, Texas is growing in the quant space as many firms are locating here. NanoCon…
 
Send us a text Tribhuvan Bisen is a co-founder of Quant Insider. We learn about his journey from his education to working at Deutsche and then starting Quant Insider. We also discuss the quant job and education market in India and what it takes to be a quant. Quant Insider: https://quantinsider.io/ https://www.linkedin.com/company/quant-insider/ Tr…
 
Got cash value in your policy? Most people let it sit there… growing slowly. Here’s the smarter play: 💵 Keep it compounding in the policy 📈 Borrow against it to buy appreciating or cash-flowing assets 🏠 Real estate, private notes, equipment leasing, even Bitcoin mining It’s the ultimate double dip — one dollar working in two places at once. Plus, y…
 
This episode features Shlomi Fuchter and Jack Parker-Holder from Google DeepMind, who are unveiling a new AI called Genie 3. The host, Tim Scarfe, describes it as the most mind-blowing technology he has ever seen. We were invited to their offices to conduct the interview (not sponsored). Imagine you could create a video game world just by describing…
 
Send us a text Fred Viole is the founder of OVVO Labs and has been putting together a complete statistical framework using partial moments and nonlinear, nonparametric statistics (NNS). He also maintains a free R package called NNS. The application of NNS to finance is proprietary and is what OVVO Labs uses to sell macroeconomic forecasts, a u…
 
Prof. David Krakauer, President of the Santa Fe Institute argues that we are fundamentally confusing knowledge with intelligence, especially when it comes to AI. He defines true intelligence as the ability to do more with less—to solve novel problems with limited information. This is contrasted with current AI models, which he describes as doing le…
 
Send us a text Project Phoenix is me re-organizing my life. I got an offer to be a CRO and, instead of taking it, I quit my job, sold my honeybees, and decided to run a half marathon. I started my own business called "Fancy Quant LLC", where I will consult in quant research, risk management, career development, and academic program consulting and ad…
 
Have cash value building up? Don’t waste it. The wealthy don’t borrow to buy toys. They borrow to build wealth: 🏘️ Real estate: multifamily, office, student housing 📜 Land notes: 10–12% returns ⛏️ Bitcoin mining equipment 📦 E-commerce & gear leasing 🚀 Even your own business All while your policy keeps compounding tax-free. 👊 Use cash value to creat…
 
Still think you can retire pulling 10% a year from your portfolio? Even Dave Ramsey says it’s possible — but math says otherwise: 💸 10% withdrawal = 3.3% chance your money lasts 📉 $1M portfolio gives you $40K/year (4%) 💀 That’s before taxes Wall Street’s model is broken. The wealthy use tax-free strategies that allow 2–3x more cashflow — with less …
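The blurb's arithmetic can be checked directly. A minimal sketch in plain Python (the 3.3% survival probability is the episode's claim, not something this calculation derives):

```python
# Annual pre-tax income at different withdrawal rates on a $1M portfolio.
portfolio = 1_000_000

for rate in (0.04, 0.10):
    income = portfolio * rate
    print(f"{rate:.0%} withdrawal -> ${income:,.0f}/year before taxes")
```

At 4% this reproduces the $40K/year figure quoted above; the 10% rate triples the income but, per the episode, collapses the odds that the portfolio lasts.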
 
Send us a text Raphael Douady is a French mathematician who works in both academia as well as quantitative finance. His specialization is in chaos theory and financial mathematics. In this interview he shares how he got into mathematics and why he left for quantitative finance. We also briefly discuss AI in the finance space as he has a paid semina…
 
Dr. Maxwell Ramstead grills Guillaume Verdon (AKA “Beff Jezos”) who's the founder of Thermodynamic computing startup Extropic. Guillaume shares his unique path – from dreaming about space travel as a kid to becoming a physicist, then working on quantum computing at Google, to developing a radically new form of computing hardware for machine learnin…
 
Send us a text I sit down with Databento's CEO, Christina Qi, to discuss how she started Databento and why their data product is the best. It turns out there are a lot of features that firms want, such as how data is structured, cleaned, and transferred, which make a big difference, especially in the finance and investing space. Learn mo…
 
Send us a text Welcome to Season 8 of Talking Tuesdays with Fancy Quant! This season I will bring in more guest speakers from the quantitative finance community to talk about data, business, math, stats, and their journeys through life. OVVO Labs is a proud sponsor of Talking Tuesdays with Fancy Quant! www.OVVOLabs.com Support the show…
 
Are the AI models you use today imposters? Please watch the intro video we did before this: https://www.youtube.com/watch?v=o1q6Hhz0MAg In this episode, hosts Dr. Tim Scarfe and Dr. Duggar are joined by AI researcher Prof. Kenneth Stanley and MIT PhD student Akash Kumar to discuss their fascinating paper, "Questioning Representational Optimism in D…
 
What if today's incredible AI is just a brilliant "impostor"? This episode features host Dr. Tim Scarfe in conversation with guests Prof. Kenneth Stanley (ex-OpenAI), Dr. Keith Duggar (MIT), and Akash Kumar (MIT). While AI today produces amazing results on the surface, its internal understanding is a complete mess, described as "total spaghetti" [0…
 
What if the most powerful technology in human history is being built by people who openly admit they don't trust each other? In this explosive 2-hour debate, three AI experts pull back the curtain on the shocking psychology driving the race to Artificial General Intelligence—and why the people building it might be the biggest threat of all. Kokotaj…
 
We interview Professor Christopher Summerfield from Oxford University about his new book "These Strange New Minds: How AI Learned to Talk and What It". AI learned to understand the world just by reading text - something scientists thought was impossible. You don't need to see a cat to know what one is; you can learn everything from words alone. Thi…
 
"Blurring Reality" - Chai's Social AI Platform - sponsored This episode of MLST explores the groundbreaking work of Chai, a social AI platform that quietly built one of the world's largest AI companion ecosystems before ChatGPT's mainstream adoption. With over 10 million active users and just 13 engineers serving 2 trillion tokens per day, Chai dis…
 
Send us a text This is the presentation I gave at the Quaint Quant Conference 2025. The goal of the conference is to bring together more people to share ideas and collaborate. I touch on some of the main groups of people in the quantitative finance community and what each group can do to build a better community. The four groups are outsiders, acad…
 
Today Google DeepMind released AlphaEvolve: a Gemini coding agent for algorithm discovery. It beat Strassen's famous matrix-multiplication algorithm, a record set 56 years ago. Google has been killing it recently. We had early access to the paper and interviewed the researchers behind the work. AlphaEvolve: A Gemini-powered coding agent for designing ad…
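For context, the 56-year-old record is Strassen's 1969 scheme, which multiplies two 2x2 matrices with 7 scalar multiplications instead of the naive 8. A minimal sketch in plain Python (illustrative only; this is the classical baseline, not the algorithm AlphaEvolve discovered):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using 7 multiplications (Strassen, 1969)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # Seven products replace the eight of the schoolbook method.
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))

# Matches the schoolbook result:
# strassen_2x2(((1, 2), (3, 4)), ((5, 6), (7, 8)))  ->  ((19, 22), (43, 50))
```

Applied recursively to matrix blocks, saving one multiplication per 2x2 step is what drops the asymptotic cost below cubic, which is why shaving even a single multiplication, as the episode discusses, is a big deal.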
 
Randall Balestriero joins the show to discuss some counterintuitive findings in AI. He shares research showing that huge language models, even when started from scratch (randomly initialized) without massive pre-training, can learn specific tasks like sentiment analysis surprisingly well, train stably, and avoid severe overfitting, sometimes matchi…
 
Prof. Kevin Ellis and Dr. Zenna Tavares talk about making AI smarter, like humans. They want AI to learn from just a little bit of information by actively trying things out, not just by looking at tons of data. They discuss two main ways AI can "think": one way is like following specific rules or steps (like a computer program), and the other is mo…
 
Eiso Kant, CTO of poolside AI, discusses the company's approach to building frontier AI foundation models, particularly focused on software development. Their unique strategy is reinforcement learning from code execution feedback which is an important axis for scaling AI capabilities beyond just increasing model size or data volume. Kant predicts h…
 
Connor Leahy and Gabriel Alfour, AI researchers from Conjecture and authors of "The Compendium," join us for a critical discussion centered on Artificial Superintelligence (ASI) safety and governance. Drawing from their comprehensive analysis in "The Compendium," they articulate a stark warning about the existential risks inherent in uncontrolled AI…
 
We are joined by Francois Chollet and Mike Knoop, to launch the new version of the ARC prize! In version 2, the challenges have been calibrated with humans such that at least 2 humans could solve each task in a reasonable time, but also adversarially selected so that frontier reasoning models can't solve them. The best LLMs today get negligible per…
 
Mohamed Osman joins to discuss MindsAI's highest scoring entry to the ARC challenge 2024 and the paradigm of test-time fine-tuning. They explore how the team, now part of Tufa Labs in Zurich, achieved state-of-the-art results using a combination of pre-training techniques, a unique meta-learning strategy, and an ensemble voting mechanism. Mohamed e…
 
Iman Mirzadeh from Apple, who recently published the GSM-Symbolic paper discusses the crucial distinction between intelligence and achievement in AI systems. He critiques current AI research methodologies, highlighting the limitations of Large Language Models (LLMs) in reasoning and knowledge representation. SPONSOR MESSAGES: *** Tufa AI Labs is a …
 
Dr. Max Bartolo from Cohere discusses machine learning model development, evaluation, and robustness. Key topics include model reasoning, the DynaBench platform for dynamic benchmarking, data-centric AI development, model training challenges, and the limitations of human feedback mechanisms. The conversation also covers technical aspects like influ…
 
This sponsored episode features mathematician Ohad Asor discussing logical approaches to AI, focusing on the limitations of machine learning and introducing the Tau language for software development and blockchain tech. Asor argues that machine learning cannot guarantee correctness. Tau allows logical specification of software requirements, automat…
 
John Palazza from CentML joins us in this sponsored interview to discuss the critical importance of infrastructure optimization in the age of Large Language Models and Generative AI. We explore how enterprises can transition from the innovation phase to production and scale, highlighting the significance of efficient GPU utilization and cost manage…
 
Federico Barbero (DeepMind/Oxford) is the lead author of "Transformers Need Glasses!". Have you ever wondered why LLMs struggle with seemingly simple tasks like counting or copying long strings of text? We break down the theoretical reasons behind these failures, revealing architectural bottlenecks and the challenges of maintaining information fide…
 
We speak with Sakana AI, who are building nature-inspired methods that could fundamentally transform how we develop AI systems. The guests include Chris Lu, a researcher who recently completed his DPhil at Oxford University under Prof. Jakob Foerster's supervision, where he focused on meta-learning and multi-agent systems. Chris is the first author…
 
Clement Bonnet discusses his novel approach to the ARC (Abstraction and Reasoning Corpus) challenge. Unlike approaches that rely on fine-tuning LLMs or generating samples at inference time, Clement's method encodes input-output pairs into a latent space, optimizes this representation with a search algorithm, and decodes outputs for new inputs. This…
 
Prof. Jakob Foerster, a leading AI researcher at Oxford University and Meta, and Chris Lu, a researcher at OpenAI, explain how AI is moving beyond just mimicking human behaviour to creating truly intelligent agents that can learn and solve problems on their own. Foerster champions open-source AI for responsible, decentralised development. He…
 
Daniel Franzen and Jan Disselhoff, the "ARChitects" are the official winners of the ARC Prize 2024. Filmed at Tufa Labs in Zurich - they revealed how they achieved a remarkable 53.5% accuracy by creatively utilising large language models (LLMs) in new ways. Discover their innovative techniques, including depth-first search for token selection, test…
 
Sepp Hochreiter, the inventor of LSTM (Long Short-Term Memory) networks – a foundational technology in AI. Sepp discusses his journey, the origins of LSTM, and why he believes his latest work, XLSTM, could be the next big thing in AI, particularly for applications like robotics and industrial simulation. He also shares his controversial perspective…
 
Professor Randall Balestriero joins us to discuss neural network geometry, spline theory, and emerging phenomena in deep learning, based on research presented at ICML. Topics include the delayed emergence of adversarial robustness in neural networks ("grokking"), geometric interpretations of neural networks via spline theory, and challenges in reco…
 
Copyright 2025