Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D (https://www.linkedin.com/in/ecsquizor/) and features regular ...
Machine Learning Street Talk Podcasts
The Talking Tuesdays Podcast covers quantitative topics, mainly focused on quantitative finance, data science, machine learning, career development, and technical subjects. Join me for some insight from a risk management professional on how the industry works and how to break in!
Learn everything you want to know about Infinite Banking, Leveraging Life Insurance, Real Estate, Bitcoin, Bitcoin mining and ways to skyrocket your wealth. Dan has been in the financial planning business for almost 40 years. Starting out as a stockbroker, Dan learned quickly that the Wall Street methods simply do not work. Join Dan on his journey to finding the most predictable, wealth-building investments that Wall Street won't tell you about. Don't miss the Wise Money Tools Podcast--with ...

Superintelligence Strategy (Dan Hendrycks)
1:45:38
Deep dive with Dan Hendrycks, a leading AI safety researcher and co-author of the "Superintelligence Strategy" paper with former Google CEO Eric Schmidt and Scale AI CEO Alexandr Wang. *** SPONSOR MESSAGES Gemini CLI is an open-source AI agent that brings the power of Gemini directly into your terminal - https://github.com/google-gemini/gemini-cli …
Send us a text Roman Bansal is the founder of NanoConda. We discuss growing up in Russia, the joy of reading books, and how NanoConda can help you set up software, API, hardware, and colocation for HFT (high frequency trading) for smaller firms. We also discuss why Dallas, Texas is growing in the quant space as many firms are locating here. NanoCon…
Send us a text Tribhuvan Bisen is a co-founder of Quant Insider. We learn about his journey from his education to working at Deutsche and then starting Quant Insider. We also discuss the quant job and education market in India and what it takes to be a quant. Quant Insider: https://quantinsider.io/ https://www.linkedin.com/company/quant-insider/ Tr…

Ep 345 - Turn Cash Value Into a Money Machine
8:02
Got cash value in your policy? Most people let it sit there… growing slowly. Here’s the smarter play: 💵 Keep it compounding in the policy 📈 Borrow against it to buy appreciating or cash-flowing assets 🏠 Real estate, private notes, equipment leasing, even Bitcoin mining It’s the ultimate double dip — one dollar working in two places at once. Plus, y…
Some advisors say you can withdraw 8–10% from retirement funds. The stats say otherwise—only a 3.3% chance it works! We show how to safely take 2–3x more than Wall Street’s 4% rule—tax-free.

DeepMind Genie 3 [World Exclusive] (Jack Parker Holder, Shlomi Fruchter)
58:22
This episode features Shlomi Fruchter and Jack Parker Holder from Google DeepMind, who are unveiling a new AI called Genie 3. The host, Tim Scarfe, describes it as the most mind-blowing technology he has ever seen. We were invited to their offices to conduct the interview (not sponsored). Imagine you could create a video game world just by describing…
Send us a text Fred Viole is the founder of OVVO Labs and has been putting together a complete statistical framework using partial moments and nonlinear and nonparametric statistics (NNS). He also maintains a free R package called NNS. The application of NNS to finance is proprietary and is what OVVO Labs uses to sell macroeconomic forecasts, a u…

Large Language Models and Emergence: A Complex Systems Perspective (Prof. David C. Krakauer)
49:48
Prof. David Krakauer, President of the Santa Fe Institute, argues that we are fundamentally confusing knowledge with intelligence, especially when it comes to AI. He defines true intelligence as the ability to do more with less—to solve novel problems with limited information. This is contrasted with current AI models, which he describes as doing le…
Send us a text Project Phoenix is me re-organizing my life. I got an offer to be a CRO and instead of taking it, I quit my job, sold my honeybees, and decided to run a half marathon. I started my own business called "Fancy Quant LLC", where I will consult in quant research, risk management, career development, and academic program consulting and ad…
Have cash value building up? Don’t waste it. The wealthy don’t borrow to buy toys. They borrow to build wealth: 🏘️ Real estate: multifamily, office, student housing 📜 Land notes: 10–12% returns ⛏️ Bitcoin mining equipment 📦 E-commerce & gear leasing 🚀 Even your own business All while your policy keeps compounding tax-free. 👊 Use cash value to creat…
Still think you can retire pulling 10% a year from your portfolio? Even Dave Ramsey says it’s possible — but math says otherwise: 💸 10% withdrawal = 3.3% chance your money lasts 📉 $1M portfolio gives you $40K/year (4%) 💀 That’s before taxes Wall Street’s model is broken. The wealthy use tax-free strategies that allow 2–3x more cashflow — with less …

Mathematician and Quant - Raphael Douady
1:11:09
Send us a text Raphael Douady is a French mathematician who works in both academia and quantitative finance. His specialization is in chaos theory and financial mathematics. In this interview he shares how he got into mathematics and why he left for quantitative finance. We also briefly discuss AI in the finance space as he has a paid semina…

Pushing compute to the limits of physics
1:23:32
Dr. Maxwell Ramstead grills Guillaume Verdon (AKA “Beff Jezos”), the founder of thermodynamic computing startup Extropic. Guillaume shares his unique path – from dreaming about space travel as a kid to becoming a physicist, then working on quantum computing at Google, to developing a radically new form of computing hardware for machine learnin…
Send us a text I sit down with Data Bento's CEO, Christina Qi to discuss how she started Data Bento and why their product of providing data is the best. It turns out there are a lot of features that firms want such as how data is structured, cleaned, and transferred which make a big difference especially in the finance and investing space. Learn mo…
Send us a text Welcome to Season 8 of Talking Tuesdays with Fancy Quant! This season I will bring in more guest speakers from the quantitative finance community to talk about data, business, math, stats, and their journeys through life. OVVO Labs is a proud sponsor of Talking Tuesdays with Fancy Quant! www.OVVOLabs.com Support the show…

The Fractured Entangled Representation Hypothesis (Kenneth Stanley, Akarsh Kumar)
2:16:22
Are the AI models you use today imposters? Please watch the intro video we did before this: https://www.youtube.com/watch?v=o1q6Hhz0MAg In this episode, hosts Dr. Tim Scarfe and Dr. Duggar are joined by AI researcher Prof. Kenneth Stanley and MIT PhD student Akarsh Kumar to discuss their fascinating paper, "Questioning Representational Optimism in D…

The Fractured Entangled Representation Hypothesis (Intro)
15:45
What if today's incredible AI is just a brilliant "impostor"? This episode features host Dr. Tim Scarfe in conversation with guests Prof. Kenneth Stanley (ex-OpenAI), Dr. Keith Duggar (MIT), and Akarsh Kumar (MIT). While AI today produces amazing results on the surface, its internal understanding is a complete mess, described as "total spaghetti" [0…

Three Red Lines We're About to Cross Toward AGI (Daniel Kokotajlo, Gary Marcus, Dan Hendrycks)
2:07:07
What if the most powerful technology in human history is being built by people who openly admit they don't trust each other? In this explosive 2-hour debate, three AI experts pull back the curtain on the shocking psychology driving the race to Artificial General Intelligence—and why the people building it might be the biggest threat of all. Kokotaj…

How AI Learned to Talk and What It Means - Prof. Christopher Summerfield
1:08:28
We interview Professor Christopher Summerfield from Oxford University about his new book "These Strange New Minds: How AI Learned to Talk and What It Means". AI learned to understand the world just by reading text - something scientists thought was impossible. You don't need to see a cat to know what one is; you can learn everything from words alone. Thi…

Ep 341 - STOP Buying Bitcoin the Wrong Way!
16:40

Ep 340 - Missed $100 BTC? Don’t Miss $100K!
13:21

Ep 339 - The IRS Doesn’t Want You to Hear This — Ed Lyon Reveals Legal Tax Hacks
44:12

"Blurring Reality" - Chai's Social AI Platform (SPONSORED)
50:59
This episode of MLST explores the groundbreaking work of Chai, a social AI platform that quietly built one of the world's largest AI companion ecosystems before ChatGPT's mainstream adoption. With over 10 million active users and just 13 engineers serving 2 trillion tokens per day, Chai dis…

Ep 338 - Wall Street Pays $100K. You Could Pay $14K!
9:09

Ep 337 - The Wealth Playbook the Banks & IRS Hope You Never See!
11:23
Send us a text This is the presentation I gave at the Quaint Quant Conference 2025. The goal of the conference is to bring together more people to share ideas and collaborate. I touch on some of the main groups of people in the quantitative finance community and what each group can do to build a better community. The four groups are outsiders, acad…

Google AlphaEvolve - Discovering new science (exclusive interview)
1:13:58
Today Google DeepMind released AlphaEvolve: a Gemini coding agent for algorithm discovery. It beat the record for matrix multiplication set by the famous Strassen algorithm 56 years ago. Google has been killing it recently. We had early access to the paper and interviewed the researchers behind the work. AlphaEvolve: A Gemini-powered coding agent for designing ad…

Ep 336 - Wall Street Can’t Talk About Bitcoin… But I Can!
8:58

Prof. Randall Balestriero - LLMs without pretraining and SSL
34:30
Randall Balestriero joins the show to discuss some counterintuitive findings in AI. He shares research showing that huge language models, even when started from scratch (randomly initialized) without massive pre-training, can learn specific tasks like sentiment analysis surprisingly well, train stably, and avoid severe overfitting, sometimes matchi…

How Machines Learn to Ignore the Noise (Kevin Ellis + Zenna Tavares)
1:16:55
Prof. Kevin Ellis and Dr. Zenna Tavares talk about making AI smarter, like humans. They want AI to learn from just a little bit of information by actively trying things out, not just by looking at tons of data. They discuss two main ways AI can "think": one way is like following specific rules or steps (like a computer program), and the other is mo…

Eiso Kant (CTO poolside) - Superhuman Coding Is Coming!
1:36:28
Eiso Kant, CTO of poolside AI, discusses the company's approach to building frontier AI foundation models, particularly focused on software development. Their unique strategy is reinforcement learning from code execution feedback, which is an important axis for scaling AI capabilities beyond just increasing model size or data volume. Kant predicts h…

The Compendium - Connor Leahy and Gabriel Alfour
1:37:10
Connor Leahy and Gabriel Alfour, AI researchers from Conjecture and authors of "The Compendium," join us for a critical discussion centered on Artificial Superintelligence (ASI) safety and governance. Drawing from their comprehensive analysis in "The Compendium," they articulate a stark warning about the existential risks inherent in uncontrolled AI…

Ep 335 - The Next Trillion-Dollar Market!
17:50

ARC Prize v2 Launch! (Francois Chollet and Mike Knoop)
54:15
We are joined by Francois Chollet and Mike Knoop to launch the new version of the ARC prize! In version 2, the challenges have been calibrated with humans such that at least 2 humans could solve each task in a reasonable time, but also adversarially selected so that frontier reasoning models can't solve them. The best LLMs today get negligible per…

Test-Time Adaptation: the key to reasoning with DL (Mohamed Osman)
1:03:36
Mohamed Osman joins to discuss MindsAI's highest scoring entry to the ARC challenge 2024 and the paradigm of test-time fine-tuning. They explore how the team, now part of Tufa Labs in Zurich, achieved state-of-the-art results using a combination of pre-training techniques, a unique meta-learning strategy, and an ensemble voting mechanism. Mohamed e…

GSM-Symbolic paper - Iman Mirzadeh (Apple)
1:11:23
Iman Mirzadeh from Apple, who recently published the GSM-Symbolic paper, discusses the crucial distinction between intelligence and achievement in AI systems. He critiques current AI research methodologies, highlighting the limitations of Large Language Models (LLMs) in reasoning and knowledge representation. SPONSOR MESSAGES: *** Tufa AI Labs is a …

Reasoning, Robustness, and Human Feedback in AI - Max Bartolo (Cohere)
1:23:11
Dr. Max Bartolo from Cohere discusses machine learning model development, evaluation, and robustness. Key topics include model reasoning, the DynaBench platform for dynamic benchmarking, data-centric AI development, model training challenges, and the limitations of human feedback mechanisms. The conversation also covers technical aspects like influ…

Tau Language: The Software Synthesis Future (sponsored)
1:41:19
This sponsored episode features mathematician Ohad Asor discussing logical approaches to AI, focusing on the limitations of machine learning and introducing the Tau language for software development and blockchain tech. Asor argues that machine learning cannot guarantee correctness. Tau allows logical specification of software requirements, automat…

John Palazza - Vice President of Global Sales @ CentML (sponsored)
54:50
John Palazza from CentML joins us in this sponsored interview to discuss the critical importance of infrastructure optimization in the age of Large Language Models and Generative AI. We explore how enterprises can transition from the innovation phase to production and scale, highlighting the significance of efficient GPU utilization and cost manage…

Transformers Need Glasses! - Federico Barbero
1:00:54
Federico Barbero (DeepMind/Oxford) is the lead author of "Transformers Need Glasses!". Have you ever wondered why LLMs struggle with seemingly simple tasks like counting or copying long strings of text? We break down the theoretical reasons behind these failures, revealing architectural bottlenecks and the challenges of maintaining information fide…

Ep 334 - The Future of Tokenization & Digital Assets Explained: Sovereign Wealth Fund Institute Event
8:23

Ep 333 - How Much Can You Make Mining Bitcoin?: Bitcoin Technical Analysis
6:41

Sakana AI - Chris Lu, Robert Tjarko Lange, Cong Lu
1:37:54
We speak with Sakana AI, who are building nature-inspired methods that could fundamentally transform how we develop AI systems. The guests include Chris Lu, a researcher who recently completed his DPhil at Oxford University under Prof. Jakob Foerster's supervision, where he focused on meta-learning and multi-agent systems. Chris is the first author…

Ep 332 - Bitcoin Price Prediction: BlackRock Bitcoin ETF Explained!
8:50

Clement Bonnet - Can Latent Program Networks Solve Abstract Reasoning?
51:26
Clement Bonnet discusses his novel approach to the ARC (Abstraction and Reasoning Corpus) challenge. Unlike approaches that rely on fine-tuning LLMs or generating samples at inference time, Clement's method encodes input-output pairs into a latent space, optimizes this representation with a search algorithm, and decodes outputs for new inputs. This…

Prof. Jakob Foerster - ImageNet Moment for Reinforcement Learning?
53:31
Prof. Jakob Foerster, a leading AI researcher at Oxford University and Meta, and Chris Lu, a researcher at OpenAI, explain how AI is moving beyond just mimicking human behaviour to creating truly intelligent agents that can learn and solve problems on their own. Foerster champions open-source AI for responsible, decentralised development. He…

Ep 331 - Using Leverage To Build Wealth: Best Way To Build Generational Wealth
11:05

Daniel Franzen & Jan Disselhoff - ARC Prize 2024 winners
1:09:04
Daniel Franzen and Jan Disselhoff, the "ARChitects", are the official winners of the ARC Prize 2024. Filmed at Tufa Labs in Zurich, they revealed how they achieved a remarkable 53.5% accuracy by creatively utilising large language models (LLMs) in new ways. Discover their innovative techniques, including depth-first search for token selection, test…

Sepp Hochreiter - LSTM: The Comeback Story?
1:07:01
Sepp Hochreiter is the inventor of LSTM (Long Short-Term Memory) networks – a foundational technology in AI. Sepp discusses his journey, the origins of LSTM, and why he believes his latest work, xLSTM, could be the next big thing in AI, particularly for applications like robotics and industrial simulation. He also shares his controversial perspective…

Want to Understand Neural Networks? Think Elastic Origami! - Prof. Randall Balestriero
1:18:10
Professor Randall Balestriero joins us to discuss neural network geometry, spline theory, and emerging phenomena in deep learning, based on research presented at ICML. Topics include the delayed emergence of adversarial robustness in neural networks ("grokking"), geometric interpretations of neural networks via spline theory, and challenges in reco…