DeepSeek's Cost-Efficient Model Training ($5M vs hundreds of millions for competitors)

24:42
 
Content provided by Michael Burke and Chris Detzel. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Michael Burke and Chris Detzel or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

The episode features hosts Chris Detzel and Michael Burke discussing DeepSeek, a Chinese AI company making waves in the large language model (LLM) space. Here are the key discussion points:

Major Breakthrough in Cost Efficiency:
- DeepSeek claimed to have trained its latest model for only about $5 million, compared with the hundreds of millions (or billions) spent by competitors such as OpenAI
- The claim disrupted the market, hitting NVIDIA's stock in particular, because it challenged assumptions about how much GPU hardware frontier models actually require

Mixture of Experts (MoE) Innovation:
- Instead of one large monolithic model, DeepSeek uses multiple specialized "expert" models
- Each expert focuses on specific areas or topics
- A learned routing step (described in the episode as reinforcement learning) sends each query to the appropriate expert
- Because only a few experts are active per query, the approach reduces both training and inference costs
- DeepSeek notably open-sourced its MoE architecture, unlike other major companies; a minimal routing sketch follows this list
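To make the routing idea concrete, here is a minimal sketch of top-k mixture-of-experts routing in PyTorch. It is illustrative only: the class and parameter names are invented for this example, and the router is the standard learned gating network rather than DeepSeek's actual (and more elaborate) architecture; the episode's mention of reinforcement learning for routing is not modeled here.

```python
# Minimal top-k MoE sketch (illustrative; not DeepSeek's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, dim=64, n_experts=8, top_k=2):
        super().__init__()
        # Each "expert" is a small independent feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        # The router (gating network) scores every expert for each token.
        self.router = nn.Linear(dim, n_experts)
        self.top_k = top_k

    def forward(self, x):                                # x: (n_tokens, dim)
        scores = self.router(x)                          # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # keep only the top-k experts
        weights = F.softmax(weights, dim=-1)             # normalize the kept scores
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                 # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Only top_k of the n_experts networks run per token, which is why MoE cuts
# compute at both training and inference time relative to one dense model.
y = TinyMoE()(torch.randn(16, 64))
print(y.shape)  # torch.Size([16, 64])
```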

Technical Infrastructure:
- Discussion of how DeepSeek achieved its results without access to NVIDIA's latest export-restricted GPUs
- Highlighted the dramatic AI-driven price increase in NVIDIA GPUs, from roughly $3,000 to $30,000-$50,000 per card
- Explained how inference costs (serving the model to users) often end up exceeding the one-time training cost; a back-of-envelope sketch follows this list
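To see why serving can dominate, here is a back-of-envelope calculation in Python. Every number except DeepSeek's claimed $5 million training figure is an assumption invented for illustration.

```python
# Back-of-envelope: when does cumulative inference spend pass training cost?
# Only train_cost comes from the episode; the other figures are assumptions.
train_cost = 5_000_000            # USD, DeepSeek's claimed one-time training cost
price_per_million_tokens = 0.50   # USD, assumed blended serving price
tokens_per_query = 1_000          # assumed average tokens generated per query
queries_per_day = 50_000_000      # assumed traffic for a widely used assistant

daily_inference = queries_per_day * tokens_per_query / 1_000_000 * price_per_million_tokens
print(f"Inference spend: ${daily_inference:,.0f} per day")                    # $25,000 per day
print(f"Overtakes training cost after ~{train_cost / daily_inference:.0f} days")  # ~200 days
```

Under these assumptions the serving bill passes the entire training budget in well under a year, which is the dynamic the hosts describe.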

Chain of Thought Reasoning:
- DeepSeek open-sourced its chain-of-thought reasoning system
- Chain-of-thought prompting has the model break a complex question into intermediate steps before answering; a prompt-level sketch follows this list
- This improves accuracy on complicated queries, especially math problems
- The hosts likened the open-source contribution to Meta's Llama releases in terms of impact on the field
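As a concrete illustration of the technique (not DeepSeek's actual released system), here is what chain-of-thought prompting looks like at the prompt level; the example question and wording are invented for this sketch.

```python
# Chain-of-thought prompting: ask the model to show intermediate steps.
# Generic illustration only; not DeepSeek's reasoning system.
direct_prompt = "What is 17 * 24? Answer with just the number."

cot_prompt = (
    "What is 17 * 24? Think step by step and show your reasoning "
    "before stating the final answer."
)

# A model following the chain-of-thought instruction might produce:
#   17 * 24 = 17 * 20 + 17 * 4
#           = 340 + 68
#           = 408
# Decomposing the problem into checkable intermediate steps is what
# improves accuracy on math and other multi-step questions.
print(direct_prompt)
print(cot_prompt)
```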

Broader Industry Impact:
- Discussion of how businesses are integrating AI into their products
- Example of ZoomInfo using AI to aggregate business intelligence and automate sales communications
- Noted how platforms like Databricks are lowering the technical barriers to implementing AI

The hosts also touched on data privacy concerns regarding Chinese tech companies entering the US market, drawing parallels to TikTok discussions. They concluded by discussing how AI tools are making technical development more accessible to non-experts and mentioned the importance of being aware of how much personal information these models collect about users.
