Content provided by Aaron Francis and Try Hard Studios. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Aaron Francis and Try Hard Studios or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
Building search for AI systems with Chroma CTO Hammad Bashir

1:06:43
 
Hammad Bashir, CTO of Chroma, joins the show to break down how modern vector search systems are actually built: from local, embedded databases to massively distributed, object-storage-backed architectures. We dig into Chroma’s shared local-to-cloud API, log-structured storage on object stores, hybrid search, and why retrieval-augmented generation (RAG) isn’t going anywhere.

Follow Hammad:
Twitter/X: https://twitter.com/HammadTime
LinkedIn: https://www.linkedin.com/in/hbashir
Chroma: https://trychroma.com

Follow Aaron:
Twitter/X: https://twitter.com/aarondfrancis
Database School: https://databaseschool.com
Database School YouTube Channel: https://www.youtube.com/@UCT3XN4RtcFhmrWl8tf_o49g (Subscribe today)
LinkedIn: https://www.linkedin.com/in/aarondfrancis
Website: https://aaronfrancis.com - find articles, podcasts, courses, and more.

Chapters:
00:00 – Introduction: From high-school ASICs to CTO of Chroma
01:04 – Hammad’s background and why vector search stuck
03:01 – Why Chroma has one API for local and distributed systems
05:37 – Local experimentation vs production AI workflows
08:03 – What “unprincipled data” means in machine learning
10:31 – From computer vision to retrieval for LLMs
13:00 – Exploratory data analysis and why looking at data still matters
16:38 – Promoting data from local to Chroma Cloud
19:26 – Why Chroma is built on object storage
20:27 – Write-ahead logs, batching, and durability
26:56 – Compaction, inverted indexes, and storage layout
29:26 – Strong consistency and reading from the log
34:12 – How queries are routed and executed
37:00 – Hybrid search: vectors, full-text, and metadata
41:03 – Chunking, embeddings, and retrieval boundaries
43:22 – Agentic search and letting models drive retrieval
45:01 – Is RAG dead? A grounded explanation
48:24 – Why context windows don’t replace search
56:20 – Context rot and why retrieval reduces confusion
01:00:19 – Faster models and the future of search stacks
01:02:25 – Who Chroma is for and when it’s a great fit
01:04:25 – Hiring, team culture, and where to follow Chroma

29 episodes

