Contracts and Code: The Realities of AI Development

47:51
In this episode, Valentino Stoll and Joe Leo unpack the widening gap between headline-grabbing AI salaries and the day-to-day realities of building sustainable AI products. From sports-style contracts stuffed with equity to the true cost of running large models, they explore why incremental gains often matter more than hype. The conversation dives into the messy art of benchmarking LLMs, the fresh evaluation tools emerging in the Ruby ecosystem, and new OpenAI features that change how prompts, tools, and reasoning tokens are handled. Along the way, they weigh the business math of switching models, debate standardization versus playful experimentation in Ruby, and highlight frameworks like RubyLLM, Phoenix, and Leva that are reshaping how developers ship AI features.
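
For listeners who haven't tried RubyLLM, the episode's point about shipping AI features from Ruby is easy to demo. Here is a minimal sketch, assuming the gem's documented configure/chat interface and an OPENAI_API_KEY in the environment; the model name is only a placeholder.

```ruby
# Gemfile: gem "ruby_llm"
require "ruby_llm"

# Point the gem at a provider key from the environment.
RubyLLM.configure do |config|
  config.openai_api_key = ENV["OPENAI_API_KEY"]
end

# Open a chat against one model, then ask a question; switching providers
# is mostly a matter of changing the model name passed here.
chat = RubyLLM.chat(model: "gpt-4o-mini")
response = chat.ask("What would switching this app to a cheaper model cost us?")
puts response.content
```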

Takeaways

  • The importance of marketing oneself in the tech industry.
  • Disparity in AI salaries reflects market demand and hype.
  • AI contracts often include equity, complicating true value assessment.
  • The AI race lacks clear winners, with incremental improvements across models.
  • User experience often outweighs model efficacy in AI products.
  • Prompt engineering is crucial for optimizing model performance.
  • Benchmarking AI models is complex and requires tailored evaluation sets.
  • Existing tools for AI evaluation are often insufficient for specific needs.
  • Cost analysis is critical when choosing AI models for business.
  • Incremental improvements in AI models may not meet user expectations.
  • You can constrain tool outputs to specific grammars for flexibility (a minimal sketch follows this list).
  • Asking models to think out loud can enhance tool calls.
  • Reasoning tokens can be reused in subsequent AI calls (see the second sketch after this list).
  • Evaluating AI frameworks is crucial for business decisions.
  • Ruby's integration in AI is becoming more prominent.
  • The AI landscape is rapidly evolving, requiring adaptability.
  • Hype cycles can mislead developers about tool longevity.
  • Ruby offers a unique user experience for developers.
  • Tinkering with code fosters creativity and innovation.
  • The playful nature of Ruby can lead to unexpected insights.
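
The grammar-constraint takeaway maps onto OpenAI's structured tool calling. Its most widely available flavor is strict JSON-schema function calling, where the model's tool-call arguments are decoded under the schema rather than merely validated afterward; the newer grammar-based custom tools follow the same idea with a different payload shape. Below is a rough Ruby sketch of the strict-schema flavor using plain net/http; the tool name, schema, and model are illustrative only.

```ruby
require "net/http"
require "json"
require "uri"

# A tool whose call arguments the model must emit in exactly this JSON shape;
# strict: true switches on schema-constrained decoding of the arguments.
weather_tool = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Look up current weather for a city",
    strict: true,
    parameters: {
      type: "object",
      properties: {
        city: { type: "string" },
        unit: { type: "string", enum: %w[celsius fahrenheit] }
      },
      required: %w[city unit],
      additionalProperties: false
    }
  }
}

uri = URI("https://api.openai.com/v1/chat/completions")
request = Net::HTTP::Post.new(uri)
request["Authorization"] = "Bearer #{ENV['OPENAI_API_KEY']}"
request["Content-Type"] = "application/json"
request.body = JSON.generate(
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "What's the weather in Lisbon?" }],
  tools: [weather_tool]
)

response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(request) }
tool_call = JSON.parse(response.body).dig("choices", 0, "message", "tool_calls", 0)
puts tool_call&.dig("function", "arguments") # arguments conform to the schema above
```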
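The reasoning-token takeaway refers to response chaining in OpenAI's Responses API: a stored response can be referenced by previous_response_id so a follow-up turn can build on reasoning the model already produced rather than re-deriving it. A rough sketch under those assumptions; the post_response helper, model name, and output parsing are illustrative, and the exact carry-over behavior depends on the model and API version.

```ruby
require "net/http"
require "json"
require "uri"

RESPONSES_URI = URI("https://api.openai.com/v1/responses")

# Illustrative helper: POST a JSON payload to the Responses endpoint.
def post_response(payload)
  request = Net::HTTP::Post.new(RESPONSES_URI)
  request["Authorization"] = "Bearer #{ENV['OPENAI_API_KEY']}"
  request["Content-Type"] = "application/json"
  request.body = JSON.generate(payload)
  Net::HTTP.start(RESPONSES_URI.hostname, RESPONSES_URI.port, use_ssl: true) do |http|
    JSON.parse(http.request(request).body)
  end
end

# First turn: a reasoning model works through the problem; store: true keeps
# the response (including its reasoning items) addressable server-side.
first = post_response(
  model: "o4-mini",
  input: "Plan the SQL migration for splitting a users table in two.",
  store: true
)

# Second turn: previous_response_id chains the conversation, letting the model
# build on the reasoning it already produced instead of starting over.
second = post_response(
  model: "o4-mini",
  previous_response_id: first["id"],
  input: "Now write the rollback plan."
)

# The last output item is normally the assistant message; earlier items may be
# reasoning entries, so dig from the end.
puts second.dig("output", -1, "content", 0, "text")
```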
