The award-winning WIRED UK Podcast with James Temperton and the rest of the team. Listen every week for an informed and entertaining rundown of the latest technology, science, business and culture news. New episodes every Friday.
Content provided by Joe Colantonio. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Joe Colantonio or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
AI Testing Made Trustworthy using FizzBee
As AI tools like Copilot, Claude, and Cursor start writing more of our code, the biggest challenge isn't generating software; it's trusting it. In this episode, JP (Jayaprabhakar) Kadarkarai, founder of FizzBee, joins Joe Colantonio to explore how autonomous, model-based testing can validate AI-generated software automatically and help teams ship with confidence. FizzBee uses a unique approach that connects design, code, and behavior into one continuous feedback loop, automatically testing for concurrency issues and validating that your implementation matches your intent.

You'll discover:
- Why AI-generated code can't be trusted without validation
- How model-based testing works and why it's crucial for AI-driven development
- The difference between example-based and property-based testing
- How FizzBee detects concurrency bugs without intrusive tracing
- Why autonomous testing is becoming mandatory for the AI era

Whether you're a software tester, DevOps engineer, or automation architect, this conversation will change how you think about testing in the age of AI-generated code.
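The episode contrasts example-based and property-based testing. As a rough illustration only (not FizzBee's own notation, and not code discussed in the episode), here is a minimal Python sketch using the hypothesis library: the example-based test checks one hand-picked input against an expected output, while the property-based test asserts a round-trip invariant over many generated inputs. The run_length_encode and run_length_decode functions are hypothetical stand-ins for code under test.

```python
# Example-based vs. property-based testing, sketched with Python and the
# hypothesis library (pip install hypothesis pytest). Generic illustration,
# not FizzBee syntax; run_length_encode/decode are hypothetical stand-ins.
from hypothesis import given, strategies as st


def run_length_encode(s: str) -> list[tuple[str, int]]:
    """Collapse runs of repeated characters into (char, count) pairs."""
    encoded = []
    for ch in s:
        if encoded and encoded[-1][0] == ch:
            encoded[-1] = (ch, encoded[-1][1] + 1)
        else:
            encoded.append((ch, 1))
    return encoded


def run_length_decode(pairs: list[tuple[str, int]]) -> str:
    """Expand (char, count) pairs back into the original string."""
    return "".join(ch * count for ch, count in pairs)


# Example-based: one hand-picked input and its expected output.
def test_encode_example():
    assert run_length_encode("aaab") == [("a", 3), ("b", 1)]


# Property-based: hypothesis generates many strings and checks an invariant
# (decoding an encoding returns the original) rather than a single example.
@given(st.text())
def test_round_trip(s):
    assert run_length_decode(run_length_encode(s)) == s
```

Run under pytest, the property-based test exercises inputs a human would rarely write by hand (empty strings, unicode, long runs), which is the point of the contrast drawn in the episode.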
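The model-based idea, in miniature: a small model of intended behavior is run in lockstep with the implementation, and generated operation sequences check that the two never diverge. The sketch below uses Python hypothesis stateful testing as a stand-in; it is a generic illustration of the concept, not FizzBee's specification language or approach, and BoundedCounter is a hypothetical example.

```python
# Model-based testing in miniature, using hypothesis's stateful testing.
# A plain-Python model is run in lockstep with an implementation under test;
# hypothesis generates operation sequences and checks they never diverge.
# This is a generic sketch of the idea, not FizzBee's actual mechanism.
from hypothesis import strategies as st
from hypothesis.stateful import RuleBasedStateMachine, invariant, rule


class BoundedCounter:
    """Hypothetical implementation under test: a counter capped at a maximum."""

    def __init__(self, cap: int = 100):
        self.cap = cap
        self.value = 0

    def add(self, n: int) -> None:
        self.value = min(self.cap, self.value + n)


class CounterModel(RuleBasedStateMachine):
    """The model: simple state describing what the implementation should do."""

    def __init__(self):
        super().__init__()
        self.expected = 0
        self.impl = BoundedCounter(cap=100)

    @rule(n=st.integers(min_value=1, max_value=25))
    def add(self, n):
        # Apply the same operation to the model and to the implementation.
        self.expected = min(100, self.expected + n)
        self.impl.add(n)

    @invariant()
    def implementation_matches_model(self):
        # After every generated operation, behavior must match intent.
        assert self.impl.value == self.expected


# hypothesis generates and shrinks operation sequences when run under pytest.
TestCounterModel = CounterModel.TestCase
```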
570 episodes