Using LLMs to Evaluate Code

Duration: 1:02:10
 
Content provided by Carnegie Mellon University Software Engineering Institute and SEI Members of Technical Staff. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Carnegie Mellon University Software Engineering Institute and SEI Members of Technical Staff or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

Finding and fixing weaknesses and vulnerabilities in source code is an ongoing challenge. There is a lot of excitement about the ability of large language models (LLMs), a form of generative AI (GenAI), to produce and evaluate programs. One question raised by this ability is: Do these systems help in practice? We ran experiments with various LLMs to see whether they could correctly identify problems in source code or determine that there were no problems. This webcast will provide background on our methods and a summary of our results.

What Will Attendees Learn?

• how well LLMs can evaluate source code

• evolution of capability as new LLMs are released

• how to address potential gaps in capability
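
The abstract above only describes the experimental idea, so here is a minimal sketch of that kind of check, assuming the OpenAI Python client (`openai` v1) with an OPENAI_API_KEY set in the environment. The model name (gpt-4o), prompt wording, sample snippet, and verdict parsing are assumptions made for this illustration, not the SEI's actual protocol.

```python
# Illustrative sketch only -- not the SEI's experimental protocol.
# Assumes the `openai` Python package (v1 client) with an OPENAI_API_KEY set in
# the environment; the model name, prompt wording, sample snippet, and verdict
# parsing are all assumptions made for this example.
from openai import OpenAI

client = OpenAI()

# A small C snippet with a known, labeled weakness (a classic unbounded strcpy).
SNIPPET = """
#include <string.h>

void copy(const char *input) {
    char buf[16];
    strcpy(buf, input);   /* no bounds check */
}
"""
EXPECTED_FLAWED = True  # ground-truth label used to score the model's verdict


def ask_llm_about_code(code: str, model: str = "gpt-4o") -> str:
    """Ask the model whether the code contains a weakness and, if so, why."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # reduce run-to-run variation when scoring verdicts
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a code reviewer. Reply 'FLAWED: <short reason>' if "
                    "the code contains a security weakness, or 'OK' if it does not."
                ),
            },
            {"role": "user", "content": f"Evaluate this C code:\n{code}"},
        ],
    )
    return (response.choices[0].message.content or "").strip()


verdict = ask_llm_about_code(SNIPPET)
model_says_flawed = verdict.upper().startswith("FLAWED")
print(f"model verdict: {verdict}")
print(f"matches ground truth: {model_says_flawed == EXPECTED_FLAWED}")
```

Repeating a loop like this over a corpus of labeled flawed and flaw-free examples, and again as new models are released, is roughly the shape of comparison the abstract describes; the webcast itself covers the actual methods and results.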
