Content provided by O'Reilly. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by O'Reilly or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

Interactions Between Humans and AI with Rajeshwari Ganesan

33:22

In this edition of Generative AI in the Real World, Ben Lorica and Rajeshwari Ganesan talk about how to put generative AI in closer touch with human needs and requirements. AI isn’t all about building bigger models and benchmarks. To use it effectively, we need better interfaces; we need contexts that support groups rather than individuals; we need applications that allow people to explore the space they’re working in. Ever since ChatGPT, we’ve assumed that chat is the best interface for AI. We can do better.

Points of Interest

  • 0:17: We’re both builders and consumers of AI. How does this dual relationship affect how we design interfaces?
  • 0:41: A lot of advances have happened in large language models. But when we step back, are these models consumable by users? We lack the kind of user interface we need. With ChatGPT, conversations can go round and round, turn by turn. If you don’t give the right context, you don’t get the right answer. This isn’t good enough.
  • 1:47: Model providers go out of their way to coach users, telling them how to prompt new models. All the providers have coaching tips. What alternatives should we be exploring?
  • 2:50: We’ve made some initial starts. GitHub Copilot and mail applications with typeahead don’t require heavy-duty prompting. The AI coinhabits the same workspace as the user, and the context is derived from that workspace. The second part is that generative interfaces are emerging. It’s not the content but the experience that’s generated by the machine.
  • 5:22: Interfaces are experience. Generate the interface based on what the user needs at any given point. At Infosys, we do a lot of legacy modernization—that’s where you really need good interfaces. We have been able to create interfaces where the user is able to walk into a latent space—an area that gives them an understanding of what they want to explore.
  • 7:11: A latent space is an area that is meaningful for the user’s interaction. A space that’s relatable and semantically understandable. The user might say, “Tell me all the modules dealing with fraud detection.” Exploring the space that the user wants is possible. Let’s say I describe various aspects of a project I’m launching. The machine looks at my thought process. It looks at my answers, breaks [them] up part by part, judges the quality of response, and gets into the pieces that need to be better.
  • 9:44: One of the things people struggle with is evaluation. Not of a single agent—most tasks require multiple agents because there are different skills and tasks involved. How do we address evaluation and transparency?
  • 10:42: When it comes to evaluation, I think in terms of trustworthy systems. A lot of focus on evaluation comes from model engineering. But one critical piece of building trustworthy systems is the interface itself. A human has an intent and is requesting a response. There is a shared context—and if the context isn’t shared properly, you won’t get the right response. Prompt engineering is difficult; if you don’t give the right context, you go in a loop.
  • 12:26: Trustworthiness breaks because you’re dependent on the prompt. The coinhabited workspace that takes the context from the environment plays a big role.
  • 12:46: Once you give a question to the machine, the machine gives a response. But if that response isn’t consumable by the user, that’s a problem.
  • 13:18: Trustworthiness of systems in the context of agent frameworks is much more complex. Humans don’t just have factual knowledge. We have beliefs. Humans have a belief state, and if an agent doesn’t have access to that belief state, it will get into something called reasoning derailment. If the interface can’t bring belief states to life, you will have a problem.
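
One way to picture the latent-space query described above (“Tell me all the modules dealing with fraud detection”) is semantic search over module descriptions in a legacy codebase. The sketch below is a toy stand-in, not Infosys’s actual system: it uses bag-of-words cosine similarity in place of learned embeddings, and the module names and descriptions are invented for illustration.

```python
import math
from collections import Counter

def vectorize(text):
    # Toy bag-of-words vector; a real system would use learned embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical module descriptions extracted from a legacy codebase.
MODULES = {
    "txn_monitor": "flags suspicious transactions for fraud detection review",
    "payroll_calc": "computes monthly payroll and tax withholding",
    "fraud_rules": "rule engine scoring fraud risk on card payments",
    "report_gen": "renders quarterly financial reports as PDF",
}

def search_modules(query, top_k=2):
    """Rank modules by similarity to a natural-language query."""
    qv = vectorize(query)
    ranked = sorted(
        MODULES, key=lambda m: cosine(qv, vectorize(MODULES[m])), reverse=True
    )
    return ranked[:top_k]

print(search_modules("modules dealing with fraud detection"))
```

With real embeddings, the same pattern lets a user explore the codebase by meaning rather than by file name, which is the kind of relatable, semantically understandable space the conversation describes.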
