FIR #464: Research Finds Disclosing Use of AI Erodes Trust
Debate continues about when to disclose that you have used AI to create an output. Do you disclose any use at all? Do you confine disclosure to uses of AI that could lead people to feel deceived? Wherever you land on this question, it may not matter when it comes to building trust with your audience. According to a new study, audiences lose trust as soon as they see an AI disclosure. This doesn’t mean you should not disclose, however, since finding out that you used AI and didn’t disclose is even worse. That leaves little wiggle room for communicators taking advantage of AI and seeking to be as transparent as possible. In this short midweek FIR episode, Neville and Shel examine the research along with recommendations about how to be transparent while remaining trusted.
Links from this episode:
- The transparency dilemma: How AI disclosure erodes trust
- The ‘Insights 2024: Attitudes toward AI’ Report Reveals Researchers and Clinicians Believe in AI’s Potential but Demand Transparency in Order to Trust Tools (press release)
- Insights 2024: Attitudes toward AI
- Being honest about using AI at work makes people trust you less, research finds
- Should Businesses Disclose Their AI Usage?
- The ‘Insights 2024: Attitudes toward AI’ Report – Researchers and Clinicians Believe in AI’s Potential but Need Transparency
- New research: When disclosing use of AI, be specific
- Demystifying Generative AI Disclosures
- The Janus Face of Artificial Intelligence Feedback: Deployment Versus Disclosure Effects on Employee Performance
The next monthly, long-form episode of FIR will drop on Monday, May 26.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Shel Holtz (00:05)
Hi everybody and welcome to episode number 464 of For Immediate Release. I’m Shel Holtz.
@nevillehobson (00:13)
and I’m Neville Hobson. Let’s talk about something that might surprise you in this episode. It turns out that being honest about using AI at work, you know, doing the right thing by being transparent, might actually make people trust you less. That’s the headline finding from a new academic study published in April by Elsevier titled “The Transparency Dilemma: How AI Disclosure Erodes Trust.” It’s a heavyweight piece of research:
13 experiments, over 5,000 participants, from students and hiring managers to legal analysts and investors. And the results are consistent across all groups, across all scenarios. People trust others less when they’re told that AI played a role in getting the work done. We’ll get into this right after this.
So imagine this, you’re a job applicant who says you used AI to polish a CV, or a manager who mentions AI helped write performance reviews, or a professor who says grades were assessed using AI. In each case, just admitting you used AI is enough to make people view you as less trustworthy. Now this isn’t about AI doing the work alone. In fact, the study found that people trusted a fully autonomous AI more than they trusted a human.
who disclosed they had help from an AI. That’s the paradox. So why does this happen? Well, the researchers say it comes down to legitimacy. We still operate with deep-seated norms that say proper work should come from human judgment, effort and expertise. So when someone reveals they used AI, it triggers a reaction, a kind of social red flag. Even if AI helped only a little, even if the work is just as good.
Changing how the disclosure is worded doesn’t help much. Whether you say, AI assisted me lightly, or I proofread the AI output, or I’m just being transparent, trust still drops. There’s one twist. If someone hides their AI use, and it’s later discovered by a third party, the trust hit is even worse. So you’re damned if you do, but potentially more damned if you don’t. Now here’s where it gets interesting.
Just nine months earlier in July, 2024, Elsevier published a different report, Insights 2024 Attitudes Towards AI, based on a global survey of nearly 3,000 researchers and clinicians. That survey found most professionals are enthusiastic about AI’s potential, but they demand transparency to trust the tools. So on the one hand, we want transparency from AI systems. On the other hand, we penalize people who are transparent about using AI.
It’s not a contradiction. It’s about who we’re trusting. In the 2024 study, trust is directed at the AI tool. In the 2025 study, trust is directed at the human discloser. And that’s a key distinction. It shows just how complex and fragile trust is in the age of AI. So where does this leave us? It leaves us in a space where the social norms around AI use still lag behind the technology itself.
And that has implications for how we communicate, lead teams and build credibility. As generative AI becomes ever more part of everyday workflows, we’ll need to navigate this carefully. Being open about AI use is the right thing to do, but we also need to prepare for how people will respond to that honesty. It’s not a tech issue, it’s a trust issue. And as communicators, we’re right at the heart of it. So how do you see it, Shel?
Shel Holtz (03:53)
I see it as a conundrum that we’re going to have to figure out in a hurry because I have seen other research that reinforces this, that we truly are damned if we do and damned if we don’t because disclosing, and this is according to research that was conducted by EPIC, the Electronic Privacy Information Center, it was published late last November. They basically said that if you…
@nevillehobson (03:56)
Yep.
Shel Holtz (04:18)
disclose that you’re using AI, you are essentially putting the audience on notice that the information could be wrong. It could be because of AI hallucination. It could be inaccurate data that was in the training set. It could be due to the creator or the distributor of the content intentionally trying to mislead the audience. So basically it tells the audience: AI was used, it could be wrong. This could be…
false information. There was a study that was conducted, actually I don’t know who actually did the study, but it was published in the Strategic Management Journal. This was related specifically to the issue that you mentioned with writing performance reviews or automating performance evaluations or recommending performance improvements for somebody who’s not doing that well on the job.
So on the one hand, you know, powerful AI data analytics increase the quality of feedback, which may enhance employee productivity, according to this research. They call that the deployment effect. But on the other hand, employees may develop a negative perception of AI feedback once it’s disclosed to them, harming productivity. And that’s referred to as the disclosure effect. And there was one other bit of research that I found.
And this was from Trusting News. This was research conducted with a grant, and it found that what audiences really need in order for a disclosure to be of any use to them is specificity. They respond better to detailed disclosures about how AI is being used, as opposed to generic disclaimers, which are viewed less favorably and produce
less trust. Word choice matters less: audiences wanted to know specifically what AI was used to do, with the words that the disclosers used to present that information mattering less. And finally, EPIC, that’s the Electronic Privacy Information Center, had some recommendations. They said that
both direct and indirect disclosures, direct being a disclosure that says, hey, before you read or listen or watch this or view it, you should know that we used AI on it. And an indirect disclosure is where it’s somehow baked into the content itself. But they said, regardless of whether it’s direct or indirect, to ensure persistence and to meaningfully notify viewers that the content is synthetic, disclosures cannot be the only tool used to address the harms that stem from generative AI.
And they recommended specificity, just as you saw from the other research that I cited. It says disclosures should be specific about what the components of the content are, which components are actually synthetic. Direct disclosures must be clear and conspicuous, such that a reasonable person would not mistake a piece of content as being authentic.
Robustness: disclosures must be technically shielded from attempts to remove or otherwise tamper with them. Persistence: disclosures must stay attached to a piece of content even when reshared. That’s an interesting one. And format neutrality: the disclosure must stay attached to the content even if it is transformed, such as from a JPEG to a .PNG or a .TXT to a .doc file.
Shel Holtz (07:40)
So all kinds of people out there researching this and thinking about it, but in the meantime, it’s a trust issue that I don’t think a lot of people are giving a lot of thought to.
@nevillehobson (07:50)
No, I think you’re probably right. And I think there doesn’t seem to be any very easy solution to this. The article where I first saw this discussed, in The Conversation, talked about this in some detail. But briefly, they talk about what still is not known. And they start by saying that it’s not clear at all whether this penalty
of mistrust will fade over time. They say as AI becomes more widespread and potentially more reliable, disclosing its use may eventually seem less suspect. They also mentioned that there is absolutely no consensus on how organizations should handle AI disclosure from the research that they carried out. One option they talk about is making transparency voluntary, which leaves the decision to disclose to the individual. Another is a mandatory disclosure policy.
And they say their research suggests that the threat of being exposed by a third party can motivate compliance if the policy is stringently enforced through tools such as AI detectors. And finally, they mentioned a third approach is cultural: building a workplace where AI use is seen as normal, accepted and legitimate. And they say that we think this kind of environment could soften the trust penalty and support both transparency and credibility.
In my view, certainly, I would continue disclosing my AI use in the way I have been, which is not blowing trumpets about it or making a huge deal out of it. Just saying as it’s appropriate, I have an AI use thing on my website. Been there now for a year and a bit. And I’ve not yet had anyone ask me, so what are you telling us about your AI use? It’s very open. The one thing I have found that I think helps
in this situation where you might get negative feedback on AI use is if you’ve written something, for instance, that you’ve published, where AI has helped you in the construction of that document, primarily through researching the topic. So it could be summarizing a lengthy article or report. I did that not long ago on a 50-page PDF and it produced the summary in like four paragraphs, a little too concise. So that comes down to the prompt: what do you ask it to do?
But I found that if you clearly share the citations, i.e. the links to sources that often are referenced, or rather, let’s say, they’re not referenced and you add a reference because you think it’s relevant, that suggests you have taken extra steps to verify that content, and that therefore means you have not just, you know,
shared something an AI has created. And I think that’s probably helpful. That said, I think the report, though, the basis of it is quite clear: there is no solution to this currently at hand. And I think the worst thing anyone can do, and that’s to The Conversation’s first point, leaving it as a voluntary disclosure option, is probably not a good idea, because some people aren’t going to do it. Others won’t be clear on how to do it. And so they won’t do it.
And then if they’re found out, the penalty is severe, not only for what you’ve done, but for your own reputation, and that’s not good. So you’re kind of between the devil and the deep blue sea here, but bottom line, you should still disclose, but you need to do it the right way. And there ought to be some guidance in organizations in particular on how to disclose, what to disclose, when to disclose. I’ve not seen a lot of discussion about that, though.
Shel Holtz (11:10)
Well, one of the things that came out of the EPIC research is that disclosures are inconsistently applied. And I think that’s one of the issues with leaving it to individuals or to individual organizations to decide how am I going to disclose the use of AI, and how am I going to disclose the use of AI on each individual application: that you’re going to end up with a real hodgepodge of disclosures out there. And that’s not going to…
@nevillehobson (11:15)
Mm-hmm.
Right.
Shel Holtz (11:36)
aid trust, that’s going to have the opposite effect on trust. EPIC is actually calling for regulation around disclosure, which is not surprising from an organization like EPIC. But I want to read you one part of a paragraph from this rather lengthy report that gets into where I think some of the issues exist with disclosure. It says, first and foremost, disclosures do not affect bias or correct inaccurate information.
@nevillehobson (11:49)
Hmm.
Shel Holtz (12:03)
Merely stating that a piece of content was created using generative AI or manipulated in some way with AI does not counteract the racist, sexist, or otherwise harmful outputs. The disclosure does not necessarily indicate to the viewer that a piece of content may be biased or infringing on copyright, either. Unless stated in the disclosure, the individual would have to be previously aware that these biases, errors, or IP infringements exist.
Shel Holtz (12:30)
and then must meaningfully engage with and investigate the information gleaned from a piece of content to assess veracity. However, the average viewer scrolling on social media will not investigate every picture or news article they see. For that reason, other measures need to be taken to properly reduce the spread of misinformation. And that’s where they get into this notion that this needs to be regulated. There needs to be a way to assure people who are seeing content
that it is accurate and to disclose where AI was specifically employed in producing that content.
@nevillehobson (13:08)
Yeah, I understand that. Although that doesn’t address the issue that kind of underpins our discussion today, which is that disclosing you’ve used AI is going to get you a negative hit for the fact that you did use the AI. So that doesn’t address that. I’m not sure that anything can address that. If you disclose it, you’ll get the reactions that The Conversation’s research shows up, or the study’s research shows up, I should say. If you don’t disclose it when you should, and you get found out, it will be even worse.
So you could follow any regulatory pathway you want and do all the guidance you want. You’re still gonna get this until, as The Conversation reports, and as Elsevier’s research does, it dies away, and no one has any idea when that might be. So this is a minefield without doubt.
Shel Holtz (13:36)
Right.
Yeah, but I think what they’re getting at is that if the disclosure being applied was consistent and specific so that when you looked at a disclosure, it was the same nature of a disclosure that you were getting from some other content producer, some other organization, you would begin to develop some sense of reliability or consistency that, okay, this is one of these. I know now what I’m going to be looking at here and can…
consume it through that lens. So I think it would be helpful, you know, not that I’m always a big fan of excess regulation, but this is a minefield. And I think even if it’s voluntary compliance to a consistent set of standards, although we know how that’s played out when it’s been proposed in other places online over the last 20, 25 years. But I think consistency and specificity
are what’s required here. And I don’t know how we get to that without regulation.
@nevillehobson (14:50)
No, well, I can see that, but I’m not a fan of regulation of this type until it’s been proven that anything else that’s been attempted doesn’t work at all. And we still don’t see enough of the guidance within organizations on this particular topic. That’s what we need now. Regulation? Hey, listen, it’s gonna take years to get regulation in place. So in the meantime, this all may have disappeared, doubtful, frankly, but
I’d go the route of, we need something, and this is where professional bodies could come in to help, I think, in proposing this kind of thing. Others who do it share what they’re doing. So we need something like that, in my view, where there may well be lots of this in place, but I don’t see people talking too much about it. I do see people talking a lot about the worry about getting accused of whatever it is that people accuse you of when using AI.
That’s not pleasant at all. And you need to have thick skin and also be pretty confident. I mean, I’d like to say in my case, I am pretty confident that if I say I’ve done this with AI, I can weather any accusations even if they are well meant, some are not. And they’re based not on informed opinion, really, it’s uninformed, I suppose you could argue.
Anyway, it is a minefield and there’s no easy solution on the horizon. But in the meantime, disclose, do not hide it.
Shel Holtz (16:10)
Yeah, absolutely. Disclose, be specific. And I wonder if somebody out there would be interested in starting an organization sort of like Lawrence Lessig did with Creative Commons, so all you’d have to do is go fill out a little form and then get an icon, and people will go, that’s disclosure C.
@nevillehobson (16:27)
There’s an idea. There is an idea.
Shel Holtz (16:28)
That’s it.
That’s it. We need a Creative Commons-like solution to the disclosure issue. And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #464: Research Finds Disclosing Use of AI Erodes Trust appeared first on FIR Podcast Network.