FIR #480: Reflections on AI, Ethics, and the Role of Communicators

In this reflective follow-up to our FIR Interview in July with Monsignor Paul Tighe of the Vatican, Neville and guest co-host Silvia Cambié revisit some of the key themes that resonated deeply from that conversation.

With a particular focus on the wisdom of the heart – a phrase coined by the Vatican to contrast with the logic of machines – Neville and Silvia explore the ethical responsibilities communicators face in the age of artificial intelligence.

The discussion ranges from the dignity of work and the overlooked realities of outsourced labour, to the limitations of technical expertise when values and human well-being are at stake.

Silvia expands on her Strategic article focusing on precarious workers, while Neville revisits ideas shared on his blog about the Church’s unique role in advocating for inclusive, human-centred dialogue around AI.

Above all, this episode highlights how communicators are uniquely positioned to help organisations navigate the moral and societal questions AI presents – and why they must bring emotional intelligence, narrative skill, and ethical awareness to the forefront of this global conversation.

Topics Covered

  • The idea of wisdom of the heart vs logic of the machine
  • Redefining human intelligence in the AI era
  • The Vatican’s call for a global, inclusive debate
  • Dignity of work and the reality of outsourced labour
  • What ethical AI really means – beyond compliance
  • Why communicators must be part of the AI conversation

Links from this episode:

The next monthly, long-form episode of FIR will drop on Monday, September 29.

We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com.

Special thanks to Jay Moonah for the opening and closing music.

You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on [Neville’s blog](https://www.nevillehobson.io/) and [Shel’s blog](https://holtz.com/blog/).

Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.


Transcript (from video, edited for clarity):

@nevillehobson (00:03)
Hello everyone and welcome to episode 480 of For Immediate Release. I’m Neville Hobson in the UK. Shel’s away on holiday, but I’m delighted to be joined by Silvia Cambié as guest co-host for this episode. Welcome Silvia.

Silvia Cambie (00:17)
Thank you Neville, delighted to be here today.

@nevillehobson (00:21)
Excellent. Glad you said that. So in this short-form episode, we’re going to spend time on an interview that you, Shel and I did in late July for an FIR Interviews episode. We interviewed Monsignor Paul Tighe from the Vatican. He played a central role in shaping the church’s thinking on artificial intelligence and its broader societal impact.

He was instrumental in the development of Antiqua et Nova, the Vatican’s note on the relationship between artificial intelligence and human intelligence published in January 2025. In our interview, Monsignor Tighe offered a powerful reflection on how AI challenges us not only technically, but also morally and spiritually. He urged us to consider what makes us human in an age of machines, calling for a global conversation grounded in dignity.

agency and what the Vatican calls the wisdom of the heart. So in this episode, Silvia and I want to share what resonated most for us from that conversation and why we believe communicators have a vital role to play in shaping this future. I mentioned this before during the interview, Silvia, that you were instrumental in securing that interview. So tell our listeners, how did it come about?

Silvia Cambie (01:38)
Yes, indeed Neville. So you and I were talking in the spring when Pope Leo XIV was elected and we were talking about his background in math and science. And so on top of that, the Vatican has been contributing their views to a lot of papers like Antiqua et Nova to the Minerva Dialogue, which is a forum that basically collects views about the human

the interaction between humans and AI and the dignity of work. So we were thinking of bringing these voices to the forefront, and in particular in relation to your listeners and Shel’s listeners, who work in comms and work in change management and are often confronted with the moral aspects, values,

the ethical part of governance. And at the moment, they’re looking for a North Star because we are at the forefront as communicators of this wave of AI introductions, AI pilots. But, at the same time, we often lack guidance. So that’s why we wanted to collect these views from the Vatican, from Monsignor Tighe.

relate them to our work, make them very concrete and kind of actionable, something for our listeners to be able to use. And so I think you were mentioning before what resonated with us, with you and me. And so we had our conversation with Paul Tighe back at the end of July. And then we

@nevillehobson (03:10)
Get it.

Silvia Cambie (03:25)
also listened to a lot of the podcast interviews he’s done. And we’ve read, you know, articles he’s written and so on. And I think something that resonated a lot with me is really the fact that he believes that technology is never neutral. It’s always the product of a mentality, of a culture. And technology is often

@nevillehobson (03:36)
Hmm.

Silvia Cambie (03:52)
created, produced, programmed by people who, you know, focus on profit, focus on productivity. And at the moment, there is a sort of new trend because of AI that people have to adapt to the demands and pace of machines. A lot of people have deadlines these days set by algorithms.

And that is creating a certain dynamic which I have witnessed many times when I work in managed services, which is you’re basically following the rhythm of a machine and you have no time to think, you have no time to develop new ideas, to stop and ponder and get insight out of what you’re doing, out of what your client needs. So at the end of the day, everybody loses.

You don’t have fresh ideas. The client doesn’t get a fresh view or, you know, fresh recommendations on how to do things. So it’s all very mechanistic, and it’s a real risk out there that people, you know, will have to follow the rhythm of machines in their work. And therefore what Paul Tighe mentioned,

which is the wisdom of the heart, as you mentioned before, the ability to relate to other people, how you relate to your client and solve their problems. So I think I’ve seen that in my work and I think that is a real risk. And we have to be aware of that. And as communicators and change managers, there’s a lot we can do because we are on the front lines.

And yeah, so I think this point about technology, that technology is never neutral and that ⁓ there’s this risk and danger that we will have to follow the pace of machines and lose the wisdom of the heart and lose the ability to draw insight from what we do. That’s something that really resonated with me.

@nevillehobson (06:01)
Yeah, I understand that. It’s similar. I was also thinking that one element that did nudge us together to do this was what we had observed, what we’d read and seen even in the prior months during spring and summer. In fact, really since Pope Leo was elected to the papacy and understanding his background in science and mathematics.

But also what struck me was his knowledge, his ability, as it were, to understand the role of social media in communicating with people. And we noticed that the Vatican was pretty proactive on many social channels. And indeed, Paul Tighe was at one point in the Dicastery of Communication, kind of like the Communications Department, in charge of all of this.

So they have a track record, a history, if you will, of knowing how to use social channels to engage with wide audiences, not just the faithful of the church, but broadly a wider audience. And that’s something we observed. And then this document, Antiqua et Nova, I found it an extraordinary document of what it set forth, what it described.

and the focus in particular on that phrase, wisdom of the heart, that resonated very strongly with me. I found it interesting too that the conversations that the Vatican had been having, notably with Paul Tighe, but not only Monsignor Tighe, others too, with leaders of Silicon Valley companies, of the big tech firms in Silicon Valley, Mark Zuckerberg of Meta

We’ve had, you know, we’ve seen more Google, Microsoft, others gathering in a number of instances over the past year or so, ever since Pope Francis’s time to talk about this, where the Vatican was able to introduce this theme, this broader theme of the wisdom of the heart. And it struck me too, that Paul Tighe was quite clear and mentioned the Vatican is not claiming

expertise in AI systems or algorithm design, which, by the way, struck me too. We keep talking about the machines, but it’s not machines, it’s algorithms we should be worried about. Instead, it offers something that the tech industry and many governments sorely lack, and I agree with this completely: a deep concern for long-term consequences (you nudged on that point, Silvia, just now) and a consistent voice on the value of human dignity, agency and solidarity. So the wisdom of the heart,

Silvia Cambie (08:09)
Thank

@nevillehobson (08:29)
is a phrase that appears in Antiqua et Nova as part of its final reflections. And it says this: “We must not lose sight of the wisdom of the heart, which reminds us that each person is a unique being of infinite value, and that the future must be shaped with and for people.” And that’s a pretty straightforward message. It’s simple. Perhaps it could be even simpler, actually, but I’ve seen others alluding to this idea of this is about people, not just the tech recently.

So for instance, subsequent to the interview, and this was actually quite recently, about a week or so back, Mustafa Suleyman, the CEO of Microsoft AI, wrote in an essay that we must build AI for people, not to be a person. In other words, AI is not a person. We hear a lot about, and I’ve had conversations with people about this, these so-called personas, the way in which you can create something that’s a duplicate of you almost. It’s like a version of you as a person.

I think that’s crazy to do that, to be frank, because that reinforces everything that we don’t want reinforcing, if you will. But Suleyman makes the case that the real risk we face is not AI suddenly waking up with consciousness, as some people talk about, but people being convinced that it has, because it’s not sentient. That’s a firm belief that I have. These are electronic devices and tools, not actual

versions of people. We’re not there yet. That’s quite a way away, I would say, if ever. But Suleyman goes on to say, this is the interesting bit to me, I want to create AI that makes us more human, that deepens our trust and understanding of one another and strengthens our connections to the real world. We won’t always get it right. But this humanist frame provides us with a clear North Star to keep working towards. I mean, that couldn’t be simpler either, could it? And this is the head.

of a division of one of the biggest technology companies on the planet, Microsoft, saying that. And I’ve seen others in the industry saying similar things recently. So maybe this is beginning to get attention. And I can’t say, of course, that it’s a direct result only of what the Vatican has been talking about, but that surely must be having an influence. So I summarized all this, just for me, quite simply: emotional, moral and ethical intelligence must guide communicators’ response to AI.

The big question is how.

Silvia Cambie (10:50)
Yes, indeed. And I also liked very much the article by Mustafa Suleyman because I think, as you said, he pointed out the real danger: not that the machines are going to wake up and, you know, kind of take over the world and pretend they’re conscious. It’s more that people

@nevillehobson (11:06)
Take over the world.

Silvia Cambie (11:14)
will get used to interacting with them and will really kind of seek that human aspect in the machines, and will also kind of seek approval from AI, from algorithms. That’s also something that Suleyman is cautioning us against. And that can create psychosis, stress, anxiety,

people being disenfranchised at work. And I think that there is a quote by UNESCO that I really like, and I have used it in the article I wrote for Strategic, the online platform for communicators. It says that AI is about anthropological disruption, right? It’s not only how the…

@nevillehobson (11:48)
Hmm.

Silvia Cambie (12:04)
machines, the algorithms, function, it’s how humans react to it. And to answer your question about what communicators can do, because indeed we are at the forefront, we talk to people, we hear about their needs, about their anxieties and worries at work. So I think there are a lot of attempts at the moment to

wrap some governance around AI, AI applications, rollouts. And what I’ve seen is, you know, centers of excellence being created in companies. Those centers of excellence oversee AI pilots, for instance, and their progress, and have, you know, the usual suspects sitting on them, which are, you know, people from IT, developers.

But I think it’s very important that communicators and change managers become part of those fora because communicators know how to talk to people. Again, they’ve been doing that for forever. That’s their bread and butter. Also, they can relate to previous tech rollouts, you know, like a workplace technology and how people had reacted to that. So there is all that.

institutional knowledge that is needed now because this shift is so unprecedented. So I kind of cringe every time people show me a COE, an AI COE, made up only of IT people and developers, because that’s not the way to go about it. I really liked a stat you mentioned,

a fact you and Shel mentioned in a previous FIR episode, where I think you were quoting studies by MIT and HR Dive, which say that people are expected to use AI in their daily work but they are not receiving proper training, so they are kind of very confused about, you know, what

is this going to hit my performance indicators? What am I supposed to do? They’re not training me. How am I going to use Copilot? I’m going to download ChatGPT and do my own thing and show that I’m doing something, and that hopefully will be enough for my company. So all that needs to be structured. And again, communicators have the knowledge. They have the institutional memory. They have the means and also

they know where the different voices sit in a company. Like when we do research before rolling out AI, we create workshops with representatives from different parts of the company. So in that case, communicators know how to spot those voices because we have worked on rollout projects before.

@nevillehobson (15:03)
Hmm.

Silvia Cambie (15:07)
how people react, where pockets of resistance might be found in the enterprise. So I think that it is paramount that we allow communicators and change managers to participate in those bodies that are being created for AI governance. And obviously that’s also a way

to kind of channel what you were saying, Neville, you know, the human aspect, what makes us human. It’s the ability to relate to other people. It is insight, it is emotional intelligence. And those are all things that are really needed these days because of this shift. And it’s kind of, you know, a paradox that we are so focused on the technology now, but at the same time,

we would need to focus even more on the human aspect because this challenge is so huge that people are just not prepared for it. And we really need to focus on the human aspect, their abilities, what makes us human in order to enable people to deal with it, right? In order to enable the training, in order to make people feel that they are equipped

@nevillehobson (16:24)
Thank

Silvia Cambie (16:26)
sufficiently equipped for it. So I also would like to mention a quote by the late Pope Francis that I really like. He said, this is not an era of change. This is a change of an era. And I firmly believe in that, you know; everything we’ve been saying, Neville, also in our interview with Paul Tighe, leads to that.

But I think, because this change is so huge, we really have to empower people in an unprecedented way, and communicators are very well positioned for that.

@nevillehobson (17:06)
Absolutely agree with that. You mentioned training; indeed, Shel and I discussed that topic in that recent episode of FIR: on the one hand, people feel they’re not getting training, and on the other hand, there are companies that just aren’t providing it because they don’t think it’s worthwhile. There are others, though, that are doing it quite well. So it’s very, very patchy. It’s not universal. But I think

The role of the communicator then is to develop and deliver that. But there’s also another aspect which, to me, touches directly on what we’re currently discussing, i.e. the human or the humanity element, if you will: you know, organizations are looking at adopting AI to improve their efficiency, to improve their productivity and to scale more, without looking at

this aspect. Are they asking the human questions? And that’s the role, in my opinion, of the communicator, who is well placed to do that. So here are three questions I wrote down that could be where communicators are able to introduce this element into their conversations. Does this technology help deepen trust and empathy? Or does it risk eroding them? That’s a valid question, in my view.

Are we building systems that reintroduce conscience, care and context into conversations, or are we defaulting only to efficiency and output? And the third one, are we ensuring that AI strengthens our connection to each other rather than replacing those with illusions? And I think there are undoubtedly at least a dozen more, but to me, those are great ones to start with that, in a sense, force attention on this rather than just those technically

valid, yet stale approaches to all of this. It dehumanizes, if you like. And I think, just briefly going back to Mustafa Suleyman’s North Star, as he references it quite clearly, and the Vatican’s wisdom of the heart, there’s an essential reminder in all of that, to me, which is quite simple to grasp.

To stay human in the age of AI is to place empathy, dignity and care at the heart of design and use, not simply efficiency or the ways algorithms shape our actions. Suleyman directly references that in his essay. And he’s the first technology leader I’ve seen publicly doing that the way he did. It was very clear, and it’s a long essay, by the way, very long, worth reading. So that’s encouraging to see, and it’s worth it,

Silvia Cambie (19:07)
Hmm.

Mm-hmm.

@nevillehobson (19:30)
I think, for communicators looking for kind of a something to hang a hook on. This is it, in my view, for communicators to do that. And that, I think, quite clearly, is how to address the question we’ve been asking ourselves: how can communicators help democratize the conversation inside organizations? This is one way to go about it, I think.

Silvia Cambie (19:51)
Yeah, indeed. You know, conscience, care and context. Those are very important aspects when you are rolling out AI and dealing with people’s reactions. And I think those questions you asked are very powerful and they are a good start for communicators to kind of make people think, right? This isn’t just about the tech.

This isn’t just about the efficiencies this app is going to create. You’re still dealing with people. Your employees are people, your clients are people, your regulators are people. So you’re still dealing with them. And I think that it’s about empowering people to ask the right questions, right?

So I was referencing before those COEs that are being created to monitor AI pilots in companies. Well, the conversations there tend to be very technical and always focused on the tech, you know, the rollout, the different waves of the rollout. And I think, again, communicators can bring back the human aspect. How are people reacting to this?

Is this making them happier in their work or is this making them more insecure? As you were saying before, a lot of companies are not providing training or not providing the right training. So that makes them insecure. Are they getting more and more confused in the way they are dealing with their clients and customers? Because if you have AI that takes

over part of that relationship, what is left for them to do? So it’s a very complex scenario that has to be considered from different aspects. And again, you mentioned Suleyman’s North Star and the wisdom of the heart mentioned by Monsignor Tighe. I think these kinds of

thoughts can inspire communicators, can inspire people who work in AI governance and make them pause and think that it’s important to focus on these aspects, to focus on what Suleyman was saying, the fact that people might expect AI to be, might think that AI is conscious, and they might

establish, you know, develop a relationship of a certain kind with it, so that they end up depending on the algorithm, depending on it, you know, expecting approval. And so I think that now is the time to stop and think and introduce those thoughts into the conversations that are going on in companies about AI. And I think

@nevillehobson (22:33)
Hmm.

Silvia Cambie (22:51)
It’s kind of, we’ve got to be brave. We have to do it. I know that often, as I was saying before, you know, technology and technical aspects are basically overriding other aspects just because of the pace of the project, just because of the pressures that people are under. But I think it’s very important to introduce these thoughts

into the conversation. And again, you know, it’s a moving target, right? We will continue to look for voices like Monsignor Tighe, like Mustafa Suleyman. I’m sure, as we progress, there will be others and there will be other aspects. But I think that it’s just this…

@nevillehobson (23:17)
Mm.

Silvia Cambie (23:37)
human aspect and the interaction between humans and AI, seen from the point of view of humans, that is important. And we have collected a voice from the faith community. We have looked at the paper that Suleyman published, which is really very thought-provoking. And so, we will be looking for other voices going forward.

@nevillehobson (23:52)
Mm-hmm.

Silvia Cambie (24:04)
But I think for communicators, it’s very important to continue to be open to these voices, right? It’s also a way for us to get backup and support when we need to shift the conversation in a company towards the user, towards the rights of the user, towards the dignity of the user and not just

about the technologies. And then, you know, these are all tools that we can use to make our point and to make our point stick. So I think, as I said, you know, this is a moving target. We’ll have to do a lot more work on this, but it is fascinating for communicators because

You know, I often get asked by people in comms: so, you know, how do I shift to tech when I am not a developer and I don’t have the right knowledge in AI and I don’t know how to build an LLM? My answer is basically this, right? Bring the human voice into the conversation. You know how to talk to the base,

you know how to collect their voices, or you know how to collect their views, make sure that they are heard, because it is important as we go forward. So I think that is what communicators can do. And that is a very important role indeed at the moment.

@nevillehobson (25:37)
Yeah, I agree with you. Now, that’s very good, Silvia. And I think, just to add one final thing to this, in a sense a parallel development that is very much a part of all of this is what I’m calling the end of AI universalism, where currently we’ve got an environment, if you will, and an assumption, let’s say, that one or two global platforms will serve the world. And we’re talking about the

tech tools, the chatbots, the means by which people connect with others and discover things themselves. And it’s Silicon Valley based and tends to be in English more than any other language. But we’re seeing some interesting things happening. In Latin America, Peru is leading the charge on building a Spanish-language chatbot that serves communities throughout Latin America, taking into account

cultural nuances, language differences, and the values that are unique to those communities in that part of the world that are very much not global North style environments, if you like. We’ve got what the Vatican is doing that we’ve just been discussing from that interview with Monsignor Tighe calling for an AI shaped by human dignity. And then just literally yesterday,

or rather, a few days ago, this past week, Saudi Arabia is asserting its cultural sovereignty, let’s put it that way, in digital form, with the launch of an Arabic-language chatbot called HUMAIN Chat. And it’s based on an Arabic large language model, the largest in the Arabic-speaking world, the developer says. And that’s intended to be

targeted at people of the Islamic faith globally, that’s two billion or so, Arabic speakers throughout the world, 400 million or so of them. At the moment, it’s just in Saudi Arabia. I’ve seen quite a bit of buzz building up about this over the past week, mostly focused on the tech, because it is quite new. The point I’m making though, is that with HUMAIN Chat and the others,

These are signs that the future of AI will not be written in only one language or framed by just one set of values. And that’s something I think we should all be paying very close attention to. And it enables, I think, it broadens out, if you will, the part of bringing the human element into the conversation, where you’ve got tools that can be a great help in that goal, in bringing that human part into it. So

I find these very interesting, Silvia. Are we looking at fragmentation of AI universalism? It could be enrichment, as I see it. So that’s part of the picture, too. So the human element is essential to all of this. And these are all parts of the jigsaw that’s rapidly being completed, if you will.

Silvia Cambie (28:21)
Indeed.

@nevillehobson (28:32)
Are we seeing the potential risk of creating parallel AI worlds where cultural and political divides are reinforced, not bridged? That’s a risk in my view. But humans can prevent that happening, I would say, assuming everyone’s on the same page. And as we were saying, that is not the case, right? So it’s an interesting time. I think it’s a fascinating time to be a communicator with all this going on. Because as you pointed out, Silvia, earlier, there are

communicators who are fearful of this, who don’t know what to do about this because they’re not getting trained. I would say this is easy to say and maybe not easy to actually do for some people. But grasp the nettle, as the saying goes: get to know these technology tools and how you can, in a sense, leverage them to take your humanist message to others in your organization, particularly the leadership, to bring that human voice into the conversation about deploying AI,

Silvia Cambie (29:11)
Yeah.

@nevillehobson (29:27)
and helping people understand that what’s really important is to bring that human voice into it. It’s not just about efficiency, you know, and all that stuff, and speed of doing it all. It’s also about what people believe, what’s in it for people, what’s the value to individuals in understanding and accepting their role and their values in something like this that’s happening. So it gives us all food for thought, right?

Silvia Cambie (29:35)
profitability.

Yeah.

Yeah, indeed. I was really very happy to see those developments. And you shared with me an article a few days back, something coming out of the UN, where the UN was saying that, you know, AI is too Western-centric, it’s too focused on the global North, and the global South is, you know, bound to suffer from that.

@nevillehobson (30:14)
Mmm.

Silvia Cambie (30:18)
Well, you and I have lived and, you know, worked in different countries, and we know how important culture and diversity of points of view are. And I think it’s very healthy to have new approaches to AI that are going to challenge the main narrative, i.e. the Silicon Valley narrative. Also because sometimes, you know, the

messages we get out of Silicon Valley are kind of big brother: we have, you know, we are reaching AGI, no, we’re not reaching it, well, we’ve reached it, I don’t know. So it’s kind of, you know, very strange and very sort of big brother: I’m going to tell you when I think it’s right, but, you know, at the moment I’m not telling you the truth. So I think those

points of focus, those alternative approaches, you mentioned Latin America, you mentioned the Kingdom of Saudi Arabia, I think it’s very interesting because in that way there will be kind of competition, quote unquote, to Silicon Valley. There will also be more transparency, but also there will be an awareness of the fact

that as Monsignor Tighe said, technology is never neutral. Technology reflects the mentality of those who create and develop it. So we want diversity of cultures, diversity of points of view. We have to make sure that we collect different voices.

and we channel those into the development and the creation of AI and AI applications. So I think this is a very exciting development, particularly the one from Saudi Arabia. I worked a while ago with Saudi.

developers and communicators on social media and social media campaigns. And they’re very, very creative and they were at the forefront of social media. So I am expecting something really interesting and sophisticated from Saudi Arabia. And so, yeah, this is a very good development all in all. And also it’s a good development for communicators, right? Because communicators, a lot of our

colleagues are involved in cross-cultural communication. They work for multinationals, they have to spot those voices and bring them to the forefront. So this will be inspirational for them, right? They will be able to tell their bosses, their board, look, it’s not just this AI application that comes out of Silicon Valley that you can use. There is an AI application in Peru.

@nevillehobson (32:44)
Yes, absolutely.

Silvia Cambie (33:08)
that has a lot of users and is very efficient and effective. And why not use that for our operations in Latin America? So that gives us tools and ammunition to challenge the narrative.

@nevillehobson (33:26)
Yeah, absolutely. No, that’s very true. This has been a great conversation, Silvia. And I think the wrap-up, as it were, the extension from the interview, just sharing these additional thoughts, hopefully our listeners will find that complementary to having listened to the interview. And listeners, you have, haven’t you? Haven’t you? You have listened to it. If you haven’t, there’ll be a link to the episode in the show notes for this episode.

Silvia Cambie (33:44)
Yeah, I have, I have, absolutely.

@nevillehobson (33:54)
And indeed, much of what we discussed in this episode, there’ll be links to some of those topics in the show notes as well. So let me conclude by saying Silvia, it’s been a pleasure having you as guest co-host on this episode. So thank you very much for joining in.

Silvia Cambie (34:08)
Thank you for having me, Neville.

@nevillehobson (34:12)
So, like I said, the link to the interview with Monsignor Paul Tighe will be in the show notes for this episode. If you have any comments you’d like to share about what Silvia and I have talked about, then please do. You can do that through the usual channels that we mentioned. But in particular, you could send us a voicemail; there’s a way to do that on the FIR website.

You can send us email at fircomments@gmail.com. Increasingly, we’re noticing we’re getting comments quite significantly on LinkedIn. People aren’t actually sending us comments directly anymore. That seems to have fallen out of favor. But conversations build on LinkedIn. FIR doesn’t have its own page on LinkedIn. So you’ll find those comments typically on posts from Shel or me under our own names. But increasingly, others are posting about what they heard in FIR.

So you’ll find lots there. On other social channels, we have a community page on Facebook. And we also have a handle for FIR on Bluesky. And then there’s our individual ones too.

So thanks, everyone, for listening. And if Shel were here, he’d wrap this up by saying, that’ll be a 30 for this episode of For Immediate Release.

The post FIR #480: Reflections on AI, Ethics, and the Role of Communicators appeared first on FIR Podcast Network.

  continue reading

50 episodes

Artwork
iconShare
 
Manage episode 505305044 series 3447469
Content provided by Neville Hobson and Shel Holtz, Neville Hobson, and Shel Holtz. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Neville Hobson and Shel Holtz, Neville Hobson, and Shel Holtz or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

In this reflective follow-up to our FIR Interview in July with Monsignor Paul Tighe of the Vatican, Neville and guest co-host Silvia Cambié revisit some of the key themes that resonated deeply from that conversation.

With a particular focus on the wisdom of the heart – a phrase coined by the Vatican to contrast with the logic of machines – Neville and Silvia explore the ethical responsibilities communicators face in the age of artificial intelligence.

The discussion ranges from the dignity of work and the overlooked realities of outsourced labour, to the limitations of technical expertise when values and human well-being are at stake.

Silvia expands on her Strategic article focusing on precarious workers, while Neville revisits ideas shared on his blog about the Church’s unique role in advocating for inclusive, human-centred dialogue around AI.

Above all, this episode highlights how communicators are uniquely positioned to help organisations navigate the moral and societal questions AI presents – and why they must bring emotional intelligence, narrative skill, and ethical awareness to the forefront of this global conversation.

Topics Covered

  • The idea of wisdom of the heart vs logic of the machine
  • Redefining human intelligence in the AI era
  • The Vatican’s call for a global, inclusive debate
  • Dignity of work and the reality of outsourced labour
  • What ethical AI really means – beyond compliance
  • Why communicators must be part of the AI conversation

Links from this episode:

The next monthly, long-form episode of FIR will drop on Monday, September 29.

We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

Special thanks to Jay Moonah for the opening and closing music.

You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on [Neville’s blog](https://www.nevillehobson.io/) and [Shel’s blog](https://holtz.com/blog/).

Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.


Transcript (from video, edited for clarity):

@nevillehobson (00:03)
Hello everyone and welcome to episode 480 of For Immediate Release. I’m Neville Hobson in the UK. Shel’s away on holiday, but I’m delighted to be joined by Silvia Cambié as guest co-host for this episode. Welcome Silvia.

Silvia Cambie (00:17)
Thank you Neville, delighted to be here today.

@nevillehobson (00:21)
Excellent. Glad you said that. So in this short form episode, we’re going to spend time on an interview we did in late July that you, Shel and I did for an FIR interviews episode. We interviewed Monsignor Paul Tighe from the Vatican. He played a central role in shaping the church’s thinking on artificial intelligence and its broader societal impact.

He was instrumental in the development of Antiqua et Nova, the Vatican’s note on the relationship between artificial intelligence and human intelligence published in January 2025. In our interview, Monsignor Tighe offered a powerful reflection on how AI challenges us not only technically, but also morally and spiritually. He urged us to consider what makes us human in an age of machines, calling for a global conversation grounded in dignity.

agency and what the Vatican calls the wisdom of the heart. So in this episode, Silvia and I want to share what resonated most for us from that conversation and why we believe communicators have a vital role to play in shaping this future. I mentioned this before during the interview, Silvia, that you were instrumental in securing that interview. So tell our listeners, how did it come about?

Silvia Cambie (01:38)
Yes, indeed Neville. So you and I were talking in the spring when Pope Leo XIV was elected and we were talking about his background in math and science. And so on top of that, the Vatican has been contributing their views to a lot of papers like Antiqua et Nova to the Minerva Dialogue, which is a forum that basically collects views about the human

the interaction between humans and AI and the dignity of work. So we were thinking of bringing these voices to the forefront and in particular in relation to your listeners, your listeners and Shel’s listeners who work in comms and work in change management and are often confronted with the moral aspects, values.

the ethical part of governance. And at the moment, they’re looking for a North Star because we are at the forefront as communicators of this wave of AI introductions, AI pilots. But we, at the same time, we often lack guidance. So that’s why we wanted to collect these views from the Vatican, from Monsignor Tighe.

relate them to our work, make them very concrete and kind of actionable, something for our listeners to use, to be able to use. And so I think you were mentioning before what resonated with us, with you and me. And so we had our conversation with Paul Tighe back at the end of July. And then we

@nevillehobson (03:10)
Get it.

Silvia Cambie (03:25)
also listened to a lot of the podcasting podcast interviews he’s done. And we’ve read, you know, articles he’s written and so on. And I think something that resonated ⁓ a lot with me is really the the fact that he believes that technology is never neutral. It’s always the product of a mentality of a culture. And technology is often.

@nevillehobson (03:36)
Hmm.

Silvia Cambie (03:52)
created, produced, programmed by people who, you know, focus on profit, focus on ⁓ productivity. And at the moment, there is a sort of a new trend because of AI that people have to adapt to the demands and pace of machines. A lot of people have to have deadlines these days set by algorithms.

And that is creating a certain dynamic which I have witnessed many times when I work in managed services, which is you’re basically following the rhythm of a machine and you have no time to think, you have no time to develop new ideas, to stop and ponder and get insight out of what you’re doing, out of what your client needs. So at the end of the day, everybody loses.

⁓ You don’t have fresh ideas. The client doesn’t get a fresh view or, you know, fresh recommendations on how to do things. So it’s all very mechanistic and it’s a real risk out there that people, you know, will have to follow the rhythm of machines in the work. And therefore what Paul Tighe mentioned.

which is the wisdom of the heart, as you mentioned before, the ability to relate to other people, how you relate to your client and solve their problems. So I think I’ve seen that in my work and I think that is a real risk. And we have to be aware of that. And as communicators and change managers, there’s a lot we can do because we are on the front lines.

And yeah, so I think this point about technology, that technology is never neutral and that ⁓ there’s this risk and danger that we will have to follow the pace of machines and lose the wisdom of the heart and lose the ability to draw insight from what we do. That’s something that really resonated with me.

@nevillehobson (06:01)
Yeah, I understand that. It’s similar. I was also thinking that one element that did nudge us together to do this was what we had observed, what we’d read and seen even in the prior months during spring and summer. In fact, really since Pope Leo was elected to the papacy and understanding his background in science and mathematics.

But also what struck me that I noticed was his knowledge, his ability, as it were, to understand the role of social media in communicating with people. And we noticed that the Vatican was pretty proactive on many social channels. And indeed, Paul Tighe was at one point ⁓ in the Dicastery of Communication, kind of like the Communications Department, in charge of all of this.

So they have a track record, a history, if you will, of knowing how to use social channels to engage with wide audiences, not just the faithful of the church, but broadly a wider audience. And that’s something we observed. And then this document, Antiqua et Nova, I found it an extraordinary document of what it set forth, what it described.

and the focus in particular on that phrase, wisdom of the heart, that resonated very strongly with me. I found it interesting too that the conversations that the Vatican had been having, notably with Paul Tighe, but not only ⁓ Monsignor Tighe, others too, with leaders of Silicon Valley companies, of the big tech firms in Silicon Valley, Mark Zuckerberg of Meta

We’ve had, you know, we’ve seen more Google, Microsoft, others gathering in a number of instances over the past year or so, ever since Pope Francis’s time to talk about this, where the Vatican was able to introduce this theme, this broader theme of the wisdom of the heart. And it struck me too, that Paul Tighe was quite clear and mentioned the Vatican is not claiming

expertise in AI systems or algorithm design, which by the way, struck me too. We keep talking about the machines. It’s not machines, it’s algorithms we should be worried about. Instead, it offers something that the tech industry and many governments sorely lack. I agree with this completely, a deep concern for long-term consequences you nudged on that point, Silvia, just now, and the consistent voice on the value of human dignity, agency and solidarity. So the wisdom of the heart,

Silvia Cambie (08:09)
Thank

@nevillehobson (08:29)
is a phrase that appears in Antiqua et Nova as part of its final reflections. And it says this: “We must not lose sight of the wisdom of the heart, which reminds us that each person is a unique being of infinite value, and that the future must be shaped with and for people.” And that’s a pretty straightforward message. It’s simple. Perhaps it could be even simpler, actually, but I’ve seen others alluding to this idea of this is about people, not just the tech recently.

So for instance, subsequent to the interview, and this was actually quite recently, about a week or so back, Mustafa Suleyman, the CEO of Microsoft AI, wrote in an essay that we must build AI for people, not to be a person. In other words, AI is not a person. We hear a lot about, and I’ve had conversations with people about this, these so-called personas, the way in which you can create something that’s a duplicate of you almost. It’s like a version of you as a person.

I think that’s crazy to do that, to be frank, because that reinforces everything that we don’t want reinforcing, if you will. But Suleyman makes the case that the real risk we face is not AI suddenly waking up with consciousness, as some people talk about, but people being convinced that it has, because it’s not sentient. That’s a firm belief that I have. These are electronic devices and tools, not actual

versions of people. We’re not there yet. That’s quite a way away, I would say, if ever. But Suleyman goes on to say, this is the interesting bit to me, I want to create AI that makes us more human, that deepens our trust and understanding of one another and strengthens our connections to the real world. We won’t always get it right. But this humanist frame provides us with a clear North Star to keep working towards. I mean, that couldn’t be simpler either, could it? And this is the head.

of a division of one of the biggest technology companies on the planet, Microsoft, saying that. And I’ve seen others in the industry saying similar things recently. So maybe this is beginning to get attention. And I can’t say, of course, that it’s a direct result only what the Vatican has been talking about, but that surely must be having an influence. So I summarized all this just for me. Quite simply, emotional, moral and ethical intelligence must guide communicators response to AI.

The big question is how,

Silvia Cambie (10:50)
Yes, indeed. And I also liked very much the article by Mustafa Suleyman because I think it’s, as you said, he pointed out the real danger, not that the machines are going to wake up and, you know, kind of take over the world and pretend they’re conscious. It’s more that they are that people.

@nevillehobson (11:06)
Take over the world.

Silvia Cambie (11:14)
will get used to interacting with them and will expect really kind of seek that human aspect in the machines and will also kind of seek approval from AI, from algorithms. That’s also something that Suleyman is cautioning us against. also, so that can create psychosis, stress, anxiety,

people being disenfranchised at work. And I think that there is a quote by UNESCO that I really like, and I have used it in the article I wrote for Strategic, the online platform for communicators. It says that ⁓ AI is about anthropological disruption, right? It’s not only how the…

@nevillehobson (11:48)
Hmm.

Silvia Cambie (12:04)
machines, the algorithms function, it’s how humans react to it. And to answer your question about what communicators can do, because indeed we are at the forefronts, we talk to people, we hear about their needs, about their anxieties and worries at work. So I think there are lot of attempts at the moment to

⁓ wrap some governance around AI, AI applications, rollouts. And what I’ve seen is, know, centers of excellence being created in companies. Those centers of excellence oversee AI pilots, for instance, the progress. So, and have, you know, the usual suspects sitting on them, which are, you know, people from IT, developers.

But I think it’s very important that communicators and change managers become part of those fora because communicators know how to talk to people. Again, they’ve been doing that for forever. That’s their bread and butter. Also, they can relate to previous tech rollouts, you know, like a workplace technology and how people had reacted to that. So there is all that.

institutional knowledge that is needed now because this shift is so unprecedented. So I kind of cringe every time people show me a COE and AI COE made up only of IT people and developers because that’s not the way to go about it. I really like a start,

statue mentioned a fact you mentioned you and Shel mentioned in a previous FIR episode that I think you were quoting ⁓ studies by MIT and HR dive which says that people are expected to use AI in their daily work but they are not receiving proper training so they are kind of very confused about you know what

you know, this is going to hit my performance indicators. What am I supposed to do? They’re not training me. How am I going to use Co-Pilot? I’m going to download Chat GPT and do my own thing and show that I’m doing something and that hopefully will be enough for my company. So all that needs to be structured. And again, communicators have the knowledge. They have the institutional memory. They have the means and also

they know where the different voices sit in a company. Like when we do research before rolling out AI, we create workshops, we’re representatives from different parts of the company. So in that case, communicators know how to spot those voices because we have worked on on rollout projects before.

@nevillehobson (15:03)
Hmm.

Silvia Cambie (15:07)
how people react, where pockets of resistance might be found in the enterprise. So I think that it is paramount that we allow communicators and change managers to participate in those bodies that are being created for AI governance. And obviously that’s also a way

to kind of channel what you were saying Neville, you know, the human aspect, what makes us human. It’s the ability to relate to other people. is insight, it is emotional intelligence. And it’s all things that are really needed these days because of this shift. And it’s kind of, you know, a paradox that we are so focused on the technology now, but at the same time,

we would need to focus even more on the human aspect because this challenge is so huge that people are just not prepared for it. And we really need to focus on the human aspect, their abilities, what makes us human in order to enable people to deal with it, right? In order to enable the training, in order to make people feel that they are equipped

@nevillehobson (16:24)
Thank

Silvia Cambie (16:26)
sufficiently equipped for it. So I also would like to, there is a quote by Pope Francis that I really like, late Pope Francis. He said, this is not an era of change. This is a change of an era. And I firmly believe in that, you know, everything we’ve been saying Neville in, also in our interview with Paul Tighe leads to that.

But I think that in order, so because this change is so huge, we really have to empower people in an unprecedented way and communicators are very well positioned for that.

@nevillehobson (17:06)
Absolutely agree with that. I think you mentioned training indeed, Shel and I discussed that topic in that recent episode of FIR where people feel they’re not getting training on the one hand and on the other hand, there are companies that just aren’t provided because they don’t think it’s worthwhile. There are others though that are doing it quite well. So it’s very, very patchy. It’s not universal. But I think

The role of the communicator then is to develop and deliver that but there’s also another aspect which to me is the it touches directly on on what we’re currently discussing, i.e. the human or the humanity element, if you will, that, you know, organizations are looking at adopting AI to improve their efficiency to ⁓ improve their productivity and they will enter scale more without looking at

this aspect? Are they asking the human questions? And that’s the role, in my opinion, of the communicator well placed to do that. So three questions I wrote down that could be where communicators are able to introduce this element in their conversation. Does this technology help deepen trust and empathy? Or does it risk eroding them? That’s a valid question, in my view.

Are we building systems that reintroduce conscience, care and context into conversations, or are we defaulting only to efficiency and output? And the third one, are we ensuring that AI strengthens our connection to each other rather than replacing those with illusions? And I think there are undoubtedly at least a dozen more, but to me, those are great ones to start with that, in a sense, force attention on this rather than just those technically

valid, yet ⁓ stale approaches to all of this. It dehumanizes, if you like. And I think the, just briefly going back to Mustafa Suleyman’s North Star, as he references quite clearly, and the Vatican’s wisdom of the heart, there’s an essential reminder in all of that, to me, which is quite simple to grasp.

To stay human in the age of AI is to place empathy, dignity and care at the heart of design and use, not simply efficiency or the ways algorithms shape our actions. Suleyman directly references that in his essay. And he’s the first technology leader I’ve seen publicly doing that the way he did. It was very clear and is a long essay, by the way, very long, worth reading. So that’s encouraging to see that and it’s worth.

Silvia Cambie (19:07)
Hmm.

Mm-hmm.

@nevillehobson (19:30)
I think communicators looking for a kind of a something to hang a hook on. This is it in my view for communicators to do that. that, and I think quite clearly is how to address the question we’ve been asking ourselves. How can communicators help democratize the conversation side of organizations? This is one way to go about it, I think.

Silvia Cambie (19:51)
Yeah, indeed. You know, conscience, care and context. Those are very important aspects when you are rolling out rolling out AI and dealing with people’s reactions. And I think those questions you asked are very powerful and they are a good start for communicators to kind of make people think, right? This isn’t just about the tech.

This isn’t just about the efficiencies this app is going to create. You’re still dealing with people. Your employees are people, your clients are people, your regulators are people. So you’re still dealing with them. And I think that it’s about empowering people to ask the right questions, right?

So I was referencing before to those COEs that are being created to monitor AI pilots in companies. Well, the conversations there tend to be very technical and always focused on the tech, know, the rollout, the different waves of the rollout. And I think, again, communicators can bring back the human aspect. How are people reacting to this?

Is this making them more happier in their work or is this making them more insecure? As you were saying before, lot of companies are not providing training or not providing the right training. So that makes them insecure. Are they getting more and more confused in the way they are ⁓ dealing with their clients and customers? Because if you have AI that takes

over part of that relationship, what is left for them to do? So there is, it’s a very complex scenario that has to be considered from different aspects. And again, you you mentioned the Suleyman’s North Star and the Wisdom of the Heart ⁓ mentioned by Monsignor Tighe. I think these are kind of, sort of, these,

thoughts can inspire communicators, can inspire people who work in AI governance and make them pause and think that it’s important to focus on these aspects, to focus on what Suleyman was saying, the fact that people might expect AI ⁓ to be, might think that AI is conscious and they might.

establish, you know, develop a relationship of a certain kind with it, so that they end up depending on the algorithm, expecting its approval. And so I think that now is the time to stop and think and introduce those thoughts into the conversations that are going on in companies about AI. And I think

@nevillehobson (22:33)
Hmm.

Silvia Cambie (22:51)
we've got to be brave. We have to do it. I know that often, as I was saying before, you know, technology and technical aspects are basically overriding other aspects just because of the pace of the project, just because of the pressures that people are under. But I think it's very important to introduce these thoughts

into the conversation. And again, you know, it's a moving target, right? We will continue to look for voices like Monsignor Tighe's, like Mustafa Suleyman's. I'm sure as we progress there will be others, and there will be other aspects. But I think that it's just this

@nevillehobson (23:17)
Mm.

Silvia Cambie (23:37)
human aspect, and the interaction between humans and AI seen from the point of view of humans, that is important. We have collected a voice from the faith community. We have looked at the paper that Suleyman published, which is really very thought-provoking. And we will be looking for other voices going forward.

@nevillehobson (23:52)
Mm-hmm.

Silvia Cambie (24:04)
But I think for communicators, it’s very important to continue to be open to these voices, right? It’s also a way for us to get backup and support when we need to shift the conversation in a company towards the user, towards the rights of the user, towards the dignity of the user and not just

about the technologies. And then, you know, these are all tools that we can use to make our point and to make our point stick. So I think, as I said, this is a moving target. We'll have to do a lot more work on this, but it is fascinating for communicators, because

you know, I often get asked by people in comms: how do I shift to tech when I am not a developer, I don't have the right knowledge in AI, and I don't know how to build an LLM? My answer is basically this, right? Bring the human voice into the conversation. You know how to talk to the base,

you know how to collect their voices and their views; make sure that they are heard, because that is important as we go forward. So I think that is what communicators can do. And that is a very important role indeed at the moment.

@nevillehobson (25:37)
Yeah, I agree with you. That's very good, Silvia. And just to add one final thing to this, a parallel development that is very much a part of all of this is what I'm calling the end of AI universalism. Currently we've got an environment, if you will, and an assumption, let's say, that one or two global platforms will serve the world. And we're talking about the

tech tools, the chatbots, the means by which people connect with others and discover things for themselves. It's Silicon Valley based and tends to be in English more than any other language. But we're seeing some interesting things happening. In Latin America, Peru is leading the charge on building a Spanish-language chatbot that serves communities throughout Latin America, taking into account

cultural nuances, language differences, and the values unique to those communities in that part of the world, which are very much not global North style environments, if you like. We've got what the Vatican is doing, which we've just been discussing from that interview with Monsignor Tighe, calling for an AI shaped by human dignity. And then,

just this past week, Saudi Arabia is asserting its cultural sovereignty, let's put it that way, in digital form with the launch of an Arabic-language chatbot called HUMAIN Chat. It's based on an Arabic large language model, the largest in the Arabic-speaking world, the developer says. And that's intended to be

targeted at people of the Islamic faith globally, that's two billion or so, and Arabic speakers throughout the world, 400 million or so of them. At the moment, it's just in Saudi Arabia. I've seen quite a bit of buzz building up about this over the past week, mostly focused on the tech, because it is quite new. The point I'm making, though, is that with HUMAIN Chat and the others,

these are signs that the future of AI will not be written in only one language or framed by just one set of values. And that's something I think we should all be paying very close attention to. It broadens out, if you will, the work of bringing the human element into the conversation, where you've got tools that can be a great help towards that goal. So

I find these very interesting, Silvia. Are we looking at fragmentation of AI universalism? It could be enrichment, as I see it. So that's part of the picture, too. The human element is essential to all of this, and these are all parts of the jigsaw that's rapidly being completed, if you will.

Silvia Cambie (28:21)
Indeed.

@nevillehobson (28:32)
Are we seeing the potential risk of creating parallel AI worlds where cultural and political divides are reinforced, not bridged? That's a risk, in my view. But humans can prevent that happening, I would say, assuming everyone's on the same page, and as we've been saying, that is not the case, right? So it's an interesting time. I think it's a fascinating time to be a communicator with all this going on. Because, as you pointed out earlier, Silvia, there are

communicators who are fearful of this, who don't know what to do about it because they're not getting trained. I would say this is easy to say and maybe not easy to actually do for some people. But grasp the nettle, as the saying goes: get to know these technology tools and how you can, in a sense, leverage them to take your humanist message to others in your organization, particularly the leadership, to bring that human voice into the conversation about deploying AI

Silvia Cambie (29:11)
Yeah.

@nevillehobson (29:27)
and helping people understand that what's really important is to bring that human voice into it. It's not just about efficiency, you know, and the speed of doing it all. It's also about what people believe is in it for them, what the value is to individuals in understanding and accepting their role and their values in something like this that's happening. So it gives us all food for thought, right?

Silvia Cambie (29:35)
profitability.

Yeah.

Yeah, indeed. I was really very happy to see those developments. And you shared with me an article a few days back, something coming out of the UN, saying that, you know, AI is too Western-centric, too focused on the global North, and the global South is bound to suffer from that.

@nevillehobson (30:14)
Mmm.

Silvia Cambie (30:18)
Well, you and I have lived and worked in different countries, and we know how important culture and diversity of points of view are. I think it's very healthy to have new approaches to AI that are going to challenge the main narrative, i.e. the Silicon Valley narrative. Also because sometimes, you know, the

messages we get out of Silicon Valley are kind of Big Brother: we are reaching AGI, no, we're not reaching it, well, we've reached it. I don't know. So it's very strange and very sort of Big Brother: I'm going to tell you when I think it's right, but, you know, at the moment I'm not telling you the truth. So I think those

points of focus of those alternative approaches matter. You mentioned Latin America, you mentioned the Kingdom of Saudi Arabia. I think it's very interesting, because in that way there will be competition, quote unquote, to Silicon Valley. There will also be more transparency, and an awareness of the fact

that as Monsignor Tighe said, technology is never neutral. Technology reflects the mentality of those who create and develop it. So we want diversity of cultures, diversity of points of view. We have to make sure that we collect different voices.

and we channel those into the development and creation of AI and AI applications. So I think this is a very exciting development, particularly the one from Saudi Arabia. I worked a while ago with Saudi

developers and communicators on social media and social media campaigns. They're very, very creative and they were at the forefront of social media. So I am expecting something really interesting and sophisticated from Saudi Arabia. So, yeah, this is a very good development all in all. And it's also a good development for communicators, right? Because a lot of our

colleagues are involved in cross-cultural communication. They work for multinationals, they have to spot those voices and bring them to the forefront. So this will be inspirational for them, right? They will be able to tell their bosses, their board, look, it’s not just this AI application that comes out of Silicon Valley that you can use. There is an AI application in Peru.

@nevillehobson (32:44)
Yes, absolutely.

Silvia Cambie (33:08)
that has a lot of users and is very efficient and effective. Why not use that for our operations in Latin America? So that gives us tools and ammunition to challenge the narrative.

@nevillehobson (33:26)
Yeah, absolutely, that's very true. This has been a great conversation, Silvia. As a wrap-up, as it were, an extension of the interview, just sharing these additional thoughts, hopefully our listeners will find it complementary to having listened to the interview. And listeners, you have listened to it, haven't you? If you haven't, there'll be a link to that episode in the show notes for this episode.

Silvia Cambie (33:44)
Yeah, I have, I have, absolutely.

@nevillehobson (33:54)
And indeed, for much of what we discussed in this episode, there'll be links to those topics in the show notes as well. So let me conclude by saying, Silvia, it's been a pleasure having you as guest co-host on this episode. Thank you very much for joining in.

Silvia Cambie (34:08)
Thank you for having me, Neville.

@nevillehobson (34:12)
As I said, the link to the interview with Monsignor Paul Tighe will be in the show notes for this episode. If you have any comments you'd like to share about what Silvia and I have talked about, then please do. You can do that through the usual channels we mentioned. In particular, you could send us a voicemail; there's a way to do that on the FIR website.

You can send us email at fircomments@gmail.com. Increasingly, we're noticing we're getting a significant number of comments on LinkedIn. People aren't actually sending us comments directly anymore; that seems to have fallen out of favor. But conversations build on LinkedIn. FIR doesn't have its own page on LinkedIn, so you'll find those comments typically on posts from Shel or me under our own names. But increasingly, others are posting about what they heard on FIR.

So you'll find lots there. On other social channels, we have a community page on Facebook, and we also have a handle for FIR on Bluesky. And then there are our individual ones too.

So thanks, everyone, for listening. And if Shel were here, he'd wrap this up by saying that'll be a -30- for this episode of For Immediate Release.

The post FIR #480: Reflections on AI, Ethics, and the Role of Communicators appeared first on FIR Podcast Network.
