FIR #482: What Will It Take to Stop the Slop?
We’ve all heard of AI slop by now. “Workslop” is the latest play on that term, referring to low-quality, AI-generated content in the workplace that looks professional but lacks real substance. This empty, AI-produced material often creates more work for colleagues, wasting time and hindering productivity. In the long-form FIR episode for September, Neville and Shel explore the sources of workslop, how big a problem it really is, and what can be done to overcome it.
Also in this episode:
- Chris Heuer, one of the founders of the Social Media Club, is at work on a manifesto for the “H Corporation,” organizations that are human-centered. A recent online discussion set the stage for Chris’s work, which he has summarized in a post.
- Three seemingly disparate studies point to the evolution of the internal communication role.
- Researchers at Amazon have proposed a framework that can make it as easy as typing a prompt to identify a very specific audience for targeted communication.
- Communicators everywhere continue to predict the demise of the humble press release, but one public relations leader has had a very different experience.
- Anthropic and OpenAI have both released reports on how people are using their tools. They are not the same.
- In his Tech Report, Dan York looks back on TypePad, the blogging platform whose shutdown is imminent; AI-generated summaries of websites from Firefox; and Mastodon’s spin on quote posts.
Links from this episode:
- Neville’s remarks on the human-centered organization, along with Chris Heuer’s original LinkedIn post
- Building a Shared Vision: Organizations Advancing Human-Centered AI
- Defining the Human Centered Organization
- The Birth of the H-Corp
- The Effects of Enterprise Social Media on Communication Networks
- AI misinformation and the value of trusted news
- Corporate Affairs is Ripe for AI Disruption
- AI-Generated “Workslop” Is Destroying Productivity
- AI ‘Workslop’ Is Killing Productivity and Making Workers Miserable
- AI “workslop” sabotages productivity, study finds
- AI isn’t replacing your job, but ‘workslop’ may be taking it over
- workslop: bad study but excellent word
- An Explainable Natural Language Framework for Identifying and Notifying Target Audiences In Enterprise Communication
- How smart brands are delivering Netflix-level personalization with AI
- We Tested a Press Release in ChatGPT. The Results Changed Everything.
- LinkedIn post from Sarah Evans on press release performance in AI search results
- Sarah Evans’ 10 PR myths
- Ethan Mollick’s LinkedIn post about how people are using AI for work
- Here’s How People Use AI, Per OpenAI, Anthropic And Ipsos Data
- OpenAI and Anthropic studied how people use ChatGPT and Claude. One big difference emerged.
- Anthropic Finds Businesses Are Mainly Using AI to Automate Work
- How people actually use ChatGPT vs Claude – and what the differences tell us
Links from Dan York’s Tech Report
- Typepad is Shutting Down
- Vimeo to be acquired by Bending Spoons in $1.38B all-cash deal
- On Firefox for iOS, summarize a page with a shake or a tap
- Introducing quote posts
- Quoting other posts – Mastodon documentation
The next monthly, long-form episode of FIR will drop on Monday, October 27.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on [Neville’s blog](https://www.nevillehobson.io/) and [Shel’s blog](https://holtz.com/blog/).
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Shel Holtz:
Hi everybody, and welcome to episode number 482 of For Immediate Release. This is our long-form episode for September 2025. I’m Shel Holtz in Concord, California.
Neville Hobson:
And hi everyone, I’m Neville Hobson in the UK.
Shel Holtz:
As I mentioned, this is our long-form episode. That means we’ll be reporting on six topics of interest to communicators. Interestingly, I think all of them are connected either directly or indirectly to artificial intelligence. I also have Dan York here with an interesting report. You and I both have a few things to say about one of the topics that Dan is reporting on.
As always with our monthly episode, we have some housekeeping before we jump into the topics. Neville, you’re going to catch us up on the items we reported on since the last episode. And we have some comments on some of these reports. That’s an opportunity to remind everybody that we love your comments—please participate in this podcast by sharing them. It doesn’t have to be about something we reported on; you can introduce a topic. This used to happen all the time in the early days of the show—someone would say, “Why don’t you guys talk about this?” and we would, and it became the content of the show. So please leave a comment on the show notes or on LinkedIn—which is where most of our comments come from these days. People leave a thought or some feedback on our announcement of the new episode. You can also do that on Facebook in multiple places. And you can always go to the website and record a comment—there’s a button that says “Leave voicemail” and you can record a 90-second comment. Or send us an MP3 file to [email protected]. Lots of ways to participate in the show, and we really hope you will. We’ll share some of the comments we received in the last month, Neville, as you remind us what we talked about.
Neville Hobson:
Indeed. September was kind of an odd month because you were on holiday for two weeks and we didn’t do any live recording during that time, but we had a couple of short recordings tucked in our back pocket to publish in the interim. Let’s share what we did since the last monthly episode for August—that was on the 25th of August, episode 478. Our lead story explored when corporate silence backfires and how communicators can help leaders make better choices. We also discussed AI PR deepfakes and more, including Dan York’s report. That was a 90-minute show like this one is likely to be.
Episode 479 on the 1st of September—“Hacking AI Optimization vs. Doing the Hard Work.” Amid the rise of GEO, we discussed how brands are seeking workarounds to appear in AI-generated answers, but shortcuts don’t build trust. Old-school PR and marketing still matter. We got a comment on that one, right, Shel?
Shel Holtz:
We did, from Frank Diaz, who’s become one of our loyal listeners. He said, “This was my conclusion as well. Once I filtered everything…” He shared a checklist on mastering AI citation strategy for SEO on LinkedIn. We’ll include a link in the show notes for context.
Neville Hobson:
On the 9th of September we published episode 480, “Reflections on AI Ethics and the Role of Communicators.” You had already gone on holiday. This was a conversation between Sylvia Cambie—who was a guest co-host back in July when we interviewed Monsignor Paul Tighe from the Vatican about artificial intelligence—and me. We picked up where that interview left off with reflections on AI dignity, what really matters, and what caught our attention. Interestingly, that episode got more listens and downloads than the interview itself—perhaps because people were catching up after summer.
Episode 481 on the 24th of September—so you can see a nearly two-week gap there—“The Em Dash Panic: AI Writing and Misguided Assumptions. Can Tone and Authenticity Survive AI Polish?” We dove into the em dash kerfuffle that’s had communicators divided much of this year. You also explained how you turned 28 blog posts into a forthcoming book with AI’s help—classic AI-assisted writing in action. We had comments too, didn’t we?
Shel Holtz:
We did. Daniel Pauling wrote that the dash doesn’t come solely from training data—a point I’d made when I argued that you see so many dashes because the training sets include a lot of dash-heavy content. Daniel said it also comes from how generative AI is programmed to be more friendly and what it associates with friendliness. He referenced a post where he went into detail—we’ll include that link. John Cass had a thoughtful comment about how writing wasn’t rigid in the 17th century—Shakespeare even spelled his name multiple ways—arguing that language is a visual representation of speech and we should speak the language of our audience, not the textbook. He suggested anxiety around AI and writing often comes from our best writers, but human creativity is collective. I noted that Chris Penn recently wrote that AI won’t hurt creativity because creative people will keep creating. We saw that at an art auction on a cruise: people bidding thousands on works by young artists. The creative impulse persists. John replied that’s true, though some high-level creatives feel AI disrupts their thinking—maybe true for a few people; not everyone thinks the same way.
Neville Hobson:
Good essay-comment from John. And between all of that we recorded a new interview just before you went away—Stephanie Grober in New York, quite an authority on GEO—Generative Engine Optimization. The anchor for the conversation was: is GEO the next SEO, a passing fad, or good comms practice in disguise? We talked about what GEO means for communicators today and what to do about it. That was published on the 16th of September. So we had five podcast episodes since August—not bad, considering you were away half the month.
Shel Holtz:
Not bad at all. We are a content machine. And that machine continues with Circle of Fellows—the monthly panel among IABC Fellows. I was at sea for this one; Brad Whitworth moderated a discussion about what it means for communicators that hybrid appears to be winning as the preferred workplace configuration. Priya Bates, Ritzi Ronquillo, and Angela Sinickas participated. It’s up on the FIR Podcast Network.
The October episode sounds great: number 121. I’ll moderate at noon Eastern on Thursday, October 23. The topic: evolving roles and strategic goals. The description from Anna Willey: communicators are adapting alongside new tools and channels; strategic goals must align with organizational objectives as they impact brand reputation, enhance internal communications, and address ongoing change. Panelists so far: Lori Dawkins, Amanda Hamilton-Atwell, and Mike Klein, with a fourth to be named. You can join live and participate or catch the podcast. That wraps up our housekeeping—we managed to do it in less than 13 minutes. Right after this, we’ll be back with our Topics of the Month.
Neville Hobson:
Our first topic for September: Throughout this year we’ve returned again and again to one central theme—AI must be about people, not just machines. Whether we were talking about the future of work, managers’ roles in an AI-enabled world, or the Vatican’s perspective on AI ethics (“the wisdom of the heart” in our July interview with Monsignor Paul Tighe), the question has been the same: How do we ensure technology serves humanity rather than the other way around?
That’s the context for Chris Heuer’s latest work. Chris is an internet pioneer and serial entrepreneur—many of you will know him from the Social Media Club nonprofit he founded in the early days of social media, which reached 350 cities globally. Building on an online brainstorm Chris led on September 17—more than 50 people connected worldwide to discuss defining human-centered organizations, which I joined and wrote about on LinkedIn—Chris has published a follow-up titled “The Birth of the H-Corp” (H for humanity). It’s a bold attempt to define what organizations owe humanity in the age of AI.
The central concern: efficiency has become the dominant corporate narrative. He cites Shopify’s CEO saying managers must prove why AI can’t do the job before hiring another human. We referenced that in an FIR episode this summer. That kind of AI-first thinking risks eroding human dignity. Chris argues for an alternative: organizations must enable humans rather than replace them, reinvest AI gains back into people, and make empathy and ethics structural rather than optional.
What’s powerful is the recognition of tensions—for example, how AI can hollow out junior roles and undermine leadership pipelines. Participants flagged cultural sovereignty—the idea that AI shouldn’t just reflect Silicon Valley’s worldview but the diversity of human society. Chris’s goal is to draft an H-Corp manifesto later this year, likely in November. He likens this to the naming moment of social media: a concept that crystallizes shared ambitions and sparks a broader movement. It won’t be perfect, but it could serve as a north star for organizations that want to put human flourishing at the center of AI adoption.
For communicators, this is an important conversation: How do we frame the internal narrative so AI isn’t just about productivity and cost-cutting, but about augmenting human potential? How do we give shape to something like H-Corp so it doesn’t remain an ideal but becomes practical reality? It’s not about resisting AI or slowing progress; it’s about making deliberate choices so organizations put people at the center of change. Will communicators, leaders, and organizations seize the opportunity to shape AI for human flourishing—or let the technology shape us by default? Could H-Corp become a rallying concept as ESG or CSR did—or will it get diluted into corporate sloganeering? What role should communicators play to keep it real and practical? I bet you’ve got ideas.
Shel Holtz:
Of course—otherwise why would we be here? First, I think AI is less responsible for this situation than the general nature of business, especially in a capitalist society. I’m not going to get philosophical about capitalism—I’m a proud capitalist. I like making money and would like to make more. If the goal of an organization—especially a public corporation with fiduciary responsibilities—is to earn a return for investors, then when AI comes along it makes complete sense that leaders ask, “How does this help us maximize returns?” Reducing costs and staff and having a machine work 24/7—of course that’s where leaders go first. It doesn’t mean that’s what organizations should do, but I get why they do it.
Also, with any new technology, the first thing we do is what we were already doing—but better. The uses no one had thought of come later. I don’t think generative AI has been around long enough for that next phase yet. We’re still using it to do what we’ve been doing; later we’ll discover new, more human-centered applications. For some organizations that will come; for others, it won’t—they’ll stay focused on maximizing profit.
Another issue: most organizations aren’t tackling AI strategically in its early days. There’s ample data showing people aren’t looking at this holistically. I was just talking at my company about the entry-level construction job—project engineer. Much of that role may be automated. Submittals, for example, take time and expertise; AI could produce them in minutes with the right inputs. Does that mean fewer project engineers? Our conversation was: how do we redefine the role so they still learn what they need to move up—project manager, project director, superintendent, whatever? The job won’t be the same, but it remains foundational. Same in communications: the entry-level comms role won’t be the same job in five years. Does the job go away—or do we rethink it? Smart organizations will rethink it—that’s a humanistic approach because we’re not dispensing with the role; we’re redefining it.
Neville Hobson:
It’s a big topic. I don’t disagree that some companies will shut their eyes to anything beyond “we use this to make money.” But the conversation—at the heart of what Chris is talking about—is helping organizations see the people. Language matters, too—how we talk about “replacing” versus “augmenting” can devalue human work.
Another argument from the brainstorm: human-centered talk often defaults to privileged voices and excludes marginalized groups. There’s a perception that there’s one version of AI—English, global north. What about the global south? Some countries have launched Spanish-language chatbots relevant to their populations; ChatGPT may not be the relevant tool for them. We should stimulate conversations in organizations: “Yes, but think about this as well.” That can create discord, but it’s necessary.
This idea is worth promoting: don’t devalue people. Put them first. Yes, aim for profit—but how do we help our people help us make that profit? People suffer in change; they’re often last in line when tech is deployed. Let’s bring empathy back into organizations. The landscape is changing at light speed—new capabilities, “pulse” updates to mobiles, etc. I think Chris Heuer’s offering could become a rallying concept. With influential voices like the Vatican and others globally, maybe it gathers steam.
Shel Holtz:
It could. My skepticism is about incentives. Leaders are obliged to produce maximum returns. How do we connect the dots so they see something in this change aligned with their goals? That’s what I want to see in the manifesto—most manifestos dwell on what’s wrong and not how to fix it.
Neville Hobson:
Right—so the H-Corp manifesto expected in November becomes the template to address those questions: how do we include X, Y, Z? I sensed a groundswell of willingness on the Sept. 17 call. It’s a small group; getting the word out may persuade others to get involved. You’ve got to start somewhere. This could be a rallying concept.
Shel Holtz:
I’ll predict that in November this will be a theme for one of our FIR episodes.
Neville Hobson:
Maybe we interview someone—perhaps Chris.
Shel Holtz:
Could be Chris.
Neville Hobson:
If this strikes a chord, go to the Humanizing AI Substack (link in show notes), read Chris’s post introducing the H-Corp manifesto, and see if you want to get involved. It’s open—share ideas and see what happens.
Shel Holtz:
One thread running through much of our coverage is how digital tech is reshaping organizational communication, minute by minute. Three new reports over the last couple of weeks are fascinating on their own; together they create a big picture communicators must grapple with.
First: a major new study of enterprise social media (internal platforms like Slack, Teams, Viva Engage). Researchers studied 99 organizations adopting Microsoft Viva Engage (which grew out of Yammer). Enterprise social media made communication networks denser, more connected, and more democratic. Employees didn’t just talk to the same people—they formed new ties, especially weak ties across teams that spark ideas. Leaders and employees connected more directly, and influence was more distributed. Viva Engage, unlike siloed Teams channels, enables more open conversations around broader themes. This change breaks down silos and fosters innovation—critical when hybrid/remote work can leave people isolated.
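To make the study’s core measures concrete, here is a toy sketch of how network density and new cross-team ties can be read off a who-talks-to-whom graph. This is our illustration using the networkx library; the researchers’ actual tooling and data are not specified here, and the names and ties below are invented.

```python
# Toy before/after snapshots of a communication network, modeled as a simple
# undirected "who talks to whom" graph. All names and ties are invented.
import networkx as nx

before = nx.Graph([("ana", "ben"), ("ben", "cho")])  # siloed, chain-like chats
after = nx.Graph(before.edges)
after.add_edges_from([("ana", "cho"), ("ana", "dee"), ("cho", "dee")])  # new ties

# Density = actual ties / possible ties; higher means a more connected network.
print(f"before: {nx.density(before):.2f}")  # 0.67
print(f"after:  {nx.density(after):.2f}")   # 0.83, and a newcomer (dee) is bridged in
```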
Second: Boston Consulting Group estimates more than 80% of corporate affairs work, including communication, could be augmented or even automated by AI. The biggest gains come when organizations redesign processes around AI—not just bolt it on. For communicators: think proactively not just about writing faster, but re-imagining workflows with AI in the mix.
Third: VoxEU points to a serious risk. As AI makes it easier and cheaper to produce plausible misinformation, the value of trusted, credible information goes up—externally and internally. If employees can’t tell what’s credible—about competitors, market conditions, or even their own company—their decision-making is compromised. If misinformation creeps into internal channels, it can spread quickly through the very networks making us more connected.
Put together: enterprise social media can make networks open and innovative, but they’re vulnerable if we don’t ensure accuracy and trust. Hovering over that is BCG’s reminder that AI will disrupt a huge portion of what communicators do. The challenge: if we don’t take responsibility for credibility and quality, AI will amplify misinformation and mistrust. The opportunity: use AI thoughtfully to improve connection and personalization while leaning into our role as stewards of trusted information. Connection without credibility is fragile; credibility without connection is limited. Our job is to deliver both.
Neville Hobson:
Big challenge. The VoxEU report on AI misinformation and trusted news stood out. One interesting finding: once misinformation was identified, people didn’t disengage—they consumed more. People in the study’s treatment group were more likely to maintain their subscriptions months later. The report’s conclusion: when the threat of misinformation becomes salient, the value of credible news increases. How do you put that in place inside an organization?
Shel Holtz:
I remember my first corporate comms job at ARCO. Our weekly employee paper had bylines so employees knew who to call with story ideas. Bylines also establish credible sources—names employees learn to trust. As networks flood with information, people will gravitate to known credible voices. The same is true externally with content marketing: put a person behind the content so audiences recognize trustworthy outputs. We’ll need to build credibility with our reporters, thought leaders, and SMEs—internally and externally—so they become beacons of trust amid misinformation.
Neville Hobson:
VoxEU (focused on media) says if outlets maintain trust, the rise of synthetic content becomes an opportunity: as trust grows scarcer, its value rises, and audiences may be more willing to pay. Translate internally: employees won’t “pay,” but they will give attention to reliable, trustworthy writing—especially when the author is identified and credible. That seems like common sense.
Shel Holtz:
Agreed. Some employees don’t care what’s going on; they just do their job and go home. But if they’re overwhelmed with plausible-sounding contradictions, internal communications can become the trusted voice. People who didn’t pay attention before may start following channels and authors they’ve come to trust—if we consistently produce credible content.
Neville Hobson:
One line from VoxEU’s conclusion fits perfectly: the threshold to trustworthiness rises with the volume and sophistication of misinformation, meaning media outlets can’t stand still; they must continually invest in helping readers distinguish fact from fabrication, keeping pace with AI. Fits internally, too.
Shel Holtz:
Combine that with the other two reports: use enterprise social networks as channels for credible information and conversation, and use the BCG disruption to redefine our work so our time remains valuable even as 80% of tasks change.
Neville Hobson:
Okay, another buzzword: “workslop”—content that looks polished but is shallow or misleading, created with AI and dumped on colleagues to sort out. Harvard Business Review argued workslop is a major reason companies aren’t seeing ROI from AI; 40% of employees are dealing with it. But there’s a critique in Pivot to AI saying the data came from an unfiltered BetterUp survey—calling HBR’s article an unlabeled advertorial that shifts blame onto workers while pitching enterprise tools.
So two threads: “workslop” is a brilliant label for a real problem; but some coverage may itself be workslop. Questions: Is workslop a real productivity killer or just a catchy buzzword? What responsibility lies with leadership vs. employees? And how should we treat research that blurs into marketing?
Shel Holtz:
I think it’s real, though I don’t know that it’s as dire as painted. The first time I saw “workslop” was from Ethan Mollick on LinkedIn. He echoed Pivot to AI’s point: the term can shift blame onto employees told to “be more productive with AI” without leaders doing the hard work of rethinking processes or defining good use. Poor output becomes “AI’s fault”—that’s not leadership. For communicators, we should advocate responsible AI use from the top down, not just coach employees to cope.
Also, this is new-tech déjà vu. Remember desktop publishing? Suddenly every department cranked out a newsletter—because they could. It created information overload until companies set guidelines. Today, many orgs haven’t offered training, guidance, or frameworks for AI. People are experimenting—good!—but without prompt skills or evaluation skills, they’ll create workslop. We’ll see a lot of it until organizations get strategic about AI and define expectations and verification. We even did an episode on “verification” becoming a role—someone checking outputs for accuracy and credibility. We’ll see if that shakes out, but that’s where workslop comes from. I don’t think it’s a long-term problem; it will resolve itself, just as the glut of departmental newsletters eventually did.
Neville Hobson:
How do we address the eruption of AI-generated content? Even if it isn’t outright wrong, it’s too much to read—hurting productivity.
Shel Holtz:
Organizations need a strategic approach. Our CEO often says there will be a day the switch flips; if you’re not ready, you’re irrelevant. The orgs allowing prodigious workslop haven’t reached—or acted on—that conclusion. They need governance, training, and clear “assist vs. automate” boundaries.
Neville Hobson:
Thanks very much, Dan—terrific report. TypePad caught my attention. I was on TypePad from 2004, moved to WordPress in 2006, kept TypePad as an archive until 2021. Interesting—and urgent—to hear it’s ending. Migration is easy except images; that’s not trivial. I know three people still on TypePad with no idea the door’s about to shut. Good callout.
Shel Holtz:
I was never a TypePad user, but many early influential blogs were there—Heather Armstrong’s Dooce, PostSecret, Freakonomics before it became a podcast. We’ve been doing this long enough to cover birth, life, and death.
Neville Hobson:
We have. Dan also mentioned Mastodon introducing commenting—probably a big deal. I’m not hugely active there. What do you think?
Shel Holtz:
I still have a Mastodon instance—interested to dig in. I was more intrigued by the Vimeo item. They’ve struggled to define a niche in YouTube’s world—often pitching private, high-quality business video hosting. I still get pitched. But one constant headache in internal comms is getting the right message to the right people. If you’ve ever sent a company-wide update because you weren’t sure who needed it—or spent hours hunting down the right list—you know the pain.
That’s why a research paper from Amazon’s Reliability and Maintenance Engineering team caught my eye: “An explainable natural language framework for identifying and notifying target audiences in enterprise communication.” In plain terms: a system that lets you ask in natural language who should get your message, then the AI not only finds the audience but explains how it got there.
Example: “Reach all maintenance technicians working with VendorX’s conveyor belts at European sites.” The system parses that, queries a knowledge graph of equipment, people, and locations, and returns the audience—with reasoning (“here’s how we matched VendorX… here’s how we identified European facilities…”). Explainability is critical; in safety contexts you can’t trust a black box.
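As a thought experiment, here is a minimal sketch of that parse, query, and explain loop. To be clear, this is our illustration, not Amazon’s code: the directory, the function names, and the keyword matching that stands in for the paper’s language-model parser are all invented for the example.

```python
# Hypothetical sketch of the framework's flow: parse a natural-language
# request into filters, query a knowledge graph of people/equipment/places,
# and return the audience *with* a reasoning trace a human can verify.
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    role: str
    equipment_vendor: str
    region: str

# Stand-in for the knowledge graph; in practice this would be a graph store
# linking employees, equipment, and locations.
DIRECTORY = [
    Employee("Ana", "maintenance technician", "VendorX", "Europe"),
    Employee("Ben", "maintenance technician", "VendorY", "Europe"),
    Employee("Carla", "maintenance technician", "VendorX", "North America"),
    Employee("Dev", "site manager", "VendorX", "Europe"),
]

def parse_request(prompt: str) -> dict:
    """Keyword matching as a stand-in for the LLM parser the paper describes."""
    text = prompt.lower()
    filters = {}
    if "maintenance technician" in text:
        filters["role"] = "maintenance technician"
    if "vendorx" in text:
        filters["equipment_vendor"] = "VendorX"
    if "europe" in text or "european" in text:
        filters["region"] = "Europe"
    return filters

def find_audience(prompt: str) -> tuple[list[Employee], list[str]]:
    """Return matched recipients plus a step-by-step explanation."""
    filters = parse_request(prompt)
    trace = [f"matched filter {key} = {value!r}" for key, value in filters.items()]
    audience = [
        e for e in DIRECTORY
        if all(getattr(e, key) == value for key, value in filters.items())
    ]
    trace.append(f"{len(audience)} of {len(DIRECTORY)} people matched every filter")
    return audience, trace

audience, trace = find_audience(
    "Reach all maintenance technicians working with VendorX's conveyor belts at European sites"
)
print("\n".join(trace))
print([e.name for e in audience])  # ['Ana']
```

The trace is the point: a communicator can audit why Ana is included while Ben, Carla, and Dev are not, rather than trusting a black box.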
Implications: we struggle with over- and under-communication. We over-broadcast because we don’t trust lists; we miss people who need the info. A framework like this could make targeting as easy as writing a sentence—with transparency you can trust. It mirrors marketing’s move from “Hi, {FirstName}” to real context-aware personalization. Employees aren’t different from customers: they don’t want spam; they want relevant comms.
Challenges: privacy concerns about graphing people and work, the need to validate reasoning, and data quality (titles, records). But it’s a blueprint for rethinking audience targeting—imagine HR or IT change comms targeting precisely with explainability. It’s research, not a product yet, but communicators should watch this closely.
Neville Hobson:
Good explanation. I couldn’t find a simple one online—Amazon’s site doesn’t surface it well. Amusingly, an AI Overview explained it best, which is a good illustration of why traditional search is fading. My question: is this live at Amazon?
Shel Holtz:
I don’t think so—it’s a framework for consideration. Presumably they’d build it for Amazon first, then maybe market or license it. If you’ve worked in internal comms, you know targeting is hard; the info you need often isn’t accessible. This gives you the ability to do it—and verify it. I can’t wait to try it someday.
Neville Hobson:
Calendar marker set. Share that AI Overview with me—I’ll screenshot it.
Shel Holtz:
Copy and paste works, too.
Neville Hobson:
In recent episodes we’ve explored how the press release keeps reinventing itself. Far from dead, it moved from media distribution to SEO—and now, according to Sarah Evans (partner and head of PR at Zen Media), into a generative-AI visibility engine. In a long read, she describes testing a press release about Zen Media acquiring Optimum7, distributed via GlobeNewswire, then tracking how it was picked up not just by news outlets but by AI systems like ChatGPT, Perplexity, and Meta AI. Within six hours, ChatGPT cited it 40 times; Meta’s external agents 17 times; Perplexity Bot and Applebot twice. Evans said there were 61 documented AI mentions in total in that period.
Implications: press releases aren’t just reaching journalists or Google—they’re feeding AI systems people use to ask questions and make decisions. The key metric becomes: is it retrievable when AI is asked a critical question in our space?
We’ve covered this angle a few times—from “Is PR dead (again)?” to the reinvention of releases for algorithms—and now this: the release as a tool for persistent retrievability and authority in the age of AI. If AI engines are the new gatekeepers, how should communicators rethink writing and measurement? What do you think, Shel?
Shel Holtz:
I’m glad you’re citing Sarah Evans—she’s terrific. We should invite her on the interview show. In another post—“10 PR myths I’m seeing”—she debunks “Press releases don’t matter.” She says they matter more than ever. We’re seeing early results with releases averaging about 285 citations in ChatGPT within 30 days. That suggests LLMs treat press releases as credible sources—especially when picked up in reputable places.
She also talks structured information. Gini Dietrich recently suggested having a page on your site not linked anywhere—meant for AI crawlers—with structured/markdown versions of the content so AI can better understand and apply it. Bottom line: press releases aren’t going anywhere. Every time someone proclaims them dead, they persist. (Side rant: embargoes aren’t real unless we agreed in advance.)
Neville Hobson:
I ranted about embargoes recently too. One question: what does retrievability mean for communicators? If AI engines, not journalists or search engines, arbitrate visibility, how do PR teams measure success differently? Are AI citations more valuable than traditional pickups?
Shel Holtz:
Both are valuable. Search has declined, but not to zero—not even to 50%. People still search. Some haven’t adopted AI for search; some queries are better served by ten links than an overview. (When we were in Rome, I searched for restaurants open before 7 p.m.—classic Google links job.) Sarah Evans’s myth list also says “Choose traditional or modern PR” is false—the strongest strategies use a dual pathway. As Mitch Joel says, “and,” not “instead of.”
Neville Hobson:
Worth reading Sarah’s Substack—and the link you’ll put in the notes—to make you think.
Shel Holtz:
Absolutely. I’ll probably be doing a press release in the next week or two—can’t say more, but it’s coming. One more debate: how people actually use AI. Ethan Mollick argues people lean on AI for higher-level cognitive tasks—framing, sense-making, brainstorming—rather than just automating grunt work. Recent usage studies from OpenAI and Anthropic offer fresh data.
OpenAI’s analysis of ChatGPT usage shows augmentation—writing, editing, summarizing, brainstorming, decision support. “Asking” (decision support) has become the largest slice—aligning with “thinking partner.” Anthropic paints a different enterprise picture for Claude: businesses use it chiefly to automate workflows, coding, math-heavy tasks, document processing, and reporting pipelines. Automation exceeds augmentation; some analyses put roughly three-quarters of enterprise use in automation patterns.
Zooming out, Forbes’s overview with OpenAI, Anthropic, and Ipsos notes adoption is broadening fast, trust is uneven, and behaviors vary by context. ZDNet frames it succinctly: ChatGPT is mostly used for writing and decision support (often non-work or para-work tasks), while Claude skews toward structured enterprise automation—coding and back-office flows.
So where does that leave Mollick’s claim? Both realities are true depending on the user and context. Among general knowledge workers, AI is a thinking companion; among engineering/operations teams and API-wired apps, AI acts as an automation substrate.
Implications for communicators:
- AI is in the room at the idea stage—become editors, synthesizers, and standard-setters.
- Automation is marching into comms-adjacent workflows—govern quality, provenance, and accountability.
- Don’t pick a side; design for both—declare the “assist vs. automate” boundary, instrument the pipeline with checks and tags (see the sketch after this list), build a “thinking partner” prompt bench, and mind the labor story by narrating changes transparently.
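Here is one hedged sketch of what “instrument the pipeline with checks and tags” could mean in practice: a provenance tag that enforces the declared assist-vs-automate boundary. The schema and field names are our assumptions, not an established standard.

```python
# Hypothetical provenance tag for AI-touched content: record the mode,
# the tool, and the human reviewer, and block publication of fully
# automated output until a reviewer signs off.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Provenance:
    mode: str                      # "assist" (human-led) or "automate" (machine-led)
    model: str                     # which tool produced or touched the draft
    reviewer: str | None = None    # named human sign-off
    created: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def ready_to_publish(tag: Provenance) -> bool:
    """Policy check: automated output always needs a named reviewer."""
    if tag.mode == "automate":
        return tag.reviewer is not None
    return True

draft = Provenance(mode="automate", model="internal-llm")
print(ready_to_publish(draft))  # False until a reviewer is recorded
draft.reviewer = "comms-editor"
print(ready_to_publish(draft))  # True
```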
Neville Hobson:
Good advice. ZDNet’s split between personal and non-personal use was interesting. I’m a mixed user—lots of research that’s kind of work-related. I use ChatGPT mostly; not Claude lately. One note: I used ChatGPT for coding when I rebuilt my website on Ghost—editing theme templates with Handlebars. ChatGPT was astonishingly good—especially after GPT-5—at brainstorming workarounds and generating CSS/JS tweaks that worked perfectly on first publish. Claude also pinpointed issues when I followed the theme development instructions. For a non-coder, that was a huge confidence boost. These tools were brilliant alongside Docker and VS Code. I’m impressed.
Shel Holtz:
No question ChatGPT does very well with code; all frontier LLMs do. Claude currently tops some benchmarks (SWE-Bench, HumanEval, etc.) and is marketed heavily to developers, with strong APIs and tool integrations. OpenAI pushes ChatGPT more broadly—consumer and enterprise—so you see “what should I wear tonight?” and recipes alongside enterprise tasks. I’ve read that Gen Z uses AI to make basic day-to-day decisions—fascinating.
Neville Hobson:
The report says ChatGPT usage hit 700 million weekly users as of July. Growth is relatively faster in low- and middle-income countries. Early users were about 80% men; that share is now about 48%, and more of the active users have typically feminine first names (based on the researchers’ name-classification method). Useful metrics—but what does it mean to the average communicator? Hopefully they don’t walk away thinking “ChatGPT is only for coding.”
Shel Holtz:
Right—the point is not either/or. There are two valid use modes—collaborative and automated. Provide resources, tools, policies, guardrails, and governance so people can use both modes effectively. You can’t have a Wild West; you need standards you can support. But beware of swinging too far. I spoke to a company restricting staff to an internal AI that can’t do what NotebookLM does—hamstringing themselves. Organizations need to be pragmatic.
And that will wrap up episode 482, our long-form episode for September 2025.
Neville Hobson:
Market conditions will impact that approach, I bet. Okay.
Shel Holtz:
Right now we’re planning to record our long-form October episode on Saturday the 25th or Sunday the 26th—depends on Neville’s schedule. Either works for me. That episode will drop Monday, October 27. We’ll have short mid-week episodes in between, maybe even an interview—we’re lining up one or two. Until then, that will be a 30 for this episode of For Immediate Release.
139 episodes
Manage episode 509036375 series 1391833
We’ve all heard of AI slop by now. “Workslop” is the latest play on that term, referring to low-quality, AI-generated content in the workplace that looks professional but lacks real substance. This empty, AI-produced material often creates more work for colleagues, wasting time and hindering productivity. In the longform FIR episode for September, Neville and Shel explore the sources of workslop, how big a problem it really is, and what can be done to overcome it.
Also in this episode:
- Chris Heuer, one of the founders of the Social Media Club, is at work on a manifesto for the “H Corporation,” organizations that are human-centered. A recent online discussion set the stage for Chris’s work, which he has summarized in a post.
- Three seemingly disparate studies point to the evolution of the internal communication role.
- Researchers at Amazon have proposed a framework that can make it as easy as typing a prompt to identify a very specific audience for targeted communication.
- Communicators everywhere continue to predict the demise of the humble press release, but one public relations leader has had a very different experience.
- Anthropic and OpenAI have both released reports on how people are using their tools. They are not the same.
- In his Tech Report, Dan York looks back on TypePad, the blogging platform whose shutdown is imminent; AI-generated summaries of websites from Firefox; and Mastodon’s spin on quote posts.
Links from this episode:
- Neville’s remarks on the human-centered organization, along with Chris Heuer’s original LinkedIn post
- Building a Shared Vision: Organizations Advancing Human-Centered AI
- Defining the Human Centered Organization
- The Birth of the H-Corp
- The Effects of Enterprise Social Media on Communication Networks
- AI misinformation and the value of trusted news
- Corporate Affairs is Ripe for AI Disruption
- AI-Generated “Workslop” Is Destroying Productivity
- AI ‘Workslop’ Is Killing Productivity and Making Workers Miserable
- AI “workslop” sabotages productivity, study finds
- AI isn’t replacing your job, but ‘workslop’ may be taking it over
- workslop: bad study but excellent word
- An Explainable Natural Language Framework for Identifying and Notifying Target Audiences In Enterprise Communication
- How smart brands are delivering Netflix-level personalization with AI
- We Tested a Press Release in ChatGPT. The Results Changed Everything.
- LinkedIn post from Sarah Evans on press release performance in AI search results
- Sarah Evans’ 10 PR myths
- Ethan Mollick’s LinkedIn post about how people are using AI for work
- Here’s How People Use AI, Per OpenAI, Anthropic And Ipsos Data
- OpenAI and Anthropic studied how people use ChatGPT and Claude. One big difference emerged.
- Anthropic Finds Businesses Are Mainly Using AI to Automate Work
- How people actually use ChatGPT vs Claude – and what the differences tell us
Links from Dan York’s Tech Report
- Typepad is Shutting Down
- Vimeo to be acquired by Bending Spoons in $1.38B all-cash deal
- On Firefox for iOS, summarize a page with a shake or a tap
- Introducing quote posts
- Quoting other posts – Mastodon documentation
The next monthly, long-form episode of FIR will drop on Monday, October 27.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on [Neville’s blog](https://www.nevillehobson.io/) and [Shel’s blog](https://holtz.com/blog/).
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Shel Holtz:
Hi everybody, and welcome to episode number 482 of For Immediate Release. This is our long-form episode for September 2025. I’m Shel Holtz in Concord, California.
Neville Hobson:
And hi everyone, I’m Neville Hobson in the UK.
Shel Holtz:
As I mentioned, this is our long-form episode. That means we’ll be reporting on six topics of interest to communicators. Interestingly, I think all of them are connected either directly or indirectly to artificial intelligence. I also have Dan York here with an interesting report. You and I both have a few things to say about one of the topics that Dan is reporting on.
As always with our monthly episode, we have some housekeeping before we jump into the topics. Neville, you’re going to catch us up on the items we reported on since the last episode. And we have some comments on some of these reports. That’s an opportunity to remind everybody that we love your comments—please participate in this podcast by sharing them. It doesn’t have to be about something we reported on; you can introduce a topic. This used to happen all the time in the early days of the show—someone would say, “Why don’t you guys talk about this?” and we would, and it became the content of the show. So please leave a comment on the show notes or on LinkedIn—which is where most of our comments come from these days. People leave a thought or some feedback on our announcement of the new episode. You can also do that on Facebook in multiple places. And you can always go to the website and record a comment—there’s a button that says “Leave voicemail” and you can record a 90-second comment. Or send us an MP3 file to [email protected]. Lots of ways to participate in the show, and we really hope you will. We’ll share some of the comments we received in the last month, Neville, as you remind us what we talked about.
Neville Hobson:
Indeed. September was kind of an odd month because you were on holiday for two weeks and we didn’t do any live recording during that time, but we had a couple of short recordings tucked in our back pocket to publish in the interim. Let’s share what we did since the last monthly episode for August—that was on the 25th of August, episode 478. Our lead story explored when corporate silence backfires and how communicators can help leaders make better choices. We also discussed AI PR deepfakes and more, including Dan York’s report. That was a 90-minute show like this one is likely to be.
Episode 479 on the 1st of September—“Hacking AI Optimization vs. Doing the Hard Work.” Amid the rise of GEO, we discussed how brands are seeking workarounds to appear in AI-generated answers, but shortcuts don’t build trust. Old-school PR and marketing still matter. We got a comment on that one, right, Shel?
Shel Holtz:
We did, from Frank Diaz, who’s become one of our loyal listeners. He said, “This was my conclusion as well. Once I filtered everything…” He shared a checklist on mastering AI citation strategy for SEO on LinkedIn. We’ll include a link in the show notes for context.
Neville Hobson:
On the 9th of September we published episode 480, “Reflections on AI Ethics and the Role of Communicators.” You had already gone on holiday. This was a conversation between Sylvia Cambie—who was a guest co-host back in July when we interviewed Monsignor Paul Tighe from the Vatican about artificial intelligence—and me. We picked up where that interview left off with reflections on AI dignity, what really matters, and what caught our attention. Interestingly, that episode got more listens and downloads than the interview itself—perhaps because people were catching up after summer.
Episode 481 on the 24th of September—so you can see a nearly two-week gap there—“The M-Panic: AI Writing and Misguided Assumptions. Can Tone and Authenticity Survive AI Polish?” We dove into the “M-kerfuffle” that’s had communicators divided much of this year. You also explained how you turned 28 blog posts into a forthcoming book with AI’s help—classic AI-assisted writing in action. We had comments too, didn’t we?
Shel Holtz:
We did. Daniel Pauling wrote that the dash doesn’t solely come from training data—a point I’d made, saying the reason you see so many dashes is because the training sets include a lot of dash-heavy content. Daniel said it also comes from how generative AI is programmed to be more friendly and what it associates with friendliness. He referenced a post where he went into detail—we’ll include that link. John Cass had a thoughtful comment about how writing wasn’t rigid in the 17th century—Shakespeare even spelled his name multiple ways—arguing that language is a visual representation of speech and we should speak the language of our audience, not the textbook. He suggested anxiety around AI and writing often comes from our best writers, but human creativity is collective. I noted that Chris Penn recently wrote that AI won’t hurt creativity because creative people will keep creating. We saw that on a cruise art auction: people bidding thousands on works by young artists. The creative impulse persists. John replied that’s true, though some high-level creatives feel AI disrupts their thinking—maybe true for a few people; not everyone thinks the same way.
Neville Hobson:
Good essay-comment from John. And between all of that we recorded a new interview just before you went away—Stephanie Grober in New York, quite an authority on GEO—Generative Engine Optimization. The anchor for the conversation was: is GEO the next SEO, a passing fad, or good comms practice in disguise? We talked about what GEO means for communicators today and what to do about it. That was published on the 16th of September. So we had five podcast episodes since August—not bad, considering you were away half the month.
Shel Holtz:
Not bad at all. We are a content machine. And that machine continues with Circle of Fellows—the monthly panel among IABC Fellows. I was at sea for this one; Brad Whitworth moderated a discussion about what it means for communicators that hybrid appears to be winning as the preferred workplace configuration. Priya Bates, Ritzi Ronquillo, and Angela Sinickas participated. It’s up on the FIR Podcast Network.
The October episode sounds great: number 121. I’ll moderate at noon Eastern on Thursday, October 23. The topic: evolving roles and strategic goals. The description from Anna Willey: communicators are adapting alongside new tools and channels; strategic goals must align with organizational objectives as they impact brand reputation, enhance internal communications, and address ongoing change. Panelists so far: Lori Dawkins, Amanda Hamilton-Atwell, and Mike Klein, with a fourth to be named. You can join live and participate or catch the podcast. That wraps up our housekeeping—we managed to do it in less than 13 minutes. Right after this, we’ll be back with our Topics of the Month.
Neville Hobson:
Our first topic for September: Throughout this year we’ve returned again and again to one central theme—AI must be about people, not just machines. Whether we were talking about the future of work, managers’ roles in an AI-enabled world, or the Vatican’s perspective on AI ethics (“the wisdom of the heart” in our July interview with Monsignor Paul Tighe), the question has been the same: How do we ensure technology serves humanity rather than the other way around?
That’s the context for Chris Heuer’s latest work. Chris is an internet pioneer and serial entrepreneur—many of you will know him from the Social Media Club nonprofit he founded in the early days of social media, which reached 350 cities globally. Building on an online brainstorm Chris led on September 17—more than 50 people connected worldwide to discuss defining human-centered organizations, which I joined and wrote about on LinkedIn—Chris has published a follow-up titled “The Birth of the H-Corp” (H for humanity). It’s a bold attempt to define what organizations owe humanity in the age of AI.
The central concern: efficiency has become the dominant corporate narrative. He cites Shopify’s CEO saying managers must prove why AI can’t do the job before hiring another human. We referenced that in an FIR episode this summer. That kind of AI-first thinking risks eroding human dignity. Chris argues for an alternative: organizations must enable humans rather than replace them, reinvest AI gains back into people, and make empathy and ethics structural rather than optional.
What’s powerful is the recognition of tensions—for example, how AI can hollow out junior roles and undermine leadership pipelines. Participants flagged cultural sovereignty—the idea that AI shouldn’t just reflect Silicon Valley’s worldview but the diversity of human society. Chris’s goal is to draft an H-Corp manifesto later this year—likely November—likening this to the naming moment of social media: a concept that crystallizes shared ambitions and sparks a broader movement. It won’t be perfect, but it could serve as a north star for organizations that want to put human flourishing at the center of AI adoption.
For communicators, this is an important conversation: How do we frame the internal narrative so AI isn’t just about productivity and cost-cutting, but about augmenting human potential? How do we give shape to something like H-Corp so it doesn’t remain an ideal but becomes practical reality? It’s not about resisting AI or slowing progress; it’s about making deliberate choices so organizations put people at the center of change. Will communicators, leaders, and organizations seize the opportunity to shape AI for human flourishing—or let the technology shape us by default? Could H-Corp become a rallying concept as ESG or CSR did—or will it get diluted into corporate sloganeering? What role should communicators play to keep it real and practical? I bet you’ve got ideas.
Shel Holtz:
Of course—otherwise why would we be here? First, I think AI is less responsible for this situation than the general nature of business, especially in a capitalist society. I’m not going to get philosophical about capitalism—I’m a proud capitalist. I like making money and would like to make more. If the goal of an organization—especially a public corporation with fiduciary responsibilities—is to earn a return for investors, then when AI comes along it makes complete sense that leaders ask, “How does this help us maximize returns?” Reducing costs and staff and having a machine work 24/7—of course that’s where leaders go first. It doesn’t mean that’s what organizations should do, but I get why they do it.
Also, with any new technology, the first thing we do is what we were already doing—but better. The earliest uses no one had thought of come later. I don’t think generative AI has been around long enough for that next phase yet. We’re still using it to do what we’ve been doing; later we’ll discover new, more human-centered applications. For some organizations that will come; for others, it won’t—they’ll stay focused on maximizing profit.
Another issue: most organizations aren’t tackling AI strategically in its early days. There’s ample data showing people aren’t looking at this holistically. I was just talking at my company about the entry-level construction job—project engineer. Much of that role may be automated. Submittals, for example, take time and expertise; AI could produce them in minutes with the right inputs. Does that mean fewer project engineers? Our conversation was: how do we redefine the role so they still learn what they need to move up—project manager, project director, superintendent, whatever? The job won’t be the same, but it remains foundational. Same in communications: the entry-level comms role won’t be the same job in five years. Does the job go away—or do we rethink it? Smart organizations will rethink it—that’s a humanistic approach because we’re not dispensing with the role; we’re redefining it.
Neville Hobson:
It’s a big topic. I don’t disagree that some companies will shut their eyes to anything beyond “we use this to make money.” But the conversation—at the heart of what Chris is talking about—is helping organizations see the people. Language matters, too—how we talk about “replacing” versus “augmenting” can devalue human work.
Another argument from the brainstorm: human-centered talk often defaults to privileged voices and excludes marginalized groups. There’s a perception that there’s one version of AI—English, global north. What about the global south? Some countries have launched Spanish-language chatbots relevant to their populations; ChatGPT may not be the relevant tool for them. We should stimulate conversations in organizations: “Yes, but think about this as well.” That can create discord, but it’s necessary.
This idea is worth promoting: don’t devalue people. Put them first. Yes, aim for profit—but how do we help our people help us make that profit? People suffer in change; they’re often last in line when tech is deployed. Let’s bring empathy back into organizations. The landscape is changing at light speed—new capabilities, “pulse” updates to mobiles, etc. I think Chris Heuer’s offering could become a rallying concept. With influential voices like the Vatican and others globally, maybe it gathers steam.
Shel Holtz:
It could. My skepticism is about incentives. Leaders are obliged to produce maximum returns. How do we connect the dots so they see something in this change aligned with their goals? That’s what I want to see in the manifesto—most manifestos dwell on what’s wrong and not how to fix it.
Neville Hobson:
Right—so the H-Corp manifesto expected in November becomes the template to address those questions: how do we include X, Y, Z? I sensed a groundswell of willingness on the Sept. 17 call. It’s a small group; getting the word out may persuade others to get involved. You’ve got to start somewhere. This could be a rallying concept.
Shel Holtz:
I’ll predict that in November this will be a theme for one of our FIR episodes.
Neville Hobson:
Maybe we interview someone—perhaps Chris.
Shel Holtz:
Could be Chris.
Neville Hobson:
If this strikes a chord, go to the Humanizing AI Substack (link in show notes), read Chris’s post introducing the H-Corp manifesto, and see if you want to get involved. It’s open—share ideas and see what happens.
Shel Holtz:
One thread running through much of our coverage is how digital tech is reshaping organizational communication, minute by minute. Three new reports over the last couple of weeks are fascinating on their own; together they create a big picture communicators must grapple with.
First: a major new study of enterprise social media (internal platforms like Slack, Teams, Viva Engage). Researchers studied 99 organizations adopting Microsoft Viva Engage (which grew out of Yammer). Enterprise social media made communication networks denser, more connected, and more democratic. Employees didn’t just talk to the same people—they formed new ties, especially weak ties across teams that spark ideas. Leaders and employees connected more directly, and influence was more distributed. Viva Engage, unlike siloed Teams channels, enables more open conversations around broader themes. This change breaks down silos and fosters innovation—critical when hybrid/remote work can leave people isolated.
Second: Boston Consulting Group estimates more than 80% of corporate affairs work, including communication, could be augmented or even automated by AI. The biggest gains come when organizations redesign processes around AI—not just bolt it on. For communicators: think proactively not just about writing faster, but re-imagining workflows with AI in the mix.
Third: VoxEU points to a serious risk: as AI makes it easier and cheaper to produce plausible misinformation, the value of trusted, credible information goes up—externally and internally. If employees can’t tell what’s credible—about competitors, market conditions, or even their own company—their decision-making is compromised. If misinformation creeps into internal channels, it can spread quickly through the very networks making us more connected.
Put together: enterprise social media can make networks open and innovative, but they’re vulnerable if we don’t ensure accuracy and trust. Hovering over that is BCG’s reminder that AI will disrupt a huge portion of what communicators do. The challenge: if we don’t take responsibility for credibility and quality, AI will amplify misinformation and mistrust. The opportunity: use AI thoughtfully to improve connection and personalization while leaning into our role as stewards of trusted information. Connection without credibility is fragile; credibility without connection is limited. Our job is to deliver both.
Neville Hobson:
Big challenge. The VoxEU report on AI misinformation and trusted news stood out. One interesting finding: once misinformation was identified, people didn’t disengage—they consumed more. Treated individuals were more likely to maintain subscriptions months later. The report’s conclusion: when the threat of misinformation becomes salient, the value of credible news increases. How do you put that in place inside an organization?
Shel Holtz:
I remember my first corporate comms job at ARCO. Our weekly employee paper had bylines so employees knew who to call with story ideas. Bylines also establish credible sources—names employees learn to trust. As networks flood with information, people will gravitate to known credible voices. The same is true externally with content marketing: put a person behind the content so audiences recognize trustworthy outputs. We’ll need to build credibility with our reporters, thought leaders, and SMEs—internally and externally—so they become beacons of trust amid misinformation.
Neville Hobson:
VoxEU (focused on media) says if outlets maintain trust, the rise of synthetic content becomes an opportunity: as trust grows scarcer, its value rises, and audiences may be more willing to pay. Translate internally: employees won’t “pay,” but they will give attention to reliable, trustworthy writing—especially when the author is identified and credible. That seems like common sense.
Shel Holtz:
Agreed. Some employees don’t care what’s going on; they just do their job and go home. But if they’re overwhelmed with plausible-sounding contradictions, internal communications can become the trusted voice. People who didn’t pay attention before may start following channels and authors they’ve come to trust—if we consistently produce credible content.
Neville Hobson:
One line from VoxEU’s conclusion fits perfectly: the threshold to trustworthiness rises with the volume and sophistication of misinformation, meaning media outlets can’t stand still; they must continually invest in helping readers distinguish fact from fabrication, keeping pace with AI. Fits internally, too.
Shel Holtz:
Combine that with the other two reports: use enterprise social networks as channels for credible information and conversation, and use the BCG disruption to redefine our work so our time remains valuable even as 80% of tasks change.
Neville Hobson:
Okay, another buzzword: “workslop,” meaning content that looks polished but is shallow or misleading, created with AI and dumped on colleagues to sort out. Harvard Business Review argued workslop is a major reason companies aren’t seeing ROI from AI, with about 40% of employees saying they’re dealing with it. But there’s a critique in Pivot to AI saying the data came from an unfiltered BetterUp survey, calling HBR’s article an unlabeled advertorial that shifts blame onto workers while pitching enterprise tools.
So two threads: “workslop” is a brilliant label for a real problem, but some coverage may itself be workslop. Questions: Is workslop a real productivity killer or just a catchy buzzword? What responsibility lies with leadership vs. employees? And how should we treat research that blurs into marketing?
Shel Holtz:
I think it’s real, though I don’t know that it’s as dire as painted. The first time I saw “workslop” was from Ethan Mollick on LinkedIn. He echoed Pivot to AI’s point: the term can shift blame onto employees told to “be more productive with AI” without leaders doing the hard work of rethinking processes or defining good use. Poor output becomes “AI’s fault,” and that’s not leadership. For communicators, we should advocate responsible AI use from the top down, not just coach employees to cope.
Also, this is new-tech déjà vu. Remember desktop publishing? Suddenly every department cranked out a newsletter, because they could. It created information overload until companies set guidelines. Today, many orgs haven’t offered training, guidance, or frameworks for AI. People are experimenting, which is good, but without prompt skills or evaluation skills, they’ll create workslop. We’ll see a lot of it until organizations get strategic about AI and define expectations and verification. We even did an episode on “verification” becoming a role: someone checking outputs for accuracy and credibility. We’ll see if that shakes out, but that’s where workslop comes from. I don’t think it’s a long-term problem; it will resolve, just as the glut of departmental newsletters eventually did.
Neville Hobson:
How do we address the eruption of AI-generated content? Even if it isn’t outright wrong, it’s too much to read—hurting productivity.
Shel Holtz:
Organizations need a strategic approach. Our CEO often says there will be a day the switch flips; if you’re not ready, you’re irrelevant. The orgs allowing prodigious workslop haven’t reached that conclusion, or haven’t acted on it. They need governance, training, and clear “assist vs. automate” boundaries.
Neville Hobson:
Thanks very much, Dan—terrific report. TypePad caught my attention. I was on TypePad from 2004, moved to WordPress in 2006, kept TypePad as an archive until 2021. Interesting—and urgent—to hear it’s ending. Migration is easy except images; that’s not trivial. I know three people still on TypePad with no idea the door’s about to shut. Good callout.
Shel Holtz:
I was never a TypePad user, but many early influential blogs were there—Heather Armstrong’s Dooce, PostSecret, Freakonomics before it became a podcast. We’ve been doing this long enough to cover birth, life, and death.
Neville Hobson:
We have. Dan also mentioned Mastodon introducing quote posts, which is probably a big deal. I’m not hugely active there. What do you think?
Shel Holtz:
I still have a Mastodon instance—interested to dig in. I was more intrigued by the Vimeo item. They’ve struggled to define a niche in YouTube’s world—often pitching private, high-quality business video hosting. I still get pitched. But one constant headache in internal comms is getting the right message to the right people. If you’ve ever sent a company-wide update because you weren’t sure who needed it—or spent hours hunting down the right list—you know the pain.
That’s why a research paper from Amazon’s Reliability and Maintenance Engineering team caught my eye: “An explainable natural language framework for identifying and notifying target audiences in enterprise communication.” In plain terms: a system that lets you ask in natural language who should get your message, then the AI not only finds the audience but explains how it got there.
Example: “Reach all maintenance technicians working with VendorX’s conveyor belts at European sites.” The system parses that, queries a knowledge graph of equipment, people, and locations, and returns the audience—with reasoning (“here’s how we matched VendorX… here’s how we identified European facilities…”). Explainability is critical; in safety contexts you can’t trust a black box.
Implications: we struggle with over- and under-communication. We over-broadcast because we don’t trust lists; we miss people who need the info. A framework like this could make targeting as easy as writing a sentence—with transparency you can trust. It mirrors marketing’s move from “Hi, {FirstName}” to real context-aware personalization. Employees aren’t different from customers: they don’t want spam; they want relevant comms.
Challenges: privacy concerns about graphing people and work, the need to validate reasoning, and data quality (titles, records). But it’s a blueprint for rethinking audience targeting—imagine HR or IT change comms targeting precisely with explainability. It’s research, not a product yet, but communicators should watch this closely.
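For a sense of the mechanics, here is a toy sketch of the pattern the paper describes: a natural-language request is parsed into structured filters, matched against an employee directory, and returned along with an explanation trace. All the names, fields, and the keyword “parser” below are hypothetical stand-ins; the actual framework uses an LLM and a knowledge graph.

```python
# Toy sketch of explainable audience targeting: parse a request into
# filters, match against employee records, and return the reasoning.
# All data and the rule-based "parser" are hypothetical stand-ins for
# the paper's LLM + knowledge-graph approach.
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    role: str
    vendor: str
    region: str

DIRECTORY = [
    Employee("M. Rossi", "maintenance technician", "VendorX", "Europe"),
    Employee("J. Smith", "maintenance technician", "VendorY", "Europe"),
    Employee("A. Chen", "plant manager", "VendorX", "Asia"),
]

def parse_request(text: str) -> dict:
    """Naive keyword 'parser' standing in for an LLM."""
    t = text.lower()
    filters = {}
    if "maintenance technician" in t:
        filters["role"] = "maintenance technician"
    if "vendorx" in t:
        filters["vendor"] = "VendorX"
    if "europe" in t:
        filters["region"] = "Europe"
    return filters

def find_audience(text: str):
    filters = parse_request(text)
    matches, trace = [], []
    for emp in DIRECTORY:
        if all(getattr(emp, k) == v for k, v in filters.items()):
            matches.append(emp.name)
            trace.append(f"{emp.name}: matched {filters}")
    return matches, trace  # the audience plus the explanation trace

audience, trace = find_audience(
    "Reach all maintenance technicians working with VendorX's "
    "conveyor belts at European sites.")
print(audience)          # ['M. Rossi']
print("\n".join(trace))  # shows *why* each person was included
```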
Neville Hobson:
Good explanation. I couldn’t find a simple one online—Amazon’s site doesn’t surface it well. Amusingly, an AI Overview explained it best, which is a good illustration of why traditional search is fading. My question: is this live at Amazon?
Shel Holtz:
I don’t think so—it’s a framework for consideration. Presumably they’d build it for Amazon first, then maybe market or license it. If you’ve worked in internal comms, you know targeting is hard; the info you need often isn’t accessible. This gives you the ability to do it—and verify it. I can’t wait to try it someday.
Neville Hobson:
Calendar marker set. Share that AI Overview with me—I’ll screenshot it.
Shel Holtz:
Copy and paste works, too.
Neville Hobson:
In recent episodes we’ve explored how the press release keeps reinventing itself. Far from dead, it moved from media distribution to SEO—and now, according to Sarah Evans (partner and head of PR at Zen Media), into a generative-AI visibility engine. In a long read, she describes testing a press release about Zen Media acquiring Optimum7, distributed via GlobeNewswire, then tracking how it was picked up not just by news outlets but by AI systems like ChatGPT, Perplexity, and Meta AI. Within six hours, ChatGPT cited it 40 times; Meta’s external agents 17 times; PerplexityBot and Applebot twice. Evans said there were 61 documented AI mentions in total in that period.
Implications: press releases aren’t just reaching journalists or Google—they’re feeding AI systems people use to ask questions and make decisions. The key metric becomes: is it retrievable when AI is asked a critical question in our space?
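One practical way to start gauging that kind of retrievability is simply to watch your own server logs for AI crawlers. A minimal sketch follows; the bot names reflect publicly documented user agents, but verify them against each vendor’s current documentation, and the log path is hypothetical.

```python
# Minimal sketch: count hits from known AI crawlers in a web server
# access log. Bot names reflect publicly documented user agents;
# verify against each vendor's docs. The log path is hypothetical.
from collections import Counter

AI_BOTS = ["GPTBot", "ChatGPT-User", "OAI-SearchBot",
           "PerplexityBot", "Applebot", "meta-externalagent"]

hits = Counter()
with open("/var/log/nginx/access.log", encoding="utf-8") as log:
    for line in log:
        for bot in AI_BOTS:
            if bot.lower() in line.lower():
                hits[bot] += 1

for bot, count in hits.most_common():
    print(f"{bot}: {count} requests")
```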
We’ve covered this angle a few times—from “Is PR dead (again)?” to the reinvention of releases for algorithms—and now this: the release as a tool for persistent retrievability and authority in the age of AI. If AI engines are the new gatekeepers, how should communicators rethink writing and measurement? What do you think, Shel?
Shel Holtz:
I’m glad you’re citing Sarah Evans—she’s terrific. We should invite her on the interview show. In another post—“10 PR myths I’m seeing”—she debunks “Press releases don’t matter.” She says they matter more than ever. We’re seeing early results with releases averaging about 285 citations in ChatGPT within 30 days. That suggests LLMs treat press releases as credible sources—especially when picked up in reputable places.
She also talks structured information. Gini Dietrich recently suggested having a page on your site not linked anywhere—meant for AI crawlers—with structured/markdown versions of the content so AI can better understand and apply it. Bottom line: press releases aren’t going anywhere. Every time someone proclaims them dead, they persist. (Side rant: embargoes aren’t real unless we agreed in advance.)
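As a rough illustration of that structured-content idea, here is a sketch that renders a release’s key facts as schema.org-style JSON-LD for a crawler-oriented companion page. The fields and values are hypothetical, and this is one interpretation of the suggestion, not a prescribed recipe.

```python
# Sketch: render the facts of a press release as schema.org-style
# JSON-LD for a machine-readable companion page. All fields and
# values are hypothetical; adapt to your own release.
import json

release = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "ExampleCo Acquires ExampleTarget",
    "datePublished": "2025-09-29",
    "author": {"@type": "Organization", "name": "ExampleCo"},
    "about": ["acquisition", "marketing technology"],
    "articleBody": "ExampleCo today announced it has acquired ...",
}

print(json.dumps(release, indent=2))
```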
Neville Hobson:
I ranted about embargoes recently too. One question: what does retrievability mean for communicators? If AI engines, not journalists or search engines, arbitrate visibility, how do PR teams measure success differently? Are AI citations more valuable than traditional pickups?
Shel Holtz:
Both are valuable. Search has declined, but not to zero—not even to 50%. People still search. Some haven’t adopted AI for search; some queries are better served by ten links than an overview. (When we were in Rome, I searched for restaurants open before 7 p.m.—classic Google links job.) Sarah Evans’s myth list also says “Choose traditional or modern PR” is false—the strongest strategies use a dual pathway. As Mitch Joel says, “and,” not “instead of.”
Neville Hobson:
Worth reading Sarah’s Substack—and the link you’ll put in the notes—to make you think.
Shel Holtz:
Absolutely. I’ll probably be doing a press release in the next week or two—can’t say more, but it’s coming. One more debate: how people actually use AI. Ethan Mollick argues people lean on AI for higher-level cognitive tasks—framing, sense-making, brainstorming—rather than just automating grunt work. Recent usage studies from OpenAI and Anthropic offer fresh data.
OpenAI’s analysis of ChatGPT usage shows augmentation—writing, editing, summarizing, brainstorming, decision support. “Asking” (decision support) has become the largest slice—aligning with “thinking partner.” Anthropic paints a different enterprise picture for Claude: businesses use it chiefly to automate workflows, coding, math-heavy tasks, document processing, and reporting pipelines. Automation exceeds augmentation; some quantify ~three-quarters of enterprise use as automation patterns.
Zooming out, Forbes’s overview with OpenAI, Anthropic, and Ipsos notes adoption is broadening fast, trust is uneven, and behaviors vary by context. ZDNet frames it succinctly: ChatGPT is mostly used for writing and decision support (often non-work or para-work tasks), while Claude skews toward structured enterprise automation—coding and back-office flows.
So where does that leave Mollick’s claim? Both realities are true depending on the user and context. Among general knowledge workers, AI is a thinking companion; among engineering/operations teams and API-wired apps, AI acts as an automation substrate.
Implications for communicators:
- AI is in the room at the idea stage: become editors, synthesizers, and standard-setters.
- Automation is marching into comms-adjacent workflows: govern quality, provenance, and accountability.
- Don’t pick a side; design for both. Declare the “assist vs. automate” boundary, instrument the pipeline with checks and tags (a minimal tagging sketch follows this list), build a “thinking partner” prompt bench, and mind the labor story by narrating changes transparently.
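One lightweight way to instrument that pipeline is to attach provenance metadata to every AI-assisted artifact and gate publication on review. A minimal sketch, with hypothetical field names and an invented policy:

```python
# Minimal sketch: attach provenance tags to AI-assisted content so
# reviewers can see where it sits on the assist-vs-automate boundary.
# Field names and the gating policy are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Provenance:
    author: str
    ai_tool: str | None   # None for fully human work
    mode: str              # "assist" or "automate"
    human_reviewed: bool
    created: str = field(default_factory=lambda:
                         datetime.now(timezone.utc).isoformat())

def release_gate(p: Provenance) -> bool:
    """Simple policy: anything automated must be human-reviewed."""
    return not (p.mode == "automate" and not p.human_reviewed)

draft = Provenance(author="comms-team", ai_tool="LLM drafting assistant",
                   mode="automate", human_reviewed=False)
print(release_gate(draft))  # False -- blocked until a human signs off
```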
Neville Hobson:
Good advice. ZDNet’s split between personal and non-personal use was interesting. I’m a mixed user, doing lots of research that’s kind of work-related. I use ChatGPT mostly; I haven’t used Claude much lately. One note: I used ChatGPT for coding when I rebuilt my website on Ghost, editing theme templates with Handlebars. ChatGPT was astonishingly good, especially after GPT-5, at brainstorming workarounds and generating CSS/JS tweaks that worked perfectly on first publish. Claude also pinpointed issues following theme dev instructions. For a non-coder, that was a huge confidence boost. These tools were brilliant alongside Docker and VS Code. I’m impressed.
Shel Holtz:
No question ChatGPT does very well with code; all frontier LLMs do. Claude currently tops some benchmarks (SWE-Bench, HumanEval, etc.) and is marketed heavily to developers, with strong APIs and tool integrations. OpenAI pushes ChatGPT more broadly—consumer and enterprise—so you see “what should I wear tonight?” and recipes alongside enterprise tasks. I’ve read that Gen Z uses AI to make basic day-to-day decisions—fascinating.
Neville Hobson:
The report says ChatGPT usage hit 700 million weekly users as of July. Growth is relatively faster in low- and middle-income countries. Early users were roughly 80% male; that share is now about 48%, with slightly more active users having typically feminine first names (the study classified users by first name). Useful metrics, but what does it mean to the average communicator? Hopefully they don’t walk away thinking “ChatGPT is only for coding.”
Shel Holtz:
Right—the point is not either/or. There are two valid use modes—collaborative and automated. Provide resources, tools, policies, guardrails, and governance so people can use both modes effectively. You can’t have a Wild West; you need standards you can support. But beware of swinging too far. I spoke to a company restricting staff to an internal AI that can’t do what NotebookLM does—hamstringing themselves. Organizations need to be pragmatic.
And that will wrap up episode 482, our long-form episode for September 2025.
Neville Hobson:
Market conditions will impact that approach, I bet. Okay.
Shel Holtz:
Right now we’re planning to record our long-form October episode on Saturday the 25th or Sunday the 26th—depends on Neville’s schedule. Either works for me. That episode will drop Monday, October 27. We’ll have short mid-week episodes in between, maybe even an interview—we’re lining up one or two. Until then, that will be a 30 for this episode of For Immediate Release.