Content provided by Larry Swanson. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Larry Swanson or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
Jeff Eaton: Content Observability in Complex Systems – Episode 212
Jeff Eaton

Modern content systems are complex and abstract, presenting problems for managers who want to understand how their content is performing. At Autogram, Jeff Eaton and Karen McGrane have developed a content observability framework to address this complexity. Their framework evaluates the composition, quality, health, and effectiveness of content programs to help enterprises measure the return on their content investment.

We talked about:
- his work at Autogram, the consultancy that he and Karen McGrane operate
- his high-level take on the notion of content observability
- how the growing complexity of content systems drives the need for content observability
- how content observability connects with content strategy
- the inadequacy of current analytics and other tooling to permit true content observations
- the role of content intent in discerning content performance
- the content ecosystem insight that led to their explorations in content observability
- the four pillars of Autogram's content observability framework:
  - composition - the make-up of your content assets
  - quality - organizational standards, regulatory compliance, voice and tone, etc.
  - health - is the system working (regardless of the quality of the elements in it)
  - effectiveness - is the content achieving the intended outcomes
- the ability within the framework to account for different content intentions in order to evaluate the ROI of the whole system
- some of the inspiration for his content observability work
- the talk that he and Karen are giving on content observability at the 2025 IA Conference

Jeff's bio

Jeff helps large organizations understand, model, and manage their content. Whether he’s fixing problems with CMS architecture or editorial workflow, his solutions sit in the overlap between design, communications, and technology.
Connect with Jeff and Autogram online

LinkedIn
Bluesky
Eaton, FYI
Autogram

Video

Here’s the video version of our conversation: https://youtu.be/HMm5UhDKQiY

Podcast intro transcript

This is the Content Strategy Insights podcast, episode number 212. As content systems have become more complicated and abstract, understanding the effectiveness of your content efforts has become a real challenge. At Autogram, Jeff Eaton and his business partner Karen McGrane routinely work on very complex content projects. To help their clients understand the impact of their content, they have developed an observability framework that measures the composition, quality, health, and effectiveness of their content programs.

Interview transcript

Larry: Hi everyone. Welcome to episode number 212 of the Content Strategy Insights podcast. I am really delighted to welcome back to the show Jeff Eaton. Jeff, I think, is one of the few three-time guests I've had. Maybe Preston So was the other. Might have had one other, but welcome back, Jeff. Oh, and for folks who don't know, Jeff is a partner at Autogram, the legendary consultancy. And it's safe to say he's probably the most famous, infamous, renowned content nerd out there. Welcome back, Jeff.

Jeff: Well, it's a pleasure to be here. Always fun to come and talk shop and exchange news about what wild stuff we've all been working on and thinking about. It's great to be here.

Larry: The reason I wanted to get you back on: we were talking a few weeks ago about this notion of content observability, which immediately to me was like, yeah, thank God. Let's do that. And thank God Jeff is thinking about it. It's a pretty well-known concept in the tech world, especially in DevOps and those kinds of worlds.
But I think to a lot of our listeners it might not be a familiar concept, so can you walk through the notion of observability and its application as you see it in the content world?

Jeff: There are a lot of different tactical things that we tend to do in the content world to measure what we've got or what's going on, whether it's auditing, inventories, analytics, stuff like that. But the concept of observability is a label that became popular in the software engineering world, particularly as organizations started depending more and more on large numbers of different systems. Like, oh, well, our company's web presence isn't just the server that runs the website, but in fact it's these three databases and these four services that we subscribe to, that our servers talk to. And there's the front-end stuff and there's this and this and this. Increasingly, the web and digital infrastructure for companies has gotten complicated enough that it's really a system of different moving parts, each of which has its own health checks and monitors and things like that.

Jeff: And observability really emerged as an umbrella term for this idea of building a system that you can monitor in order to see: what are the different moving parts of the system? How healthy is each one? Is something going wrong? All of that is with an eye toward giving yourself good feedback mechanisms for the operation and the health of the system as a whole, and its individual parts, so that you're not left saying, well, something's wrong, somebody should go quote "debug" it or whatever.
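The per-component monitoring Jeff describes can be sketched concretely. Here is a minimal illustration in Python; the service names, probe functions, and thresholds are hypothetical, not from the episode:

```python
# A minimal sketch of per-component health checks: each moving part of
# a system reports its own status, and a monitor aggregates them so
# "something's wrong" becomes "this part is wrong."
# All service names and thresholds below are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class HealthCheck:
    name: str
    probe: Callable[[], float]   # returns latency in seconds, raises on failure
    slow_threshold: float        # boundary for "slower than usual"

def run_checks(checks: list[HealthCheck]) -> dict[str, str]:
    """Return a status per component: ok, slow, or down."""
    report = {}
    for check in checks:
        try:
            latency = check.probe()
            report[check.name] = "slow" if latency > check.slow_threshold else "ok"
        except Exception:
            report[check.name] = "down"
    return report

# Example: three hypothetical "moving parts" -- two healthy, one degraded.
checks = [
    HealthCheck("search-index", lambda: 0.04, slow_threshold=0.5),
    HealthCheck("cms-api",      lambda: 1.20, slow_threshold=0.5),
    HealthCheck("cdn",          lambda: 0.01, slow_threshold=0.5),
]
print(run_checks(checks))  # {'search-index': 'ok', 'cms-api': 'slow', 'cdn': 'ok'}
```

The point of the aggregate report is exactly the feedback mechanism Jeff mentions: you can see which part is unhealthy without anyone having to go "debug" the whole system.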
With a system of that complexity, and as those systems have become more and more central to a lot of businesses, that proactive approach means asking: how do we keep an eye on how everything's doing, and make that part of our day-to-day processes for knowing what's an emergency, or when we should change the oil on the car, so to speak, versus just waiting for it to break down on the side of the road?

Jeff: That oversimplifies things a lot, but it has been a significant theme in the software engineering world for quite some time. And it operates both at a strategic level, like, what services do we need to monitor everything? But also at a detailed tactical level, with questions like, well, how do we build those individual pieces such that we can monitor them effectively? That doesn't work if it's just a black box that either works or fails, and who knows? The idea is being able to reach in and say: how is this piece working? What's it doing? Is it slower than usual? Does that matter, yes or no? Those are the kinds of questions that software observability needs to concern itself with. And in the world of content and digital publishing, there's a lot of overlap. There's adjacency, because that kind of complex system we just talked about is how a lot of digital content gets published.

Jeff: It's going through systems like that, but the idea of the content itself having that kind of observability framework isn't something that has really been there. And I think we've been feeling the pain of that for a long time on different fronts, but haven't necessarily had a word to put to it. I can't remember where I picked it up, but a while ago I was watching someone talk about the concept of strategy and how all strategy is a theory. It's a story about what we believe is going to happen if we do X, and why we think Y is going to happen.
Because it's a story we have about, okay, if we do these things, here's how the ecosystem works, or here's how our customer relationships work, so if we do A, then because of B and C, we're going to get D on the other side of that. And you can do all sorts of stuff with that, but at its heart a strategy is that kind of story you have about what's going to happen, the cause and effect.

Jeff: The problem is, for that to work, you need to be able to check back and figure out: okay, was that story true? Was that an accurate view of the world? Because if it's not, we're doing the wrong stuff, we're making the wrong content, we're publishing it in the wrong channels, we're not getting it to the right people, or they're liking it but it's not causing the behaviors that we had hoped it would, or grabbing them in the way that we anticipated. And oftentimes it's a long, long feedback cycle before we really see whether our theories are true or not. And even then it's often with rough proxy metrics that aren't really telling us whether our theory of the world is true or not.

Larry: That's what I was going to ask about, because all we've ever had as content people is page-level delivery metrics, and it's like, oh, for crying out loud. But what you just said, I want to-

Jeff: Views, how many views, and what's the bounce rate.

Larry: Like page views. But I love the way you described this as, hey, does this story check out? And I'm picturing a magazine fact-checker looking at your strategic narrative and going, that doesn't really check out. And so that gets into-

Jeff: We thought long-form video content was going to really be a killer for us. How did that work out? Did that do what we thought it would?

Larry: And you can't do that with Google Analytics, right?

Jeff: Well, Google Analytics is incredibly flexible.
If you take the time to go in and set up all kinds of custom triggers and events that you're tracking, you can track all sorts of things. But out of the box, Google Analytics is set up to track page views and bounce rate. And if you've got commerce stuff wired up, you can track conversions in very specific ways. But those are very, very tailored kinds of conceptual questions. Bounce rate is basically: when someone comes to the site, do they stick around? What are the odds that they'll come look at what we've got and leave? And this was the germ for us at Autogram of what I ended up developing into this bigger concept of observability. The classic bounce rate question was, well, what about wayfinding content? The stuff that you make, if you build a large repository of information,
…
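The wayfinding example points at a theme from the episode summary, the role of content intent in judging performance: the same raw signal can mean opposite things on different kinds of pages. A hedged sketch of that idea, with intent labels and thresholds that are purely illustrative and not Autogram's actual framework:

```python
# Illustrative sketch, not Autogram's framework: a "short visit, quick
# exit" is a failure for an article, but a success for wayfinding
# content, whose whole job is to route people onward quickly.
# The intent labels and the 30-second threshold are hypothetical.

def evaluate_visit(intent: str, seconds_on_page: float, clicked_through: bool) -> str:
    """Judge a single page visit against the page's declared intent."""
    if intent == "wayfinding":
        # Success = the visitor found a route onward.
        return "success" if clicked_through else "failure"
    if intent == "article":
        # Success = the visitor actually spent time with the content.
        return "success" if seconds_on_page >= 30 else "failure"
    return "unknown"

# Same visitor behavior, opposite verdicts, because the intent differs:
print(evaluate_visit("wayfinding", 5, clicked_through=True))   # success
print(evaluate_visit("article", 5, clicked_through=False))     # failure
```

A page-level metric like bounce rate collapses both cases into one number, which is exactly the gap an intent-aware view of content performance would address.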
134 episodes