Brian Kropp on AI adoption, intrinsic incentives, identifying pain points, and organizational redesign (AC Ep17)
“If you’re not moving quickly to get these ideas implemented, your smaller, more agile competitors are.”
–Brian Kropp

About Brian Kropp
Brian Kropp is President of Growth at World 50 Group. His previous roles include Managing Director at Accenture, Chief of HR Research at Gartner, and Practice Leader at CEB. His work has been extensively featured in the media, including The Washington Post, NPR, Harvard Business Review, and Quartz.
What you will learn
Driving organizational performance through AI adoption
Understanding executive expectations versus actual results in AI performance impact
Strategies for creating effective AI adoption incentives within organizations
The importance of designing organizations for AI integration with a focus on risk management
Middle management’s evolving role in AI-rich environments
Redefining organizational structures to support AI and humans in tandem
Building a culture that encourages AI experimentation
Empowering leaders to drive AI adoption through innovative practices
Leveraging AI-native employees to help leaders learn
Learning from case studies of successful AI integration
Episode Resources
Transcript
Ross Dawson: Brian, it’s wonderful to have you on the show.
Brian Kropp: Thanks for having me, Ross. Really appreciate it.
Ross: So you’ve been doing a lot of work for a long time in driving organizational performance. These are perennial issues, but there’s this little thing called AI, which has come along lately and is changing everything.
Brian: You might have heard of it somewhere. I’m not sure if you’ve been alive or awake for the last couple of years, but you might have heard about it.
Ross: Yeah, so we were just chatting before, and you were saying the pretty obvious thing: okay, we’ve got AI, but it’s only useful when it actually gets used. We need to drive the adoption. These are humans—humans who are using AI and working together to drive the performance of the organization. So I’d love to hear a big-picture frame of what you’re seeing in how we drive the productive use of AI in organizations.
Brian: I think a good starting point is actually to try to take a step back and understand what is the expectation that executive senior leaders have about the benefit of these sorts of tools.
Now, to be honest, nobody knows exactly what the final benefit is going to be. There is definitely guesswork involved. Different people have different expectations and all sorts of different viewpoints, so the exact numbers are fuzzy at best in terms of the estimates of what performance improvements we will actually see.
But when you think about it, at least at kind of orders of magnitude, there are studies that have come out. There’s one recently from Morgan Stanley that talked about their expectation around a 40 to 50% improvement in organizational performance, defined as revenue and margin improvements from the use of AI tools.
So that’s a really big number. It’s a very big number.
When you do analysis of earnings calls from CEOs, when they’re pressed on what their expectation is, those numbers range between 20 and 30%. That’s still a really big number, and it’s over the next couple of years, so there’s a timeframe attached.
What’s fascinating is that when you survey line executives, senior executives—so think like vice president, people three layers down from the CEO—and you look at some of the actual results that have been achieved so far, it’s in that single digits range.
So that’s the challenge that’s out there: the studies say 50, CEOs say 30, and the actualized result is, call it, five. And those numbers, plus or minus a little bit, are in that range.
And so there’s enormous pressure on executives in businesses to actually drive adoption of these tools. Not necessarily to get to 50—I think that’s probably unrealistic, at least in the next kind of planning horizon—but to get from five to 10, from five to 15.
Because there are billions of dollars of investments that companies are making in these tools. There are all sorts of startups that they’re buying. There are all sorts of investments that they’re making.
And if those executives don’t start to show returns, the CFO is going to come knocking on the door and say, “Hey, you wrote a check for $50 million and the business seems kind of the same. What’s up with that?” There’s enormous pressure on them to make that happen.
So if you’re, as an executive, not thinking hard about how you’re actually going to drive the adoption of these tools, you’re certainly not going to get the cost savings that are real potential opportunities from using these tools. And you will absolutely not get the breakthrough performance that your CEO and the investment community are expecting from use of these tools.
So there’s an absolute imperative that executives figure out the adoption problem, because right now the technology, I think, is more than good enough to achieve some of these savings. But at the end of the day, it’s really an adoption, use, application problem.
It’s not a “Can we afford to buy it or not” problem. It’s “We can afford to buy it. It’s available. We have to use it as executives to actually achieve some sort of cost savings or revenue improvements.” And that, I think, is the size of the problem that executives are struggling with right now.
Ross: Yeah. Well, the old adage says you can lead a horse to water, but you can’t make it drink. And in an organizational context, I think the drive to use AI needs to be intrinsic, as in people need to want to do it. They can see that it’s part of the job. They want to learn. It gives them more possibilities and so on.
And there’s a massive divergence where I think there are some organizations where it truly is now part of the culture. You try things. You tell people you’re using it. You share prompts and so on. That’s probably the minority, but they absolutely exist.
In many organizations, it’s like, “I hate it. I’m not going to tell anybody I’m using it if I am using it.” And top-down, telling people to use it is not going to get there.
Brian: It’s funny, just as a quick side note about not telling people they’re using it. There’s a study that just came out—I think it was from OpenAI, the ChatGPT folks. One of the things they were looking at was whether teachers are using generative AI tools to grade papers.
And so the numbers were small, like seven or eight percent or something like that, less than 10%. But it just struck me as really funny that teachers have spent all this time saying, “Don’t use generative AI tools to write your papers,” but some are now starting to use generative AI tools to grade those papers.
So it’s just a little funny, the whole don’t use it, use it, not use it, don’t tell people you’re using it. I think those norms and the use cases will evolve in all sorts of places.
Ross: So you have a bit of a high-level framework, I believe, for how we should think through driving adoption.
Brian: Yes. There are three major areas that I think are really important.
One, you have to create the right incentive structure. And that, to your point, is both intrinsic incentives. You have to create reasons for people to use it. In a lot of cases, there’s some fear over using it—“I don’t know how,” “Am I going to eliminate my own job?” Those sorts of things. So you have to create an incentive structure to use it.
Two, you have to think about how the organization is designed. From a risk-aversion perspective, a checks-and-balances perspective, who gets to say no to stuff, a willingness-to-experiment perspective—organizations in many cases are designed to minimize risk.
And in order to really drive AI adoption, there is risk that’s involved. It’s a different way of doing things that will disrupt the old workflows that exist in the organization. So you have to really think hard about what you do from an org design perspective to make that happen.
And then three, you could have the right incentives in place, you could have the right structure in place, but leaders need to actually create the environment where adoption occurs. One of the great ironies here: a Gartner study that came out a little while ago showed that, on average, only about 15% of leaders actually feel comfortable using generative AI tools. And that’s the ones who say they feel comfortable, which might even be a little bit of an overestimate.
So how do you work with leaders to actually create an environment where leaders encourage the adoption and are supportive of the adoption, beyond “You should go use some AI tools”?
Those are the three categories that companies and executives need to be thinking about in order to get from what are now relatively low levels of adoption at a lot of organizations to even medium levels of adoption—to close that gap between the 50% expectation and the 5% reality.
Ross: So let’s go through those one by one. I’m particularly focused on the organizational design piece myself, and for leaders, I think we can get to some solutions there. But let’s start with the incentives. I’d love to hear any specifics around what you have seen that works, that doesn’t work, or any suggestions or ideas. How do you design incentives that give people the drive to say, “Yes, I want to use it”?
Brian: One of the things that’s really fascinating to me about giving people the drive to use it is that people often don’t know where, when, and how to use it.
So from an incentive structure, what a lot of companies do—what the average company will do—is say, “Well, we’re going to give you a goal to experiment with using generative AI tools, and you’ll just have a goal to try to do something.” But that comes without specificity around where, what, or when.
There’s one organization I’m working with, a manufacturing company, and what they’re doing right now is, rather than saying broadly, “You should be using these tools,” they actually go through a really specific process. They start by asking: what are the business problems that are there? What are the customer pain points in particular?
That’s where they start. They say, “What are the biggest friction points in our organization between one employee and another employee, or the friction points between the customer and the organization?”
So they first design and understand what those pain points are.
The second thing they actually do is not give goals for people to experiment more broadly. They give a goal for an output change that needs to occur. That output change could be faster time to customers, response time between employees, decrease in paperwork, or decrease in emails—some sort of tangible output that is measured within that.
And what’s interesting is they don’t measure the inputs or how hard it is to change that output. And that’s really important, because early on with incentives, we too often think about what is the ROI that we’re getting from this particular change. Right now, we don’t know how easy or hard it’s going to be to make these changes.
But what we know with certainty is if we don’t make a change, there’s no return on that investment. Small investment, big investment—if there’s no return, it’s zero. So first they’re identifying the places where they can get the return, and then later they’ll figure out what is the right way to optimize it.
So from an incentive structure, what they’re incentivizing—and they’re attaching cash, real money, to real, hard financial outcomes—is: one, have you identified the most important pain points? Two, have you conducted experiments that have improved the outcome, even if it is more expensive to do today?
That problem can be solved later. The more important problem is to focus on the places where there’s actually a return, and reward the people who can impact the return, not just the people who have hit an ROI measure.
And that is a fundamentally different approach than a finance perspective, because the finance question is, “Well, what’s the ROI?” Wrong question to ask right now. The right question is, “Where is the return?” and set people to get a return, not a return on an investment.
Ross: That sounds very, very promising. So I want to just get specific here. In terms of surfacing those pain points, is that done in a workshop format? Do they get groups of people across the frontline to workshop and create lists of these pain points, which are then listed, and then disseminated, and say, “Okay, now you can go out and choose a pain point where you can come up with some ideas on how to improve that”?
Brian: Yeah. The way this particular company does it, it’s part of their high-potential program. Like a lot of companies, they’re always trying to figure out where those high potentials can actually have a really big impact across the organization and start to develop an enterprise mindset.
So they’ve run a series of workshops with their high potentials to identify what those pain points are.
Now, the inputs to those workshops include surveys from employees, surveys from customers, operations people who come through and chart out what takes time from one spot to another spot—a variety of inputs. But you want to have a quantitative measure associated with those inputs, because at the end of the day, you have to show that that pain point is less of a pain point, that speed is a little bit faster. So you need to have some way to get to a quantitative measure of it.
Now, what they did is, once they workshopped that and got to a list, their original list was about 40 different spots. What a lot of companies are doing is saying, “Well, here are the pain points, go work on these 40 different things.” And what invariably happens is you get a little bit of work across all of them, but it peters out because there’s not enough momentum and energy behind them.
Once they got to those 40, they actually narrowed it down through a voting process amongst their high potentials to about five that are there. And those are the five that they shared with the broader organization.
And then each of those groups of high potentials, about four or five per team, actually lead tiger teams across the company to focus on those pain points and drive resolution around them.
So I don’t believe that the approach of “plant 1000 flowers and something good will happen” plays out. Every once in a while, sure, but it rarely plays out because these significant changes require significant effort. And as soon as you plant 1000 flowers, you can’t put enough effort against any of them to really work through the difficult, hard parts that are associated with it.
So pick the five spots that are the real pain points for customers, employees, or in your process. Then incent people to get a return on them—not a return on investment on them, but a return on them. And then you can start to reward people for just driving a return around the things that actually will help the organization get better.
Ross: Yeah, it sounds really solid. And I guess to the point about the broader approach, Johnson & Johnson literally called their AI program “Let 1000 Flowers Bloom.” And then they consolidated later to 100. But that’s Johnson & Johnson—not everybody’s a J&J. Depending on size and capability, 1000 experiments might not be the right way to start.
Brian: They did rationalize down, yeah. Once they started to get some ideas, they rationalized down to a smaller list.
Ross: I do think they made the comment themselves that they needed to do the broader experimentation first. They couldn’t get to the 100 high-value initiatives without having done some experimentation—that is the learning process itself. And it gets people involved.
So I’d love to move on to the organizational design piece. That’s a special favorite topic of mine. So first of all, big picture, what’s the process? Okay, we have an organizational design. AI is going to change it. We’re moving to a humans-plus-AI workforce and workflows. So what’s the process of redesigning that organization? And what are any examples of that?
Brian: One of the first things to realize is that AI can be very threatening to significant, well-established parts of the organization. So here are a couple of things that we know with a fair degree of certainty.
AI will create more cost-effective processes across organizations, which will in some cases decrease headcount, for sure. And there are other companies—your competitors—coming up with new ideas that will lower the cost of providing the same services you provide.
However, the way that organizations are designed, in many ways, is to protect the parts of the business that are already successful, driving revenue, driving margin. And those parts of the business tend to be so big that they dominate small new parts of the business.
You find yourself in these situations where it’s like, yes, AI is the future, but today the business is big business unit A. Now, five years from now, that’s not going to be the case. But the power sits in big business unit A, the resources get sucked up there, and the innovation gets shut down in other places because it’s a threat to the big business units.
And I get that, because you still have to hit a quarterly number. You can’t just put the business on pause for a couple of years while you figure out the new, innovative way of doing things.
So the challenge that organizations have, from an org design perspective, I believe, or one of them at least, is: how do you continue to get revenue and margin from the businesses that are the cash cows of the business, but not have them squash the future part of the business, which is the AI components?
If you slowly layer in new AI technologies, you only slowly get improvements. One of the interesting things in a study that came out a little while ago was the speed at which companies can operate: large companies, on average, take nine months to go from idea to implementation; smaller companies take three months. My guess is that in even smaller companies, it probably takes 30 days to go from idea to implementation of an AI pilot.
Ross: This was the MIT Nanda study.
Brian: Correct, yep. And a lot of people had a big reaction to the finding that 95% of companies haven’t seen real results from what they’re doing. There are lots of questions within that.
But the speed one, the clock speed one, is really interesting to me. Because if you’re not moving quickly to get these ideas implemented, your smaller, more agile competitors are. If you’re a big, large company, and it takes you nine months to go from idea to implementation, and your small, more nimble competitor is doing it in a month or two, that gives them seven, eight months of lead time to capture market share from you, because you’re big and slow.
So from an org design perspective, here’s what I believe is the most effective thing—and we’re seeing companies do this. When General Motors launched its electric vehicles division, that was an example of how this can play out at scale.
What companies are doing is creating small, separate business units whose job is to attack the core business—to create the products and services designed to take on their own business unit. You almost have to do it that way. You almost have to create an adversarial organization design, because if you’re not doing it to yourself, someone else is doing it to you.
Ross: That’s more a business model structure—the classic innovation example of a separate unit to cannibalize yourself. But that doesn’t change the design of the existing organization. It creates a new unit, which is small and which can’t necessarily scale as fast. That unit may have a very innovative structure of its own, but the design of the existing organization stays the same.
Brian: Yeah. I think the way the design of existing organizations is going to change the most is on two dimensions, and it comes down a lot to the middle management layer of the organization.
There are two major reasons why I think this is going to happen.
One: organizations will still have to do tasks, and some of those tasks will be done by humans, some of those tasks will be done by AI. But at the end of the day, tasks will have to get done. There are activities that will have to get done at the bottom layer of the organization, or the front layer of the organization, depending on how you think about it.
But those employees that are doing those tasks will need less managerial support. Right now, when you’ve got a question about how to do things, more often than not, you go to your manager to say, “How do I do this particular thing?” The reality is, AI tools, in some cases, are already better than your manager at providing that information—on how to do it, advice on what to do, how to engage a customer, whatever it might be. So employees will go to their managers less often.
So one, the manager roles will change. There will be fewer of them, and they’re going to be focusing more on relationship building, more on social-work-type behaviors—how to get people to work together—not helping people do their tasks. So I think one major change to what organizations look like is fewer managers spread across more people.
The second thing that I think will happen: when you look at what a lot of middle management does, it is aggregation of information and then sharing information upwards. AI tools will manage that aggregation and share it up faster than middle managers will.
So what will happen, I believe, is that organizations will also get flatter overall.
There’s been a lot of focus and attention on this question of entry-level jobs and AI decreasing the number of entry-level jobs that organizations need. I think that’s true, and we’re already seeing it in a lot of different cases.
But from an organizational design perspective, I think organizations will get flatter and broader in terms of how they work and operate because of these two factors: one, employees not needing their managers as much, so you don’t need as many managers; and two, that critical role of aggregation of information and then dissemination of information becomes much less important in an AI-based world.
So if you had frontline employees reporting to managers, managers reporting to managers, managers reporting to VPs, VPs reporting to the CEO—at least one of those layers in the middle can go away.
Ross: We’ve seen similar trends predicted for quite a while, and the logic is there. So can you ground us with any examples or instances?
Brian: We’re seeing the entry-level roles eliminated in all sorts of different places right now. We don’t have organizations that have actually gone through a significant reduction in staff in that middle, but that is the next big phase.
So, for example, when you look at a manager, it’s the next logical step. And if you just work through it, you say, well, what are the things that managers do? They provide…
Ross: Are there any examples of this?
Brian: Where they’ve started to eliminate those roles already? Not that I’ve seen. There are organizations that are talking about doing it, and they’re trying to figure out what that looks like, because that is a fundamental change that will be AI-driven.
There are lots of cases where companies are using cost-efficiency drives to eliminate layers of middle management, but they’re only now starting to realize that AI is an opportunity to make that organization design change deliberately. This, I think, is what will happen—it’s not what organizations are doing right now, but they’re actively debating how to do it.
Ross: Yeah. I mean, that’s one of the things where the raw logic you’ve laid out seems plausible. But part of it is the realities of it—some people will be very happy to have less contact with their manager.
A lot of it, as you say, is an informational role. But there are other coaching, emotional, or engagement roles where, depending on the culture and the situation, the need for those things may surface even as the contact becomes less.
We don’t know. We don’t know until we can point to examples, though there are some that I think support your thesis. One is an old one but relevant: Jensen Huang has, I think, something like 40 direct reports. He’s been doing that for a long time, and that’s a particular relationship style.
But I do recall seeing something to the effect that Intel is taking out a whole layer of its management. That’s not a similar situation—same industry, but extremely different circumstances—yet it points to what you’re describing.
Brian: I can give you an example of how the managerial role is already starting to change. There are several startups, early-stage companies, whose product offering has been managerial training. You come, you do e-learning modules, you do other sorts of training for managers to improve their ability to provide feedback, and so on.
The first step they’re engaging in is creating a generative AI tool, just a chatbot, that a manager can go to and say, “Hey, I’m struggling with this employee. What do I do around this thing versus that thing?”
So where we’re seeing the first frontier is managers not talking to their HR business partner to get advice on how to handle employees, but managers starting to talk to a chatbot that’s based upon all the learning modules that already existed. They’re putting that on top to decrease the number of HR business partners they need.
But it raises a second question: if an employee is struggling with a performance issue, why should they have to go to their manager, and then have their manager go to a tool?
So the next evolution of these tools is the employee talking directly to a chatbot that is built on top of all the guides, all of the training material, all of the information that was created to train that employee the first time. We’re starting to see companies in the VC space build those sorts of tools that employees would then use.
That’s one part of it. Here’s another example of where we’re seeing the managerial role get eliminated. One of the historically most important parts of the managerial role is identifying who the highest performers are.
There are a couple of startup companies creating new tools to layer on top of the existing flow of information across the organization, to start identifying—based on conversations and interactions among employees, whether video, email, Slack, or whatever channels—who is actually making the bigger contributions.
And when they’ve gone back and looked at it, one of the things they found is that about two-thirds of the employees who get the highest performance review scores are actually not making the highest contributions to the organization. So it’s giving a completely different way to assess and manage performance.
Ross: Just to round out, because we want to get to the third point—and just generally reflecting on what you’re saying: AI feeds on data, and we have far more data. So there’s a whole layer of issues around what data we can gather on employee activities, behaviors, and so on that is useful and flows into these tools.
But despite those constraints, there is data that can provide multiple useful perspectives on performance, amongst other things, and feedback that can build on it. I want to round out with your third point around leaders—getting leaders to use the tools to the point where they are, A, comfortable; B, competent; and C, effective leaders in a world that is more and more AI-centric.
Brian: Yeah. Here’s part of the reality. If you look at a typical company, most leaders are well into their 40s or later. They have grown up with a set of tools and systems to run their business, and those are the tools they know. This shift is like the move to the internet age: they did not grow up in this environment.
And as I mentioned earlier, most of them do not feel comfortable in this environment, and their advice is just “go and experiment with different things.” This is the exact same advice you’d have heard if you rolled the clock back to the start of the internet in the workplace, or the start of bring-your-own-device to work: experiment with some stuff and get comfortable with it.
And in each of those previous two situations—when should we give people access to the internet at work, should we allow people to bring their own devices—most companies wasted a year, two, or three because their leaders had no idea what to do. And the net result was that most people used these tools to plan their vacations or to do slightly better Google searches.
This is what’s going to happen now if we don’t change the behavior and approaches of our leaders. So in order to actually get the organization to work, in order to get the right incentives in place, you need to have leaders that are willing to push much harder on the AI front and develop their own skills and capability and knowledge around that. There’s a lot of…
Ross: Any specifics again, just any overall practices or how to actually make this happen?
Brian: Yeah. There’s a series of maturity levels that we’re seeing out there in organizations.
There’s a ton of online learning that leaders can take to get them familiar with what AI is capable of. So that’s kind of maturity level one: just build that sort of awareness, create the right content material that they can access to learn how to do things.
Maturity level two is changing who is advising them. Most leaders go through a process where the people advising them are more experienced than them, or are their peers. So what we’re seeing organizations do is create shadow cabinets of younger employees who have actually grown up in the AI age, and the leaders are required to spend time with them.
So each leader is given a shadow cabinet of four or five employees who are really familiar with AI, and that leader then has to report back to those junior employees about what they’re actually doing from an AI perspective. It’s a forcing mechanism: something has to happen, in front of people who are more knowledgeable about what’s going on.
So that’s kind of a second level of maturity that we’re starting to see play out.
For the leaders that are truly making progress here, what we’re actually seeing is that they’re creating environments where failure is celebrated. When you think back to the early stages of IT innovation, it was fraught with failure. More things don’t work than do work.
So they are creating environments and situations where they celebrate failure, to reduce the risk employees feel. They’re creating environments where “I failed, but we learned” is really valuable.
Then there’s the fourth idea, and this is what IDEO is doing. IDEO is a design consultancy, and they do something really interesting when it comes to leaders. What they’ve come to realize is that leaders, by definition, are people who have been incredibly successful throughout their careers. Leaders also, by definition, hate to ask for help, because many of them view it as a weakness. And leaders, by definition, like to celebrate the great stuff they’ve done.
So what they actually do—about every six months or so—is have every leader film a short video: here are the cool things I did using AI over the last six months, and here are the next things I’m going to do, that I’m working on, where I’m thinking about using AI for the next six months. Every leader has to do that.
And what that actually achieves—when you have to record that video and then show that to everybody—is that if you haven’t done anything in the last six months, you kind of look like a loser leader. So it puts pressure on that leader to actually have done something that’s interesting, that they have to put in front of the broader organization.
And then with the “what I’m going to work on next,” they’re not actually asking for help—so it really works with the leader psyche—but they’re saying, “Here are the next things I’m going to do that are awesome.” And that gives other leaders a chance to say, “Hey, I’m working on something similar,” or, “Oh, I figured that out last time.”
So it takes away a lot of the fear that’s associated with leaders, where they have to fake that they know what they’re doing or lie about what’s working. But it forces them to do something, because they have to tell everyone else what they did, and it creates the opportunity for them to get help without actually asking for help.
That is a really cool way that organizations are getting leaders to embrace AI, because none of them want to stand up in front of the company and be like, “Yeah, I haven’t really been doing anything on this whole AI issue for the last six months.”
Ross: That’s great. That’s a really nice example—nice and tangible. It may not suit every company’s culture, but I think it can definitely work.
Brian: Yeah, the takeaway from it is put pressure on leaders to show publicly that they’re doing something. They care about their reputation, and whatever way makes the most sense for you as an organization, put the pressure on the leader to show that they’re doing something.
Ross: Yeah, absolutely. So that’s a nice round out. Thanks so much for your time and your insight, Brian. It’s been great to get the perspectives on building AI adoption.
Brian: Great. Thanks for having me, Ross. There’s an analogy from car racing that I like to use for this time period: people don’t pass each other on the straightaways, they pass each other in the turns. And this is a turn, and it creates the moment for organizations to pass each other.
And one other racing analogy I think is really important here: when you’re racing, you accelerate going into a turn; you don’t decelerate. Too many companies are decelerating. They have to accelerate into that turn to pass their competitors, and whoever does that well will be the companies that win over the next three, five, seven years, until the next big thing happens.
Ross: And it’s going to be fun to watch it.
Brian: For sure, for sure.