Best of LinkedIn: Digital Products & Services CW 50 - 01

Show notes

We curate the most relevant posts about Digital Products & Services on LinkedIn and regularly share key takeaways.

This edition provides a comprehensive overview of evolving product management strategies and the integration of artificial intelligence into corporate operations for 2025-2026. Experts emphasize a shift from traditional project-based delivery to the Product Operating Model, which prioritizes outcomes over outputs and empowers cross-functional teams. A significant focus is placed on AI-native development, where agents assist with coding, research, and documentation to accelerate delivery while maintaining high user experience standards. The posts also highlight the critical importance of product discovery, strategic alignment with business goals, and the necessity of human leadership to bridge the gap between technical models and real-world utility. Furthermore, the collection offers practical guidance on prioritization frameworks, career development for product managers, and the foundational role of Product Lifecycle Management (PLM) in digital transformation. Ultimately, these insights suggest that while AI enhances velocity, sustainable success still depends on customer-centricity, rigorous experimentation, and clear organizational communication.

This podcast was created via Google NotebookLM.

Show transcript

00:00:00: This episode is provided by Thomas Allgaier and Frennus, based on the most relevant LinkedIn posts about digital products and services in calendar weeks 50 to 01.

00:00:09: Frennus is a B2B market research company helping enterprises gain the market, customer, and competitive insights needed to drive customer-centric and cost-efficient product development.

00:00:21: Welcome back to the deep dive.

00:00:23: Today we're really shifting gears.

00:00:25: Over the last few weeks, it feels like the whole conversation in the product community online has moved from, you know, the big conceptual ideas to pure execution.

00:00:33: It really has.

00:00:34: The focus, especially around the new year, was intensely practical.

00:00:37: It wasn't about if we should use AI anymore.

00:00:40: No, not at all.

00:00:40: It was all about how.

00:00:42: How do we actually make it work?

00:00:43: How do we redesign our operating models so that this tech delivers real business value, not just, you know, another cool feature?

00:00:50: That's right.

00:00:51: And that's our mission today.

00:00:53: We've distilled hundreds of posts, all centered on building better user experiences, fixing product operations, and finding real growth strategies.

00:01:00: We're unpacking the most actionable trends from the LinkedIn community for you.

00:01:03: Okay, let's dive in.

00:01:04: First theme, and it's the big one.

00:01:06: AI and product management, and what we're calling the PM evolution.

00:01:10: Yeah, and the conversation here has matured so quickly.

00:01:13: We're way past the simple GPT-4 versus Claude debates.

00:01:17: Thank goodness.

00:01:18: Right.

00:01:19: Now it's about designing robust experiences and maybe more importantly, how to keep these AI models performing once they're actually live in production.

00:01:28: That brings up the skill set shift, which is happening at just breakneck speed.

00:01:31: For sure.

00:01:32: Mahesh Yadav put it really bluntly.

00:01:33: He said, generic product frameworks are not going to save you in twenty twenty five.

00:01:38: So all those templates people downloaded are useless?

00:01:40: Pretty much.

00:01:42: He says hiring managers now want deep niche insights.

00:01:45: They want to see real side projects.

00:01:47: They want strategic thinking, not just someone who can recite answers in an interview.

00:01:51: That's a huge shift.

00:01:52: It's about learning by doing, not just learning about it.

00:01:55: So what about coding?

00:01:56: That question comes up constantly.

00:01:58: Well, Moali had a great take on this.

00:01:59: He argued that PMs don't need to be coders, not in the traditional sense.

00:02:03: There's always a but.

00:02:04: But they absolutely must build what he calls model intuition.

00:02:08: Model intuition.

00:02:10: I like that.

00:02:10: So what does that actually mean?

00:02:12: It's understanding why the output from an LLM changes or where a hallucination is coming from.

00:02:18: It's getting a feel for how these models behave under real-world pressure in a production environment.

00:02:24: So you have to understand its limits and its failure modes, not just the happy path.

00:02:28: Exactly.

00:02:29: Without that, you're basically flying blind.

00:02:32: And the whole point of that intuition, I guess, is to make the technology itself just... Disappear.

00:02:37: Yes.

00:02:38: Badminton Montree said the most successful AI products feel like magic.

00:02:44: Not like AI products.

00:02:45: Think about something like Grammarly.

00:02:46: You don't think about the AI.

00:02:47: You're just like, wow, my writing is better.

00:02:49: Right.

00:02:50: The user experience comes first.

00:02:51: If the user has to stop and wonder if they're talking to the AI or the app, you've already lost.

00:02:56: And that's where the magic breaks, right?

00:02:57: The reliability challenge.

00:02:59: This is the big one.

00:03:00: And J. O. T. Nukola shared a cautionary tale that was, I mean, it was just brutal.

00:03:04: It perfectly highlights this gap between technical accuracy and actual value.

00:03:09: Oh, this was the story about the support ticket model, right?

00:03:12: That's the one.

00:03:13: So her team deployed this model to summarize support tickets.

00:03:16: In testing, it had a ninety-three percent accuracy score.

00:03:20: Which sounds fantastic on paper.

00:03:22: It does.

00:03:23: But in practice, it was a total failure.

00:03:26: The model would take a really urgent message, something like, I am considering legal action.

00:03:31: Okay, high alert.

00:03:33: And it would summarize it into this bland, neutral phrase like, customer expressed dissatisfaction.

00:03:39: Oh, wow.

00:03:40: So it just completely erases the risk and the urgency, which meant agents had to read the full ticket anyway.

00:03:45: Precisely.

00:03:46: It created a trust deficit and negated the entire productivity gain.

00:03:50: It proves that technical accuracy means nothing if you don't have business reliability.
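
To make that failure mode concrete, here is a minimal sketch of a business-reliability check, with invented risk phrases and an example ticket: it verifies that the risk signals in a ticket survive into its summary, which a plain accuracy score never checks.

    # Hypothetical sketch: verify a summary preserves risk signals.
    # Risk phrases and the example ticket are invented for illustration.
    RISK_PHRASES = ["legal action", "cancel my contract", "refund", "regulator"]

    def risk_signals(text: str) -> set:
        """Return the risk phrases present in a piece of text."""
        lowered = text.lower()
        return {phrase for phrase in RISK_PHRASES if phrase in lowered}

    def preserves_risk(ticket: str, summary: str) -> bool:
        """Pass only if every risk signal in the ticket survives the summary."""
        return risk_signals(ticket) <= risk_signals(summary)

    ticket = "I am considering legal action if this is not resolved this week."
    summary = "Customer expressed dissatisfaction."
    print(preserves_risk(ticket, summary))  # False: the urgency was erased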

00:03:54: And it gets worse because these models don't just stay static.

00:03:58: Amy Mitchell warned about model drift.

00:04:00: Yeah, that's when the real world data starts to diverge from the training data and the performance just degrades over time.

00:04:05: So how do you even spot that?

00:04:07: She used this great term, quiet abandonment.

00:04:10: Which is so perfect.

00:04:11: It's not the user rage quitting.

00:04:13: It's when they subtly stop trusting the AI.

00:04:16: They start double checking the summary or manually editing the draft the AI wrote for them.

00:04:21: They're finding a non-AI workaround inside your app, and that's the moment your feature has failed.

00:04:27: So the solution is building those human-in-the-loop feedback systems.

00:04:31: And as Hoda Moore said about Moonshot AI, you have to frame the product around measurable outcomes, like increasing revenue, not just cool features.

00:04:39: That's how you build trust through delivered value.
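
One hedged way to operationalize quiet abandonment as a metric: track how heavily users rewrite what the AI drafted, since a rising edit ratio is an early warning long before usage numbers drop. A minimal sketch, with invented event data and an invented alert threshold:

    # Hypothetical sketch: a rising edit ratio on AI drafts as a
    # quiet-abandonment signal. The event data below is invented.
    from difflib import SequenceMatcher

    def edit_ratio(draft: str, final: str) -> float:
        """0.0 means accepted as-is, 1.0 means completely rewritten."""
        return 1.0 - SequenceMatcher(None, draft, final).ratio()

    events = [
        ("Thanks for reaching out, we will look into it.",
         "Thanks for reaching out, we will look into it."),   # accepted
        ("Your issue has been resolved.",
         "We escalated your case to our legal team today."),  # rewritten
    ]

    ratios = [edit_ratio(draft, final) for draft, final in events]
    average = sum(ratios) / len(ratios)
    # The 0.4 threshold is an invented starting point, not a standard.
    print(f"avg edit ratio: {average:.2f}",
          "-> investigate" if average > 0.4 else "-> healthy")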

00:04:42: With all this complexity, what is it doing to the product manager role itself?

00:04:47: Well, Michael Heesey summed it up perfectly.

00:04:49: He said the role is bifurcating.

00:04:51: It's splitting in two.

00:04:52: Okay.

00:04:53: Into what?

00:04:53: On one side, you have AI systems management.

00:04:56: That's the person worried about cost, risk, compliance.

00:04:59: On the other, you have AI product development, focusing on the user experience, the strategy, the outcomes.

00:05:04: And those two things require totally different skills, different rhythms, even different parts of the org chart.

00:05:09: A huge challenge.

00:05:11: A PM today really has to figure out which of those two tracks they're on, because being an expert at both is becoming nearly impossible.

00:05:19: That makes a lot of sense.

00:05:20: So moving from the individual role to the whole organization, let's talk about theme two, product operating models and organizational DNA.

00:05:28: Yeah, the conversation here is really pushing companies to get past just changing job titles from project manager to product manager and calling it a day.

00:05:37: It's about moving from project thinking to product portfolios that are oriented around outcomes.

00:05:42: Exactly.

00:05:43: Roberto Lopez highlighted the success Lloyds Banking Group had just by making that shift, moving from projects that end to products that evolve.

00:05:52: It creates clear ownership and a continuous flow of value.

00:05:55: But that's where so many companies get stuck, right?

00:05:57: Because there's a huge difference between the model and the actual system.

00:06:00: That's the critical distinction that Itamar Gilad and John Cutler pointed out.

00:06:04: They separate the product operating model, the POM, from the product operating system, the POS.

00:06:09: So the model is the theory, the grand plan.

00:06:11: Think of it like the beautiful architectural drawing of a city.

00:06:14: The POS, the system, is the actual zoning laws, the building codes, the traffic lights that make the city function.

00:06:22: And companies adopt the drawing, but they never update the laws.

00:06:25: You got it.

00:06:26: So you have a PM who's told they own an outcome, but their budget and their performance review are still tied to shipping features on time and on budget.

00:06:36: The system is still measuring projects, not product value.

00:06:39: Which is why so many of these transformations just stall out.

00:06:43: Sam Quayle and Dave Baines said the biggest blocker is that leaders don't even have a shared, honest view of how things currently work.

00:06:50: Right, you can't fix what you don't understand.

00:06:53: And Sophie Johnson really reinforced that, saying you need health checks to get pragmatic metrics and a baseline.

00:06:58: Otherwise, you're just throwing money at symptoms.

00:07:00: And the real problems are almost always people problems, not tech problems.

00:07:04: Always.

00:07:06: Teresa Torres and Petra Wille were very clear.

00:07:09: Continuous discovery will always fail if the leaders don't change their own habits first.

00:07:14: If an executive can just bypass the process and demand a feature, the teams will eventually stop bothering to validate anything.

00:07:22: That's where HR or the people function comes in.

00:07:25: I thought this point from Emma as a party was fascinating.

00:07:28: It was.

00:07:29: She said you can't run a modern product model with a legacy HR function.

00:07:33: HR needs to align to your value streams, not to old-school things like geography.

00:07:37: Otherwise, you get what she called speed drag.

00:07:40: You've built a race car, but the engine is from a tractor.

00:07:42: A perfect analogy.

00:07:44: And if PMs want to actually drive that race car, they need to earn their seat at the strategic table.

00:07:49: Which means they need to speak the language of business.

00:07:51: That was the blunt truth from Shardul Mehta.

00:07:53: He said if PMs can't talk about revenue, margin, cost, and risk, they will stay stuck in delivery work.

00:07:59: They become, in his words, the CEO of nothing.

00:08:04: That's harsh, but probably true for a lot of people.

00:08:06: It is.

00:08:07: And Paul Hurin gave PMs the cheat sheet.

00:08:10: He laid out the four questions that executives will always ask.

00:08:13: Okay, what are they?

00:08:14: First, what is it?

00:08:16: Meaning, what's the outcome?

00:08:17: Second, why does it matter?

00:08:19: Which business lever does it pull?

00:08:21: Revenue, margin, or risk.

00:08:23: Exactly.

00:08:24: And third, when do I get it?

00:08:27: Which means the timeline for learning, not for delivery.

00:08:31: And finally, how much does it cost me?

00:08:34: And that includes everything, especially opportunity cost.

00:08:37: If you can't answer those four questions, you're not having a strategic conversation.

00:08:40: You're just defending a feature.

00:08:42: And you'll stay the CEO of nothing.

00:08:44: That's a perfect transition to our third theme: accelerating discovery and sustainable growth.

00:08:49: The consensus across the board was that most product failures trace back to just really weak discovery.

00:08:55: Or no discovery at all.

00:08:57: Just a poor understanding of the problem before a single line of code is written.

00:09:00: David Perera actually called this the great product delusion era.

00:09:04: The time when we're just slapping AI onto everything without really knowing why.

00:09:08: And that failure to validate is making people question our most basic tools.

00:09:12: Tim Herbig had a great post arguing that the classic impact-effort matrix can actually sabotage good discovery.

00:09:18: How so?

00:09:19: I feel like everyone uses that.

00:09:21: Well, his point is that early on, impact is a total guess.

00:09:26: And scoring effort encourages teams to just pick the low-hanging fruit.

00:09:31: The easy wins.

00:09:32: Instead of testing the really critical, risky assumptions that could make or break the whole product.

00:09:38: Exactly.

00:09:39: It biases you toward building easy things instead of the right things.
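
In the spirit of Herbig's critique, a hedged alternative to impact-effort scoring is to rank assumptions by how damaging they would be if wrong and how little evidence backs them, so the riskiest unknowns get tested first. A minimal sketch, with invented assumptions and scores:

    # Hypothetical sketch: rank assumptions by risk (consequence if wrong
    # times lack of evidence) instead of impact/effort. Data is invented.
    assumptions = [
        # (assumption, consequence_if_wrong 1-5, evidence_strength 1-5)
        ("Users will trust AI summaries without spot-checking", 5, 1),
        ("Users want a dark mode", 2, 3),
        ("Enterprise buyers need SSO before rollout", 4, 2),
    ]

    def risk(consequence: int, evidence: int) -> int:
        # High consequence plus weak evidence means: test this first.
        return consequence * (6 - evidence)

    ranked = sorted(assumptions, key=lambda a: risk(a[1], a[2]), reverse=True)
    for name, consequence, evidence in ranked:
        print(f"{risk(consequence, evidence):>2}  {name}")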

00:09:43: So we need to rethink what experimentation even is.

00:09:46: John McDonald said it can't just be about A/B testing a button color.

00:09:50: Right.

00:09:50: It has to be continuous exploration across the whole life cycle.

00:09:54: He pointed out that the best product-led growth, or PLG, companies are the ones rewriting their playbook constantly, based on what they learn from users.

00:10:01: But is PLG the answer for everyone?

00:10:03: That was a point of contention.

00:10:05: It was.

00:10:05: On one hand, Yanua Lua Ediemi championed PLG, where growth comes from the product just solving a problem really, really simply.

00:10:12: Makes sense.

00:10:13: But then Seamus Moore made a great counterpoint for bigger companies.

00:10:16: He argued that for a complex, multi-product brand, PLG isn't enough anymore; you need a broader market-led strategy.

00:10:23: To solve the market's bigger problem and tell one unified story, not just optimize one tool.

00:10:30: Precisely.

00:10:32: It's about managing that complexity.

00:10:34: And that means you have to be really deliberate in how you balance your roadmap.

00:10:37: Yeah, that analysis from Lubomar Piftorak was eye-opening.

00:10:41: It really was.

00:10:42: He showed that just four big high leverage bets delivered almost as much retention uplift as thirteen small quick wins combined.

00:10:51: That's a stunning piece of data.

00:10:53: It tells you that you have to consciously protect space for those bigger, riskier initiatives.

00:10:58: You have to.

00:10:58: And Stephanie Liu gave a great framework for deciding when to take those risks.

00:11:03: It all depends on your strategic constraint.

00:11:06: If the cost of being wrong is low, like at an early-stage startup, then speed is your form of research.

00:11:11: You just build and learn.

00:11:12: But if the cost of failure is high, like in a banking app or a medical device,

00:11:16: then deep discovery isn't a luxury.

00:11:18: It's an insurance policy.

00:11:20: It's essential protection against making a very expensive mistake.

00:11:23: That's such a clear way to frame it.

00:11:25: OK, let's pull this all together with our final theme.

00:11:28: Integrated platforms, tools, and agentic AI.

00:11:31: This is about actually getting the speed we need.

00:11:34: And Alex Barati proposed that the real bottleneck in software delivery isn't our tools.

00:11:39: It's the handoffs.

00:11:40: The gaps between teams.

00:11:42: PM to design, design to engineering, engineering to QA.

00:11:46: All of it.

00:11:47: He says the fix is to deploy AI agents.

00:11:50: A PM agent, an architect agent, a QA agent that pass the work between them with full context.

00:11:55: He thinks that could speed up delivery by forty to seventy percent.

00:11:59: Wow.

00:12:00: And A. Patrick Zelamea added that these agents could handle all the coordination overhead, the status updates, the meeting notes,

00:12:07: all the stuff that drains a PM's time.

00:12:09: It would free up humans to focus purely on strategy and validation.
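
As a rough illustration of the handoff idea, not any specific framework, here is a minimal sketch in which a shared context object travels through a chain of stubbed agent roles, so nothing is lost between PM, architect, and QA:

    # Hypothetical sketch of agent handoffs sharing full context.
    # The three "agents" are stubs standing in for real LLM calls.
    from dataclasses import dataclass, field

    @dataclass
    class Context:
        goal: str
        decisions: list = field(default_factory=list)

    def pm_agent(ctx: Context) -> Context:
        ctx.decisions.append("PM: success metric is activation rate")
        return ctx

    def architect_agent(ctx: Context) -> Context:
        ctx.decisions.append("Architect: reuse the existing events pipeline")
        return ctx

    def qa_agent(ctx: Context) -> Context:
        ctx.decisions.append("QA: test plan covers drift and edge cases")
        return ctx

    ctx = Context(goal="Improve onboarding completion")
    for agent in (pm_agent, architect_agent, qa_agent):
        ctx = agent(ctx)  # the full context travels with the work
    print("\n".join(ctx.decisions))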

00:12:12: And PMs are not waiting around for this.

00:12:14: They're building their own personal AI systems now.

00:12:16: Right.

00:12:17: Dr. Wells-Vanderberg talked about using tools like Claude Code to create a structured second brain for her work, feeding the AI just the right local context it needs.
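
As one hypothetical shape such a second-brain setup might take, with invented note files and a naive keyword score: select only the notes relevant to the question and feed them into the prompt.

    # Hypothetical sketch: assemble only the relevant local context for
    # an LLM prompt. Note files and scoring are invented for illustration.
    notes = {
        "q3_strategy.md": "Focus areas: retention, enterprise tier, pricing test.",
        "user_interviews.md": "Users distrust auto-summaries; they re-check them.",
        "roadmap.md": "Big bets: workflow automation, API platform.",
    }

    def relevant(question: str, top_n: int = 2) -> str:
        """Rank notes by naive word overlap with the question."""
        words = set(question.lower().split())
        scored = sorted(
            notes.items(),
            key=lambda item: -len(words & set(item[1].lower().split())),
        )
        return "\n\n".join(f"# {name}\n{body}" for name, body in scored[:top_n])

    prompt = f"Context:\n{relevant('why do users distrust summaries?')}\n\nAnswer:"
    print(prompt)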

00:12:26: But that requires judgment, right?

00:12:28: You have to know when to trust the tool.

00:12:30: Absolutely.

00:12:30: Katya Handsome's internal product wizard experiment proved that.

00:12:34: Teams need structured practice with these tools to learn when to trust the output.

00:12:39: And maybe more importantly, when not to.

00:12:41: So to make all this work at scale, you need a solid foundation.

00:12:45: You need a digital backbone.

00:12:47: And that backbone is Product Lifecycle Management, or PLM.

00:12:51: Andreas Lindenthal explained that PLM is the digital thread that connects everything.

00:12:56: Product data processes people all the way from engineering to finance.

00:13:01: It's essential for any real industry, four point oh effort.

00:13:04: But you can't just bolt it on.

00:13:05: Kevin McEachern warned that if you upgrade your ERP system without thinking about PLM, you're just creating new silos.

00:13:11: You'll have a total digital disconnect.

00:13:13: He said you have to define a common data model and get engineering involved early to make sure the data actually flows.
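
A minimal sketch of what a common data model could mean in practice, with illustrative field names: one shared part record that both PLM and ERP reference by the same identifier, so a revision change flows everywhere instead of forking.

    # Hypothetical sketch: one shared part record referenced by both
    # PLM (engineering) and ERP (finance/ops). Fields are illustrative.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Part:
        part_id: str      # the single shared identifier
        revision: str     # engineering change state, owned by PLM
        description: str

    @dataclass
    class ErpItem:
        part: Part        # references the PLM master record, never a copy
        unit_cost: float
        supplier: str

    master = Part(part_id="PN-1042", revision="C", description="Housing, aluminium")
    erp_item = ErpItem(part=master, unit_cost=12.40, supplier="ACME GmbH")

    # A revision change is visible on both sides because there is one record.
    print(erp_item.part.part_id, erp_item.part.revision, erp_item.unit_cost)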

00:13:19: And you can't just pour old processes into new systems.

00:13:22: No.

00:13:22: That's what Venkatesh Krishnan, citing Elon Musk, called paving the cow path: just lifting and shifting a broken twenty-year-old process into a shiny new PLM system.

00:13:31: So what's the right order?

00:13:32: The first-principles order.

00:13:33: First, you question the requirements.

00:13:36: Then you delete parts of the process.

00:13:38: Then you simplify.

00:13:39: Then you accelerate.

00:13:41: And only at the very end do you automate.

00:13:43: You automate the clean, simple process, not the old, broken

00:13:46: one.

00:13:46: Exactly.

00:13:47: Don't use new tech to speed up old inefficiencies.

00:13:50: That's a fantastic summary of the operational challenge right now.

00:13:53: And looking forward, Kevin Thomas made a prediction that human thinking will be needed less and less for routine tasks as AI agents just become a normal part of our workflow.

00:14:02: And this brings up a really provocative question from Mila Shigoy.

00:14:05: Okay, let's hear it.

00:14:06: If AI becomes the best at drafting, at synthesizing, at all that coordination work, and our job as humans becomes being the architects of intent, setting the goals and the constraints, what is the single most important human skill that product leaders need to protect and refine in this new era?

00:14:24: The architect of intent.

00:14:26: I like that.

00:14:26: That's a powerful challenge to think about.

00:14:29: If you enjoyed this deep dive, new episodes drop every two weeks.

00:14:32: Also, check out our other editions on ICT and tech, artificial intelligence, cloud, sustainability and green ICT, defense tech, and health tech.

00:14:40: Thank you for diving deep with us.

00:14:41: And don't forget to subscribe to ensure you catch the next deep dive.
