Best of LinkedIn: Digital Products & Services CW 02/03

Show notes

We curate the most relevant posts about Digital Products & Services on LinkedIn and regularly share key takeaways.

This edition collectively explores the evolving Product Operating Model and the profound impact of Artificial Intelligence on product management. Practical insights highlight the shift from traditional feature-driven delivery to outcome-based strategies, emphasising that organisational success requires cross-functional alignment and clear ownership. Expert contributors argue that AI is infrastructure rather than a mere feature, demanding new skills such as error analysis, automated prototyping, and human-in-the-loop validation. Leaders are urged to move beyond industry jargon and complex frameworks to focus on building trust, protecting strategic focus, and simplifying communication. The collection also provides tactical resources, including podcasts, training cohorts, and readiness assessments, to help teams navigate the technical and cultural transitions of 2026. Ultimately, the texts advocate for a builder mindset that prioritises customer value and rigorous experimentation over administrative busywork.

This podcast was created via Google Notebook LM.

Show transcript

00:00:00: This episode is provided by Thomas Allgaier and Frennus based on the most relevant LinkedIn posts about digital products and services in calendar weeks oh-two and oh-three.

00:00:10: Frennus is a B to B market research company helping enterprises gain the market, customer, and competitive insights needed to drive customer-centric and cost-efficient product development.

00:00:20: It's great to be back.

00:00:20: We're doing a really specific deep dive today looking at digital products and services at the start of twenty twenty six.

00:00:27: It really feels like the dust is finally settling.

00:00:30: The last couple of years were just this frantic AI gold rush.

00:00:35: Everyone was just scrambling to slap a generative sticker on their product.

00:00:38: Right,

00:00:38: but early twenty-twenty-six feels different.

00:00:41: Yeah.

00:00:41: We're seeing this pretty clear shift from hype to, well, reliability.

00:00:46: The talk isn't about what AI could do anymore.

00:00:48: It's about what it's actually doing in production and, frankly, how often it's breaking.

00:00:52: So today we've got a lot to get through.

00:00:54: We're talking agentic workflows, the hard reality of what some are calling vibe coding.

00:00:59: And some pretty tough truths about how product organizations are really structured versus how we like to pretend they are.

00:01:05: Let's

00:01:05: kick off with that big shift in AI.

00:01:07: For so long, AI was just this shiny feature.

00:01:10: You know, you'd have your app and then the ask AI button tucked away in the corner.

00:01:14: The bolt-on approach.

00:01:15: Yeah.

00:01:16: But based on the sources we're seeing, that era is just over.

00:01:21: Jeffrey Ollie makes this really compelling argument.

00:01:23: He says AI isn't a feature anymore.

00:01:26: It's becoming the infrastructure.

00:01:27: He has a specific phrase for that, right?

00:01:29: Moving from systems of record to systems of action.

00:01:33: Can we unpack that?

00:01:34: Because system record sounds like pretty much every piece of enterprise software out there.

00:01:38: Exactly.

00:01:39: Think of your CRM.

00:01:40: Its job is to just sit there and hold data.

00:01:42: It's a passive warehouse.

00:01:43: You put data in, you take it out.

00:01:45: Right.

00:01:45: Jeffrey Ollie's point is that AI is turning software into a system of action.

00:01:50: It doesn't just hold the data.

00:01:52: It reasons over it.

00:01:53: And it actually does something.

00:01:55: It executes the work.

00:01:56: And we are seeing some, I mean, staggering metrics on what that execution can look like.

00:02:01: Martin McDonough showed a case study that really stood out.

00:02:03: Oh,

00:02:03: the service design project one?

00:02:04: Yes.

00:02:05: They were analyzing this huge digital portfolio, which is normally just a complete slog.

00:02:10: It is.

00:02:11: It's weeks of reading documents, mapping user flows.

00:02:15: But Martin's team... used AI for that heavy lifting, the initial analysis.

00:02:19: And the results were pretty incredible.

00:02:22: They cut the workload by eighty percent.

00:02:24: They went from twenty days of pure analysis down to just

00:02:27: four.

00:02:27: That's massive.

00:02:29: But here's my usual skepticism with stats like that.

00:02:32: Does that just mean the company gets to fire people or does the work actually improve?

00:02:35: Efficiency can be a loaded term.

00:02:37: That's the key question.

00:02:39: But in this case, it was explicitly about reallocating their time.

00:02:43: They took those sixteen days they saved and used them for synthesis, for actually talking to the client, understanding the nuance, and solving the real problem.

00:02:51: So less time reading, more time thinking.

00:02:53: Exactly.

00:02:54: Okay, so if we're handing off that reading and processing to an AI, we have to be able to trust it's doing a good job.

00:03:00: Right.

00:03:00: And this brings us to a major technical theme from the sources, AI evaluations, or evals.

00:03:06: This is the piece that's missing for so many teams.

00:03:09: Akash Gupta has been very vocal about this.

00:03:12: He basically argues that product managers need to stop treating AI like a magic box.

00:03:17: And start treating it like an engineering problem.

00:03:19: Right.

00:03:19: You can't just prompt and pray.

00:03:21: Prompt and pray.

00:03:22: I love that.

00:03:23: It's definitely the default strategy for so many people.

00:03:25: You tweak the prompt, it looks good once, and you ship it.

00:03:28: And

00:03:28: then a user asks something slightly different, and the whole thing just falls apart.

00:03:32: So what does Akash say we should be doing instead?

00:03:35: He gets very tactical.

00:03:36: First, he says to stop using these generic helpfulness scores; they're too subjective.

00:03:42: Was this helpful?

00:03:43: Yeah, that's a vanity metric.

00:03:45: A

00:03:45: complete vanity metric.

00:03:46: He advocates for what he calls binary judges.

00:03:49: Binary

00:03:50: judges.

00:03:50: Yeah.

00:03:51: Pass or fail.

00:03:51: Yeah.

00:03:52: Did the AI pull the correct document?

00:03:55: Yes or no?

00:03:56: Mm-hmm.

00:03:56: Did it hallucinate a number?

00:03:58: Yes or no?

00:03:59: Business logic is usually binary, so you need automated tests that check these constraints.

00:04:04: every single time you change a prompt.

00:04:06: It's basically unit testing for language models.

00:04:08: Precisely.

00:04:09: If you change the prompt to make the tone a bit friendlier, you need to know instantly if you just broke the core logic.
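To make the "unit testing for language models" idea concrete, here is a minimal sketch of what binary judges might look like in practice. The response shape, the two checks, and the sample data are all illustrative assumptions, not from the episode; a real eval suite would run checks like these over logged traces on every prompt change.

```python
import re

# Each judge is a strict pass/fail check, not a fuzzy "helpfulness" score.

def judge_cited_document(response: dict, expected_doc: str) -> bool:
    """Pass only if the AI pulled the correct source document."""
    return expected_doc in response.get("sources", [])

def judge_no_invented_numbers(response: dict, allowed_numbers: set) -> bool:
    """Fail if the answer contains a number absent from the source data."""
    found = set(re.findall(r"\d+(?:\.\d+)?", response["answer"]))
    return found <= {str(n) for n in allowed_numbers}

def run_evals(response: dict, expected_doc: str, allowed_numbers: set) -> dict:
    """Run every binary judge, like a unit test suite for the prompt."""
    return {
        "correct_document": judge_cited_document(response, expected_doc),
        "no_hallucinated_numbers": judge_no_invented_numbers(response, allowed_numbers),
    }

# Re-run after every prompt tweak to catch regressions instantly.
resp = {"answer": "Revenue grew 12 percent.", "sources": ["q3_report.pdf"]}
results = run_evals(resp, "q3_report.pdf", {12})
```

The point of the binary framing is that a friendlier-tone prompt change either keeps all judges green or visibly breaks one, with no subjective scoring in between.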

00:04:16: Okay, that makes sense.

00:04:17: But then he adds another layer on top of that.

00:04:20: Well, I think a lot of PMs are going to hate this part.

00:04:22: He insists on manual error analysis.

00:04:25: He says you have to sit down and read, one hundred traces.

00:04:28: A hundred conversations, line by line.

00:04:30: Yeah, a hundred actual AI conversations.

00:04:33: Reading through a hundred logs of JSON data, that sounds like absolute torture.

00:04:37: It does sound tedious, but his point is, you spot patterns that you would never, ever find on a dashboard.

00:04:43: You start to see why the model is getting confused.

00:04:45: You realize, oh, it fails every time a user asks about pricing in euros.

00:04:49: You

00:04:49: can't fix what you don't actually see.

00:04:51: You can't fix what you don't read.

00:04:53: Okay, so we've got the infrastructure, we've got the evaluation methods.

00:04:57: Now let's talk about what we're actually building.

00:04:59: The buzzword for twenty-twenty-six is, without a doubt, agents.

00:05:04: But Paul Wehran points out that most people are just building slightly better chatbots and calling them agents.

00:05:09: There's a huge difference.

00:05:10: Paul lays out a seven-step process for building real agents, and it goes way beyond just text in, text out.

00:05:18: Starts with a system.

00:05:18: prompt, sure, but then you have to pick a reasoning model.

00:05:21: He mentions the GPT-5 family.

00:05:23: And this is the crucial part.

00:05:25: Give it access to tools.

00:05:27: Tools, as in it can actually check a calendar or query a database or maybe send an email.

00:05:33: Exactly.

00:05:33: An agent has memory.

00:05:35: It can orchestrate a plan.

00:05:37: It doesn't just answer a question.

00:05:38: It solves a multi-step

00:05:40: problem.

00:05:40: It has agency.

00:05:42: but this is where we hit that classic wall.

00:05:44: You can build the smartest agent in the world, but Raina Monforti brings up this very human problem.

00:05:50: She says the technology isn't the blocker.

00:05:52: It's trust.

00:05:52: It's trust.

00:05:53: Yeah,

00:05:53: this is a fascinating story.

00:05:55: She shared an example of an ops team.

00:05:57: Management rolls out this fancy new LLM tool to automate their workflow.

00:06:01: Technically, it's perfect, but adoption is totally stuck at twenty percent.

00:06:05: The team just wouldn't use it.

00:06:07: Were they worried it was going to hallucinate and make them look bad or take their jobs?

00:06:11: A bit of both, I think, but mostly just a fear of the black box.

00:06:14: They didn't understand how it worked.

00:06:16: So the fix wasn't technical at all.

00:06:18: They changed the workflow to include human in the loop steps.

00:06:22: So they put a person at a key approval point.

00:06:24: Yes.

00:06:25: They designed specific points where a human had to review and approve the AI's work.

00:06:30: And what happened to adoption?

00:06:32: Shot up to one hundred percent.

00:06:34: By making the automation less automated, they actually made it usable.

00:06:38: It gave the team psychological safety and they felt like they were still in control.

00:06:41: That connects perfectly with Jacqueline Reinhart's point on security.

00:06:45: Because if we're building these agents, that can take real action.

00:06:49: delete files, read emails.

00:06:52: Trust isn't just a feeling, it's an architecture.

00:06:55: You can't just hope the AI won't go rogue.

00:06:58: Jacqueline argues for a hybrid architecture.

00:07:00: Yeah.

00:07:01: Use the gen AI for the reasoning part, the creative thinking.

00:07:04: But you wrap it in what she called deterministic AI.

00:07:06: Right.

00:07:07: Deterministic meaning hard-coded rules, old-fashioned code.

00:07:10: You don't ask the LLM, hey, do you think this user should see this sensitive file?

00:07:14: No, you check the permissions database.

00:07:16: Exactly.

00:07:17: You put the AI in a straitjacket of strict rules.

00:07:20: That's how you prevent things like prompt injection.

00:07:22: You have to separate the brain from the keys to the castle.
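Separating "the brain from the keys" can be sketched as a deterministic gate around whatever the model proposes. The permissions table, action names, and functions below are illustrative assumptions, not a real library; the point is only that the authorization decision is plain code, never the LLM.

```python
# Hard-coded permissions: the deterministic part of the hybrid architecture.
PERMISSIONS = {
    "alice": {"read:public"},
    "bob": {"read:public", "read:finance"},
}

def allowed(user: str, action: str) -> bool:
    """Deterministic check against the permissions table -- the LLM is
    never asked whether access should be granted."""
    return action in PERMISSIONS.get(user, set())

def execute_agent_action(user: str, proposed_action: str, do_action):
    """The model may *propose* any action, including one smuggled in via
    prompt injection; only actions that pass the gate are executed."""
    if not allowed(user, proposed_action):
        return {"status": "blocked", "action": proposed_action}
    return {"status": "ok", "result": do_action()}

# An injected request for finance data is blocked for alice:
result = execute_agent_action("alice", "read:finance", lambda: "Q3 numbers")
```

Because the gate sits outside the model, a jailbroken prompt can change what the agent asks for, but not what it is permitted to do.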

00:07:25: That feels like a non-negotiable step if agents are going to do any real work.

00:07:29: Now, speaking of work, this whole shift is causing a bit of an identity crisis for product managers.

00:07:36: We've got this new term floating around, vibe coding.

00:07:39: I love this term, vibe coding.

00:07:41: It just, it captures the moment perfectly.

00:07:44: It sounds like something a DJ would do, not a developer, but it's a real thing, right?

00:07:48: Oh, it's very real.

00:07:50: Lenny Rachitsky put a spotlight on this with Xavier Arnovitz from Meta.

00:07:54: Now, Xavi is a non-technical PM.

00:07:56: He says he's scared to even look at a GitHub repo.

00:07:59: But with tools like Cursor and Claude Code, he's building real functional products.

00:08:04: So

00:08:04: the idea is you just tell the AI the vibe you want, make this pop, add a login screen, connect this to Stripe, and it just does it.

00:08:11: It's incredibly powerful for prototyping, but a very strong counterargument is emerging.

00:08:17: Aitir Abdul Raf warns that we're confusing building with engineering.

00:08:21: Okay, what's the distinction he's making?

00:08:23: If the app works, does it really matter how it was made?

00:08:33: But what happens when you need to integrate with some legacy API, or the system architecture needs to scale?

00:08:38: Yeah,

00:08:38: when the app breaks, you can't just tell the AI, the vibe is off, please fix it.

00:08:42: You need to know how to read an error log.

00:08:44: You need to know what a four-oh-four is.

00:08:46: Aitir says PMs need to understand APIs and system design more than ever.

00:08:50: Otherwise, you're just creating a huge mess that a real engineer has to come in and clean up later.

00:08:55: David

00:08:55: Pereira had a great take on this.

00:08:56: He said, vibe coding doesn't fix bad product thinking.

00:09:00: That is the bottom line.

00:09:01: It doesn't matter if you use Python or some magic pattern tool for prototyping.

00:09:06: The code is just the delivery mechanism.

00:09:09: If you don't understand the user problem, you're just building the wrong thing faster than ever before.

00:09:14: You're just scaling bad decisions.

00:09:16: And Jayzer Bratzidev points out this very real danger for AIPMs.

00:09:21: They get so sucked into the building, the tweaking, the debugging.

00:09:24: But

00:09:24: they stop doing strategy.

00:09:26: They become the engineer they are trying to avoid being.

00:09:29: And strategy is always the first thing to go when you're busy firefighting.

00:09:33: Which is a perfect segue to our next big theme, the organization itself.

00:09:38: Because you can have the best PMs, the smartest AI, but if the company structure is broken, it just doesn't matter.

00:09:45: It really doesn't.

00:09:46: Yeah.

00:09:46: And we're seeing some interesting pushback against that ideal product model.

00:09:51: For years, the story was all about empowered teams.

00:09:54: Give the team a goal and get out of the way.

00:09:56: Right.

00:09:56: But Ant Murphy points out

00:09:58: that in twenty twenty five, and now in twenty twenty six, we've seen this huge swing back to what he calls founder mode.

00:10:04: Founder mode.

00:10:05: That sounds like micromanagement with better branding.

00:10:07: It kind of is.

00:10:08: It's executives taking top-down control.

00:10:10: They're basically saying, I don't care about your discovery process.

00:10:13: I want this feature shipped by Tuesday.

00:10:16: It's a reaction to feeling like they've lost control.

00:10:18: So the autonomy of the product team is actually shrinking in some places.

00:10:21: In many places, yes.

00:10:23: The focus shifts entirely to delivery metrics.

00:10:25: Output over outcome.

00:10:27: How many story points did we ship?

00:10:29: Not, did we actually move the needle for the business?

00:10:32: That sounds incredibly depressing for anyone who actually loves building products.

00:10:37: But is there a way to do this right?

00:10:39: How do you implement a good operating model without it just collapsing?

00:10:44: Darren Emory has some very practical advice here.

00:10:46: He says, the biggest mistake is the Big Bang transformation.

00:10:50: You know, the all hands email that says, starting Monday, we are all empowered squads.

00:10:54: And for the next six months, nothing works because no one knows who to talk to.

00:10:58: Exactly.

00:10:59: Darren suggests a pilot.

00:11:01: Start small.

00:11:02: One value stream.

00:11:04: One outcome.

00:11:05: One empowered team.

00:11:06: And give them ninety days.

00:11:08: Prove that this way of working actually works.

00:11:11: Then you expand.

00:11:12: Prove it before you scale it.

00:11:13: Seems logical.

00:11:14: And Sven van Horbeek adds a crucial first step to that.

00:11:16: He says, don't even start with the teams.

00:11:18: Start with the WHY.

00:11:20: Why are we doing this?

00:11:21: Is it to innovate faster, reduce churn?

00:11:24: If you can't state the business reason, you're just rearranging deck chairs.

00:11:27: It's changed for the sake of a new org chart.

00:11:30: And Melissa Apple points out another classic failure mode, which is trying to change the product team in a vacuum.

00:11:36: Oh, this is so common.

00:11:37: You tell PMs, you're mini-CEOs now.

00:11:40: But engineering is still run like a ticket-taking service desk.

00:11:43: And sales is still out there selling features from a roadmap a year in advance.

00:11:47: And the PM is just crushed in the middle.

00:11:49: They get all the responsibility, but none of the actual authority.

00:11:53: Melissa's point is that it's an ecosystem.

00:11:55: If your partners in sales and engineering aren't aligned, the model will fail, period.

00:12:01: You can't be an agile island in a waterfall ocean.

00:12:03: Afonso Malofranco had a great way of putting this.

00:12:06: He warned against the romantic narratives of big tech.

00:12:10: Yes, we all look at Spotify or Google and see these perfect autonomous pods.

00:12:15: But Afonso reminds us that those pods are propped up by a massive support structure, program managers, ops teams, platform teams.

00:12:23: You can't just copy the squad model without the scaffolding.

00:12:26: So how does a company know if they're actually ready for this kind of shift?

00:12:30: Dave Baines offers a health check.

00:12:32: It's basically a diagnostic to see if your day-to-day reality matches your ambition.

00:12:36: It forces some really honest self-reflection.

00:12:39: Okay, let's bring this down to the individual.

00:12:41: We've talked tech, we've talked org charts, but what about the PM at their desk on a Monday morning?

00:12:47: How do they survive all this?

00:12:49: First, they have to reclaim their time.

00:12:52: Stephanie Liu shared a really alarming statistic.

00:12:55: She says, seventy percent of a PM's week is reactive work.

00:12:59: answering Slacks, putting out fires, sitting in meetings they don't need to be in.

00:13:03: Seventy percent.

00:13:04: That leaves almost no time for deep strategic thinking.

00:13:09: Stephanie argues that leadership has to actively protect that focus time.

00:13:13: Maybe two hours of uninterrupted time every other day.

00:13:16: That sounds like a dream, but in one of those founder mode companies, that's got to be a tough sell.

00:13:20: Why aren't you answering my messages?

00:13:21: It's hard, but the alternative is just being a busy fool.

00:13:25: Which brings us to prioritization.

00:13:27: Groshan Gupta has some great advice on handling stakeholders.

00:13:31: The classic struggle.

00:13:32: Sales wants a feature, marketing wants a launch date.

00:13:35: And the PM just says, I'll try.

00:13:36: And "I'll try" is the worst possible answer.

00:13:38: It just erodes your credibility.

00:13:40: Groshan says you have to treat feedback as decision input, not a command.

00:13:44: So how do you say no without, you know, getting fired?

00:13:48: You reframe the question.

00:13:50: When a stakeholder asks for feature X, You don't just say no.

00:13:54: You ask, OK, to make space for X, what are we going to drop?

00:13:57: Or you ask, which of our OKRs does this move forward?

00:14:01: You make them do the trade-off math with you.

00:14:04: Exactly.

00:14:05: You turn it from a simple request into a negotiation about value.

00:14:09: It forces them to see that your time isn't infinite.

00:14:12: Marlon Davis points out that a lot of this confusion just comes from mixing up product management with project management.

00:14:18: A distinction that gets blurred all the time.

00:14:20: Marlon's clarification is simple.

00:14:22: Product owns the value and the direction.

00:14:24: What are we building and why?

00:14:26: Project owns the delivery.

00:14:27: When is it coming and how do we execute?

00:14:29: And

00:14:29: when the PM is forced to become a project manager,

00:14:32: you get products that ship perfectly on time under budget and then fail silently in the market because nobody actually wanted them.

00:14:38: We executed

00:14:39: the wrong plan perfectly.

00:14:40: You got

00:14:40: it.

00:14:41: Finally, Tim Herbig offers a piece of advice that I think applies to, well, everyone in this industry.

00:14:46: Drop the jargon.

00:14:47: Yes,

00:14:47: please.

00:14:48: We hide behind these complex terms.

00:14:50: Hypothesis validation.

00:14:52: Discovery sprints.

00:14:53: Tim says just use plain English instead of we need to run a validation experiment.

00:14:57: Just ask.

00:14:58: How can we find out if we should build this?

00:15:01: It's such a simple, powerful question.

00:15:03: It cuts through all the noise.

00:15:04: Yeah.

00:15:05: And really, I think that sums up the whole theme of these last two weeks.

00:15:08: How do you mean?

00:15:09: Well, think about what we've discussed.

00:15:11: AI evals that use binary yes or no judgments, product models that start with the why instead of the process, and language that drops the jargon.

00:15:22: It's all about a search for clarity.

00:15:24: We're moving away from the hype and trying to get to the hard reality of building things that are actually valuable.

00:15:29: That's a fantastic summary.

00:15:31: We've covered a lot of ground today from AI as systems of action to the dangers of vibe coding and the ongoing struggle for strategy in a founder mode world.

00:15:41: It's a complex landscape for sure, but I feel like the path forward is clearer than it's been in a while.

00:15:45: I couldn't agree more.

00:15:47: And for everyone listening, here's a final thought for you to chew on.

00:15:50: We talked about how AI is taking over the execution, the coding, the analysis, all the grunt work.

00:15:56: If the doing becomes automated, your value as a human is entirely in the deciding.

00:16:01: So the question is, are your decision making skills sharp enough to be your only marketable asset?

00:16:08: That is the uncomfortable question for twenty twenty six.

00:16:10: If you enjoyed this episode, new episodes drop every two weeks.

00:16:14: Also check out our other editions on ICT and tech, artificial intelligence, cloud, sustainability and green ICT, defense tech and health

00:16:21: tech.

00:16:22: Thanks for listening and don't forget to subscribe so you don't miss our next deep dive.
