Best of LinkedIn: Digital Products & Services CW 04/05
Show notes
We curate the most relevant posts about Digital Products & Services on LinkedIn and regularly share key takeaways.
This edition collectively explores the maturation of the Product Operating Model and the structural integration of Artificial Intelligence into product workflows. Practical insights highlight the necessity of shifting from project-centric delivery to continuous product discovery, emphasising that true organizational alignment requires clear decision-making systems rather than mere adherence to frameworks. Expert contributors argue that AI functions as a catalyst for rapid learning and prototyping rather than a replacement for human judgment, demanding new competencies in managing AI agents, conducting error analysis, and distinguishing between generic outputs and functional designs. Leaders are urged to cultivate "product taste" alongside strategic sense, using metaphors to simplify complexity and moving beyond "managing up" to foster genuine team buy-in. The collection also provides tactical resources, including AI product playbooks, experimentation guides, and specific methodologies like the Toggle Method, to help teams navigate the technical and cultural demands of 2026. Ultimately, the texts advocate for a disciplined builder mindset that prioritises rigorous validation and problem framing over administrative process, ensuring that technology serves real user needs.
This podcast was created via Google Notebook LM.
Show transcript
00:00:00: This episode is provided by Thomas Allgaier and Frennus based on the most relevant LinkedIn posts about digital products and services in calendar weeks 04 and 05.
00:00:10: Frennus is a B2B market research company helping enterprises gain the market, customer, and competitive insights needed to drive customer-centric and cost-efficient product development.
00:00:20: Welcome back to the deep dive.
00:00:22: We've got a really interesting stack of sources today, covering late January and early February twenty twenty-six.
00:00:29: Yeah.
00:00:30: And looking through them, I feel like we're, I don't know, maybe turning a corner.
00:00:33: That's exactly it.
00:00:34: It feels like the conversation is finally shifting.
00:00:36: Shifting how?
00:00:37: Because for what, the last two years, it's just been this endless cycle of, look at this cool AI
00:00:42: demo.
00:00:42: Right.
00:00:43: Check out this magic trick.
00:00:44: And the sources from these past couple weeks, they suggest that the shiny new toy hangover is finally clearing up.
00:00:51: Okay, so we're moving from the magic to the mechanics.
00:00:53: The mechanics.
00:00:54: That's the perfect word for it.
00:00:55: It's less about, can AI do this?
00:00:57: And much, much more about, okay, how do we actually build a sustainable operational business with this?
00:01:03: Which I have to say is a relief.
00:01:05: But it also sounds like the hard part.
00:01:07: The plumbing.
00:01:08: It is the plumbing.
00:01:09: Yeah.
00:01:09: But that's where the value is.
00:01:11: We've got insights here on everything from, you know, vibe coding to the real nitty gritty of product ops.
00:01:17: And the common thread seems to be rigor.
00:01:20: Rigor.
00:01:21: Whether it's AI, leadership, or strategy, it seems like the free ride of just winging
00:01:26: it is over.
00:01:27: Okay, so let's start with that term then.
00:01:29: Vibe coding.
00:01:30: I've seen it all over LinkedIn, all over Twitter.
00:01:33: It sounds like something you'd do in a hammock, not in a serious engineering department.
00:01:37: It does sound super casual, doesn't it?
00:01:39: I mean, in the general sense, vibe coding is this idea of using AI to generate code based on a loose idea or vibe.
00:01:47: You just say, make me a retro landing page and poof, it exists.
00:01:50: Exactly.
00:01:51: And that usually terrifies traditional engineers because the code underneath...
00:01:54: is a mess.
00:01:55: It's a complete mess, right?
00:01:56: It might work, but you do not want to look under the hood.
00:01:59: It's unmaintainable.
00:02:00: Precisely.
00:02:01: But this is where we get our first really big insight.
00:02:03: It's from Lenny Rachitsky's newsletter.
00:02:05: And he was focusing on a workflow by Zivi Arnovitz.
00:02:08: And Arnovitz completely flips this idea on its head.
00:02:11: He does.
00:02:12: He argues that effective AI coding isn't vibey at all.
00:02:15: It's actually rooted in extreme structure.
00:02:19: And this guy, Zivi, he's a non-technical PM, but he's shipping code like a senior dev.
00:02:25: I was reading about his process.
00:02:27: He uses these things he calls slash commands.
00:02:29: It kind of reminded me of old school slack
00:02:31: commands.
00:02:32: It's exactly that.
00:02:33: So instead of treating the LLM like a chat buddy, he treats it like a junior developer who needs very, very specific management.
00:02:39: He has these preset commands like create issue or plan or even peer review.
00:02:44: Let's
00:02:44: pause on that because I think that's a huge shift for anyone listening.
00:02:47: When he types create issue, what's actually happening behind the scenes?
00:02:50: So he's not just asking for code.
00:02:52: That command triggers this really rigid template in the AI's instructions.
00:02:57: It forces the AI to state the problem, identify the files, propose a solution before it writes a single line of code.
00:03:05: It stops it from just making something up.
00:03:06: It stops the hallucination.
00:03:08: But the one that really got me was peer review.
00:03:11: That's the killer feature.
00:03:12: Absolutely.
00:03:13: Because when you type peer review, you are forcing the AI to switch personas.
00:03:18: It stops being the builder.
00:03:20: who just wants to get it done.
00:03:21: And it
00:03:22: becomes the critic.
00:03:23: Exactly.
00:03:24: Which is such a smart workaround,
00:03:25: because these models, they're designed to be agreeable, you know, they want to please you.
00:03:30: They rarely say, wait, that's a terrible idea.
00:03:33: So you're basically encoding best practices right into the workflow, giving it a framework to critique its own work.
00:03:40: And here's the best part.
00:03:42: If the AI messes up.
00:03:44: He doesn't just hit regenerate and hope for a better dice roll.
00:03:46: No, he debugs the process.
00:03:48: Yes.
00:03:49: He asks the AI, what in my instructions caused you to make this mistake?
00:03:54: And then he updates the slash command so it doesn't happen again.
00:03:57: That
00:03:57: ties right into this whole slop problem.
00:03:59: There's so much buggy generic AI code out there.
00:04:03: And the insight is the slop isn't an AI problem.
00:04:06: It's a people problem.
00:04:07: Lazy prompts get lazy results.
00:04:09: Zivi actually calls his approach exposure therapy.
00:04:12: Exposure therapy?
00:04:13: Yeah.
00:04:14: He says to start with just a project context doc in ChatGPT before you even touch a real coding tool.
00:04:20: You have to train yourself to give the right context first.
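To make the slash-command idea concrete, here is a minimal sketch in Python of what such preset templates could look like. The command names, the template wording, and the build_prompt helper are illustrative assumptions for these notes, not Zivi Arnovitz's actual files or tooling.

```python
# Hypothetical slash-command templates, loosely modelled on the workflow
# described above. Everything here is an illustrative assumption.

SLASH_COMMANDS = {
    "create_issue": (
        "You are a careful senior engineer. Before writing any code:\n"
        "1. State the problem in one paragraph.\n"
        "2. List the files you expect to touch and why.\n"
        "3. Propose a solution and wait for approval.\n"
        "Task: {task}"
    ),
    "plan": (
        "Break the approved issue into small, ordered steps. For each step, "
        "name the file, the change, and how you would verify it.\n"
        "Issue: {task}"
    ),
    "peer_review": (
        "Switch personas: you are now a sceptical reviewer, not the author. "
        "List bugs, missing edge cases, and anything unmaintainable in the "
        "work below. Do not praise the code.\n"
        "Work: {task}"
    ),
}


def build_prompt(command: str, task: str) -> str:
    """Expand a slash command into the full, structured prompt sent to the model."""
    return SLASH_COMMANDS[command].format(task=task)


if __name__ == "__main__":
    # 'create_issue' forces problem framing before any code is generated.
    print(build_prompt("create_issue", "Add CSV export to the reports page"))
```

In a setup like this, the "debug the process" habit means that when the model gets something wrong, you edit the offending template rather than the output, so the same mistake is less likely to recur.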
00:04:23: It's fascinating.
00:04:24: But you know, on one hand, we have this super structured text heavy workflow that's empowering builders.
00:04:29: On the other hand.
00:04:30: We have a source here, Manuel Silva, who's basically sounding the alarm.
00:04:34: This is such a necessary counterpoint.
00:04:36: He says we're seeing a forty-year regression in design.
00:04:39: Think about it, we spent decades moving away from the command line, you know, MS-DOS, typing obscure codes, to graphical interfaces, buttons, sliders, visual cues.
00:04:49: And now you open a so-called modern AI app.
00:04:53: And what is it, a blinking cursor in a text box?
00:04:55: Tell me what you want.
00:04:56: Right.
00:04:57: And Silva's point is that we're calling this futuristic, but for an average user, it's a huge cognitive load.
00:05:03: We're making our parents learn prompt engineering just to edit a photo.
00:05:07: It's kind
00:05:07: of lazy design, isn't it?
00:05:08: You're just delegating the work to the user instead of designing a clear button that says remove background.
00:05:13: It is the command line all over again, just with, you know, better autocorrect.
00:05:18: A great interface should show you what's possible.
00:05:21: An empty textbox just assumes you already know.
00:05:23: But wait, if we follow this thread to its logical end, maybe we get to a place where the interface doesn't matter at all because humans aren't even using it.
00:05:32: The no UI future.
00:05:33: Yeah, Akash Gupta had this prediction about agent to agent negotiation.
00:05:37: that kind of blew my mind.
00:05:39: The scenario he describes is so compelling.
00:05:41: Yeah.
00:05:41: Imagine you want a better interest rate on your savings account.
00:05:44: Ugh.
00:05:45: The worst.
00:05:45: I have to log in, navigate twelve menus, maybe get on the phone.
00:05:49: In the agent to agent future, you just tell your personal AI, get me a better rate.
00:05:53: And your agent pings the bank's agent through an API.
00:05:56: So it's machine to machine.
00:05:58: No humans.
00:05:59: Strictly machine to machine.
00:06:01: They negotiate terms, compare offers, and just report back to you.
00:06:04: I got you four point five percent.
00:06:06: You never touched a screen.
00:06:07: There was no user experience, not visually anyway.
00:06:10: That has massive implications for product managers.
00:06:13: If I'm a PM at that bank and my whole world is button placement and color schemes, in this future...
00:06:19: You're invisible.
00:06:20: If your product is just a pretty wrapper on a generic service, you're done.
00:06:25: PMs need to start writing PRDs, product requirement documents, for APIs.
00:06:31: How does my agent talk to your agent?
00:06:33: What are the negotiation parameters?
00:06:34: It shifts the whole discipline from like psychology of design to economics of APIs.
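As a rough illustration of what "PRDs for APIs" could mean, here is a toy Python sketch of a personal agent polling hypothetical bank agents and picking the best offer. The RateOffer schema, the bank names, and the negotiation rule are invented for this example and do not reflect any real banking API.

```python
# Toy agent-to-agent negotiation: the user's agent asks each (hypothetical)
# bank agent for an offer and keeps the best one. No screens involved.
from dataclasses import dataclass


@dataclass
class RateOffer:
    bank: str
    rate_percent: float
    minimum_balance: float


def bank_agent_quote(bank: str, balance: float, competitor_best: float) -> RateOffer:
    """Stand-in for a bank agent endpoint: returns an offer, nudged up slightly
    when the customer's agent reports a better competing rate."""
    base = {"Alpha Bank": 4.1, "Beta Bank": 4.3}.get(bank, 3.9)
    rate = max(base, min(competitor_best + 0.1, base + 0.4))
    return RateOffer(bank=bank, rate_percent=round(rate, 2), minimum_balance=1000.0)


def personal_agent_negotiate(balance: float, banks: list[str]) -> RateOffer:
    """The user's agent: poll each bank agent, feed back the best rate so far,
    and return the winning offer."""
    best = RateOffer(bank="current account", rate_percent=3.5, minimum_balance=0.0)
    for bank in banks:
        offer = bank_agent_quote(bank, balance, best.rate_percent)
        if offer.rate_percent > best.rate_percent and balance >= offer.minimum_balance:
            best = offer
    return best


if __name__ == "__main__":
    print(personal_agent_negotiate(5000.0, ["Alpha Bank", "Beta Bank"]))
```

The "product" a PM specifies here is the offer schema and the negotiation parameters, not a button or a colour scheme.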
00:06:40: That makes me wonder about brand loyalty.
00:06:41: If my agent is just getting me the best math, how do you build a brand?
00:06:44: You can't charm an algorithm with a logo.
00:06:46: You can't, which is a huge shift.
00:06:49: And it brings us to our second big theme.
00:06:51: the human element.
00:06:52: Because if AI is doing the coding and negotiating, what's left for us?
00:06:56: Judgment, taste.
00:06:58: That's the frontier.
00:06:59: Elaine Barsoom had a brilliant post on this.
00:07:01: She says, scaling with AI is a training problem, but not the way we usually think.
00:07:05: Right,
00:07:05: we're not talking about training the model on GPUs.
00:07:07: No, we have to train the humans.
00:07:10: And she makes this crucial distinction.
00:07:12: You train AI on signals.
00:07:14: Clicks, scrolls, purchases, behavioral data.
00:07:18: But you have to train humans on taste.
00:07:20: which is the judgment of what actually matters.
00:07:22: I love the phrase she used.
00:07:24: Humans retain decision authority and AI keeps input authority.
00:07:28: That's the whole game right there.
00:07:30: The AI gives you the inputs.
00:07:32: Hey, forty percent of users click this, but if you just let the AI make the decision based on that,
00:07:37: you get product bloat.
00:07:38: You get
00:07:39: bloat.
00:07:39: AI doesn't know how to say no.
00:07:42: It just optimizes the metric.
00:07:43: The human has to be the editor.
00:07:45: We have to look at the data and say, yes, users are asking for this, but it ruins the product's simplicity, so we're not doing it.
00:07:51: But taste is such a fuzzy word.
00:07:53: It feels subjective.
00:07:54: How do you pin that down?
00:07:55: Well,
00:07:56: this is where Aatir Abdul Rauf had a great piece distinguishing between product sense and product taste.
00:08:01: OK, this is useful.
00:08:02: Break that down.
00:08:03: So he defines product sense as knowing what to build, solving the right problem.
00:08:08: His example was Slack.
00:08:09: During the pandemic,
00:08:10: right?
00:08:10: Exactly.
00:08:11: Users were feeling isolated.
00:08:12: A purely functional PM or an AI would say, okay, people want to see each other.
00:08:17: Let's add a start Zoom button everywhere.
00:08:19: Which
00:08:19: is just more meetings.
00:08:23: The last thing anyone wanted.
00:08:25: Right.
00:08:26: But someone with product sense realizes the problem isn't a lack of meetings.
00:08:30: It's a lack of a hallway.
00:08:32: That spontaneous low friction connection.
00:08:35: And so they build huddles.
00:08:37: That's product sense, so what's product taste?
00:08:39: Taste is about how it feels, the polish, the intuition.
00:08:43: Rauf's example here was Loom.
00:08:46: Loom noticed users were messing up their recordings and getting frustrated.
00:08:49: A standard solution would be just add a delay button.
00:08:53: Functional, but clunky, kills your flow.
00:08:56: But product taste is realizing the user has performance anxiety.
00:08:59: They're vulnerable.
00:09:01: So Loom adds a quick restart button right next to the stop button.
00:09:03: One click, wipes the take, restarts the countdown.
00:09:06: It
00:09:06: feels like the tool is on your side.
00:09:07: That's taste.
00:09:08: And you can't really prompt an AI to have that kind of empathy.
00:09:11: Not yet, anyway.
00:09:12: Which is why leadership has to change, too.
00:09:15: Roshan Gupta had a post about moving from a product manager to a product leader.
00:09:19: It's a tough transition.
00:09:20: As a PM, your whole value is being the best problem solver in the room.
00:09:24: But as a leader, you have to stop solving the problems yourself.
00:09:27: You have to become a builder of problem solvers, which is terrifying.
00:09:32: You have to let go.
00:09:33: You own the narrative, not the roadmap.
00:09:36: And speaking of narratives, Stephanie Liu had this painfully true post about the illusion of decisiveness.
00:09:42: Oh,
00:09:42: this one hits close to home for so many companies.
00:09:44: It's that moment in the boardroom, everyone agrees on the big bold vision, everyone nods.
00:09:48: The
00:09:48: slide deck looks amazing.
00:09:50: And then Monday morning comes.
00:09:51: And sales is chasing a deal that contradicts the vision.
00:09:55: Engineering is drowning in tech debt, and marketing is off on a tangent.
00:10:00: Everyone is aligned on the vision, but totally misaligned on execution.
00:10:04: And
00:10:04: Liu points out it's the gray dots that kill you.
00:10:06: The
00:10:07: gray dots.
00:10:08: Can we just add this request?
00:10:12: The favor for a big client.
00:10:14: The things that aren't on any roadmap, but they suck up forty percent of the team's energy.
00:10:18: Okay,
00:10:18: so how do we fix that?
00:10:19: This moves us into our third theme, right?
00:10:21: Operations, discovery, the nitty gritty.
00:10:24: The answer seems to be product ops.
00:10:27: It is.
00:10:27: And for a long time, product ops was seen as just, you know, bureaucracy.
00:10:31: More paperwork.
00:10:32: Right.
00:10:33: But Phil Evans makes this great comparison to DevOps.
00:10:36: We don't see DevOps as bureaucracy.
00:10:39: We see it as essential infrastructure for shipping code.
00:10:42: ProductOps is the infrastructure for shipping decisions.
00:10:45: It's the tooling and process that lets the PM focus on strategy instead of fighting fires.
00:10:50: And Ed Biden added some really practical advice here.
00:10:52: He says most leaders have no idea if a team is off track until it's too late.
00:10:56: The big launch day surprise.
00:10:58: The
00:10:58: worst surprise.
00:11:00: So he suggests a four-step review rhythm: kickoff, solution review, launch readiness, and impact review.
00:11:07: But the one most teams skip is the solution review.
00:11:10: And why is that one so critical?
00:11:12: Because it happens before a line of code is written.
00:11:14: When the solution is just a sketch or a prototype, that's the cheapest possible time to fix a misunderstanding.
00:11:20: Before
00:11:21: you have three weeks of engineering sunk cost and emotional attachment to the code.
00:11:25: Exactly.
00:11:26: But all this process can't come at the expense of the customer.
00:11:29: Omar Salem had this great quote, a product idea without validation is just a well-packaged guess.
00:11:35: I love that.
00:11:35: Validation isn't a phase you check off, it's a principle.
00:11:38: You have to challenge your assumptions.
00:11:40: not just ask users what they
00:11:42: want.
00:11:42: And speaking of challenging assumptions, Michael Goitin shared a strategy test called the Toggle Method.
00:11:47: I thought this was a brilliant mental model.
00:11:50: It's a tough one.
00:11:51: Yeah.
00:11:52: So every strategy has a where to play, your customers, and a how to win, your differentiator.
00:11:57: Okay.
00:11:58: Goitin says to take your how to win and toggle your where to play.
00:12:03: to a competitor's customer base.
00:12:05: Let's try it.
00:12:06: Say my where to play is creative agencies and my how to win is a simple intuitive interface.
00:12:13: Okay, now toggle the customer.
00:12:14: Let's say accountants.
00:12:16: Does simple intuitive interface work as a differentiator for accountants?
00:12:20: Well,
00:12:21: yes.
00:12:22: Everyone wants a simple interface.
00:12:23: Exactly.
00:12:24: Your strategy failed the test.
00:12:26: If your differentiator works just as well for a totally different segment, it's not a differentiator, it's just table stakes.
00:12:32: So a real strategy would be something like, we visualize timelines in a way that only makes sense for video production workflows.
00:12:38: Yes.
00:12:39: Because if you showed that to an accountant, they would hate it.
00:12:42: That means you've made a real choice.
00:12:43: You've sacrificed one market to win another.
00:12:46: I feel like a lot of PowerPoint strategies would fail that test.
00:12:49: Most
00:12:49: would.
00:12:50: Okay.
00:12:50: Before we wrap up, I want to get to that technical nugget you found.
00:12:53: This is for the e-commerce folks listening.
00:12:54: It's from John Roberts about how LLMs like ChatGPT actually see products on Shopify.
00:13:00: Right.
00:13:01: This is a perfect example of mechanics over magic.
00:13:04: We just assume AI knows things.
00:13:06: But Roberts explains that these LLMs often prioritize reading the body HTML of a product page.
00:13:13: And they do not necessarily read the SEO meta descriptions we've all spent years optimizing.
00:13:18: That's
00:13:18: huge, right?
00:13:19: The meta description is often just marketing fluff.
00:13:21: The body HTML is where the rich details are: materials, the use cases, the sizing.
00:13:25: That is the data the LLM needs to answer a real question.
00:13:28: So
00:13:28: the advice is, stop writing for a search engine spider from twenty fifteen.
00:13:33: Write detailed, rich descriptions in the main body text.
00:13:37: If you want an AI to recommend your shirt when someone asks, what's a good shirt for a hot climate?
00:13:43: The answer, one hundred percent organic cotton, breathable weave, needs to be in that body text.
00:13:50: If it's buried in a metatag, the AI might just miss it.
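To make that concrete, here is a rough sketch of pushing a detail-rich description into a Shopify product's body_html via the Admin REST API. The shop domain, access token, product ID, and API version are placeholders, and the exact endpoint and auth details should be checked against Shopify's current documentation before relying on anything like this.

```python
# Rough sketch: put the rich, LLM-readable details in body_html instead of
# burying them in an SEO meta field. All credentials below are placeholders.
import requests

SHOP = "your-store.myshopify.com"   # placeholder shop domain
TOKEN = "shpat_..."                 # placeholder Admin API access token
PRODUCT_ID = 1234567890             # placeholder product ID
API_VERSION = "2024-01"             # check your store's supported version

body_html = (
    "<p>Lightweight shirt for hot climates: 100% organic cotton, breathable "
    "open weave, relaxed fit. Machine washable at 30°C. Sizes S-XXL; "
    "model is 183 cm tall and wears M.</p>"
)

resp = requests.put(
    f"https://{SHOP}/admin/api/{API_VERSION}/products/{PRODUCT_ID}.json",
    headers={"X-Shopify-Access-Token": TOKEN, "Content-Type": "application/json"},
    json={"product": {"id": PRODUCT_ID, "body_html": body_html}},
)
resp.raise_for_status()
print(resp.json()["product"]["body_html"][:80])
```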
00:13:53: It all comes full circle back to the beginning.
00:13:55: Whether it's Zivi's slash commands or writing product descriptions, you have to understand how the system actually works, not just how you wish it did.
00:14:04: The era of vibes is ending.
00:14:06: The era of understanding the system is here.
00:14:08: So if you're listening, I guess the question to ask yourself is, Are you building for the shiny toy version of the world?
00:14:14: Or are you building for the reality of agents, rigor, and structured workflows?
00:14:18: And maybe one final thought to leave with.
00:14:19: We talked about taste being this uniquely human skill.
00:14:22: But as we get better at encoding our preferences into these systems, into these slash commands, are we just teaching the AI to have taste?
00:14:29: At what point does the human element just become another dataset we hand over?
00:14:33: That is a slightly terrifying thought to end on.
00:14:36: but a necessary one.
00:14:38: If you enjoyed this episode, new episodes drop every two weeks.
00:14:41: Also check out our other editions on ICT and tech, artificial intelligence, cloud, sustainability, and green ICT, defense tech, and health
00:14:49: tech.
00:14:50: Thanks for listening.
00:14:51: Don't forget to subscribe for more insights from the next calendar week.
00:14:53: See you then.