Best of LinkedIn: Digital Products & Services CW 16/17

Show notes

We curate the most relevant posts about Digital Products & Services on LinkedIn and regularly share key takeaways. We at Frenus support enterprise product teams with feature-by-feature competitive intelligence, enabling them to clearly understand how their products stack up against competitors and make data-driven product decisions. You can find more info here: https://www.frenus.com/usecases/product-feature-benchmarking-and-sales-battle-cards-know-exactly-where-you-win-where-you-lose-and-why

This edition explores the evolving product management landscape in 2026, specifically highlighting the transition toward AI-native operating models. Industry experts argue that foundational skills like meta-prompting and context management are now more critical than simple access to technology. Successful organisations are shifting from feature-centric roadmaps to outcome-based strategies, where AI agents automate delivery tasks to free up humans for strategic judgment and problem selection. Despite the speed of AI-generated prototypes, leaders caution that "product purism" is failing, and value is moving toward human-led insights and rigorous risk management. Furthermore, the texts examine how recruitment and training must adapt as AI lowers the floor for technical execution while raising the ceiling for product taste. Collectively, these insights suggest that while AI multiplies output, the core of product work remains the art of subtraction and solving the right customer problems.

This podcast was created via Google NotebookLM.

Show transcript

00:00:00: This episode is provided by Thomas Allgaier and Frenus, based on the most relevant LinkedIn posts about digital products and services.

00:00:06: In calendar weeks sixteen and seventeen.

00:00:09: Frenus is a B2B market research company that supports enterprise product teams with building feature-by-feature competitive intelligence that shows exactly how their product stacks up against the competition.

00:00:20: You can find more info in the description. So I want you to picture this: Figma, right? Like, one of the most celebrated design platforms.

00:00:29: We just watched its stock plummet eighty-four percent.

00:00:33: Yeah, which is just insane, right?

00:00:35: From, like, one hundred twenty-two dollars down to under nineteen in less than a year.

00:00:40: And the wild part? It wasn't because their software broke or anything.

00:00:43: It was because they treated AI as, uh, just a cool new feature to bolt onto their existing product rather than, you know, fundamentally rethinking how products are built.

00:00:52: Exactly!

00:00:53: That staggering drop, I mean, perfectly captures the transition happening across the whole industry right now.

00:00:58: So if there's a core mission for our deep dive today, looking at all these curated insights from the people actually building this tech...

00:01:07: ...it's this: the era of, like, AI enthusiasm is totally over.

00:01:12: We have firmly entered the era of AI operating discipline, because, you know, for the last couple of years, simply getting a large language model to generate some text or code was enough to impress people.

00:01:23: Yeah... the party trick phase.

00:01:24: Exactly.

00:01:25: But today raw output speed is just a commodity!

00:01:28: The true competitive advantage now lies in, well, superior human judgment, ruthless strategy, and entirely redesigned workflows.

00:01:37: So if we're talking about redesigning workflows, I mean, we really have to start with the people running them.

00:01:40: Because if you scroll through social media lately, you are just bombarded with these apocalyptic hot takes claiming the product manager role is completely dead.

00:01:49: Oh yeah!

00:01:49: The "PM is dead" posts. They're everywhere.

00:01:52: Everywhere!

00:01:52: The narrative is basically: since AI can write code and design user interfaces now, the PM is just expensive collateral damage.

00:02:00: Yeah, but, well...

00:02:02: But it's a narrative engineered for maximum engagement, right?

00:02:05: It willfully misinterprets the reality on the ground.

00:02:08: That was so...

00:02:09: Well, Kevin Thomas recently pointed out this fascinating counter-signal.

00:02:13: So Anthropic, the company actually building the Claude model.

00:02:17: They just published this masterclass video on their official channels. And... the target audience for that video?

00:02:23: Let me guess

00:02:24: Product managers.

00:02:25: Wait, really?

00:02:26: Yeah, they were specifically showcasing how PMs can use Claude for advanced workflows.

00:02:32: I mean, if the very organization building the frontier AI models views product managers as their absolute core power users, the profession isn't dying, it's just going through this massive evolution.

00:02:43: That makes so much sense.

00:02:44: But that uncomfortable truth is something a lot of folks just don't want to face right now, and it connects perfectly to an observation Intermargill shared.

00:02:52: Right, about the product owner role.

00:02:53: Exactly.

00:02:54: The traditional product owner, like the person whose entire professional identity is tied to just managing backlogs, moving Jira tickets around, writing up those basic user stories... that job is gone.

00:03:04: Totally gone!

00:03:05: Because an AI can parse meeting notes and write a comprehensive product requirements document.

00:03:10: For what? A fraction of a cent, in about ten seconds?

00:03:13: Yeah, basically instantly, right.

00:03:15: So if your sole contribution is producing administrative artifacts, you are competing against a machine that literally does not sleep.

00:03:24: The survival path is becoming this modern PM who actually drives discovery and makes strategic tradeoffs.

00:03:31: Yeah, the artifacts are no longer the product of the job.

00:03:34: And you know, when you realize that the day-to-day administrative tasks are automated, it forces you to look at the larger organizational structure, which is...

00:03:41: ...a huge pivot.

00:03:42: It is, which is exactly why the concept of the Product Operating Model, or POM, is just dominating conversations right now.

00:03:49: Michael Broad and Simon Powers were very clear in their recent post that a product operating model is not just, you know, Agile repackaged with a shiny new acronym.

00:03:56: Right, it's not just a rebrand.

00:03:58: No!

00:03:58: It is a fundamental rewiring of how a company actually turns an idea into customer value.

00:04:04: I really appreciated Broad's distinction there, because so many companies think they're agile.

00:04:10: A fifteen-minute stand-up meeting every morning.

00:04:12: Oh, absolutely.

00:04:13: But let's build out a visual for this, because I think it helps.

00:04:16: Think of traditional agile as tuning the engine of a single car.

00:04:19: Okay

00:04:20: You can make that one car, that one team, run incredibly fast, but if that sports car is driving on a crumbling highway system filled with toll booths, and those toll booths represent, like, annual budgeting cycles, massive legal approvals, siloed handoffs, that speed doesn't matter at all.

00:04:37: Right, you're still sitting in traffic?

00:04:39: Exactly!

00:04:39: You are stuck.

00:04:40: So the product operating model is really about demolishing those toll booths and redesigning the entire highway system so the whole organization can handle the speed of the car.

00:04:50: That's a great way to look at it.

00:04:51: But demolishing toll booths is where things get dangerous if you don't know what you're doing.

00:04:56: Oh for sure.

00:04:57: Stephanie Liu shared this brilliant piece of grounding advice.

00:05:02: She mentioned a chief product officer who messaged her late at night in just sheer panic.

00:05:07: Uh-oh, why?

00:05:08: Well, he had been reading all these confident, sweeping posts on LinkedIn about how the elite one percent of product organizations operate, places like Google or Stripe.

00:05:18: Yeah, and he was ready to just tear down his own perfectly functional, profitable workflows just to mimic them.

00:05:25: Wow, the classic trap of copying the tech giants.

00:05:28: Exactly, the trap!

00:05:30: Stephanie pointed out that product operating models fail spectacularly when organizations blindly copy those elite playbooks instead of diagnosing their own reality...

00:05:39: Because the context is completely different.

00:05:41: Right, your tech debt is different, your regulatory environment is different, your customer base is different.

00:05:46: There's just no canonical one-size fits all form of product management.

00:05:51: You have to read the context directly in front of you and make localized judgment calls.

00:05:55: So if we accept that the organizational highway needs to be rebuilt based on our unique context, that leaves the product builder with a massive blind spot, right?

00:06:04: Which is the actual tools and methods they use every single morning.

00:06:07: Because you cannot...

00:06:13: The day-to-day mechanics of creating software are completely shifting.

00:06:17: They really are. We're seeing this hard pivot from AI being, like, a bolted-on feature at the end of a product cycle to AI being the actual foundation of the discovery, design, and prototyping phases.

00:06:30: And Dan Thomas shared an incredible way to conceptualize that shift.

00:06:34: He argues traditional software engineering was basically like painting on a blank canvas.

00:06:39: Okay, I liked that

00:06:40: Yeah, a purely additive process.

00:06:41: Right, every single feature, every button, every line of logic...

00:06:44: ...was a deliberate stroke of the brush.

00:06:46: If an engineer didn't explicitly write it, it just didn't exist.

00:06:49: Right, but building a product with AI natively is like starting with a ten-ton block of raw marble.

00:06:55: Oh, wow.

00:06:56: Yeah, because the large language model already contains the knowledge to do essentially everything.

00:07:00: It has the capacity to build the experience, sure, but it also has the capacity to hallucinate entirely fabricated data, or, you know, create user paths that are actively damaging.

00:07:11: Right?

00:07:12: The model just over-delivers by default.

00:07:14: Yeah. So the fundamental job of the product builder is no longer addition.

00:07:18: It is the art of subtraction.

00:07:20: You have to take out your chisel, which in this case is like your system prompt.

00:07:24: Yeah, your rigid error-handling rules, your UX guardrails, and you have to carve away all the hallucinations and the infinite irrelevant capabilities...

00:07:33: ...until only the actual product is left.

00:07:35: Yes. You chisel away the noise until only that hyper-specific, reliable product is standing there.

00:07:41: Because if you fail to subtract effectively, you aren't shipping a product, you are literally just dumping a dangerous rock on your user's desk.
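[Editor's aside] The "chisel" described above, a narrow system prompt plus guardrails that carve away the model's excess capability, can be sketched in a few lines. This is a minimal illustration, not any speaker's actual implementation: the system prompt, the allowed topics, and the `fake_model` stub standing in for a real LLM call are all hypothetical.

```python
# Subtraction by guardrail: a narrow system prompt plus a post-generation
# check that rejects anything outside the product's scope.
# `fake_model` is a hypothetical stand-in for a real LLM API call.

SYSTEM_PROMPT = (
    "You are a checkout-support assistant. Only answer questions about "
    "orders, refunds, and shipping. If asked anything else, reply exactly: "
    "OUT_OF_SCOPE"
)

ALLOWED_TOPICS = ("order", "refund", "shipping")

def fake_model(system: str, user: str) -> str:
    # Pretend model: answers refund questions, otherwise "over-delivers"
    # with something irrelevant, like a raw block of marble would.
    if "refund" in user.lower():
        return "Refunds are processed within 5 business days."
    return "Here is a poem about the ocean."

def guardrail(user: str, answer_text: str) -> str:
    # Carve away anything the narrow product should not ship to the user.
    if not any(topic in user.lower() for topic in ALLOWED_TOPICS):
        return "OUT_OF_SCOPE"
    if not any(topic in answer_text.lower() for topic in ALLOWED_TOPICS):
        return "OUT_OF_SCOPE"
    return answer_text

def answer(user: str) -> str:
    return guardrail(user, fake_model(SYSTEM_PROMPT, user))
```

The key design choice is that the guardrail runs on every response, so even when the model drifts off-topic, the user only ever sees the carved, in-scope product.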

00:07:49: That is such a vivid analogy.

00:07:50: And to see how that subtractive process alters the way teams communicate, you just have to look at an example Sachin Kunzal shared about Uber.

00:07:57: Well, what are they doing?

00:07:59: They're letting AI-generated functional prototypes organically replace traditional written product requirements documents.

00:08:06: Think of the friction in the old way: teams would spend three weeks just debating a static written document describing how features should behave.

00:08:15: Now, they use an internal AI tool to instantly spin up a working prototype of that feature!

00:08:21: That changes the entire review dynamic completely, because instead of debating the theoretical interpretation of some paragraph on page four...

00:08:30: Yeah, you're actually clicking a button and saying, hey, this transition feels clumsy, or this data output is wrong.

00:08:37: You are critiquing a tangible thing, right?

00:08:40: It accelerates alignment dramatically because the ambiguity is just gone. And Catherine Wu from Anthropic elaborated on the results of this kind of acceleration in a recent interview with Lenny Rachitsky.

00:08:51: Oh, I saw that.

00:08:52: She noted that Anthropic has compressed their product development timelines from six months down to mere days. Days.

00:09:00: They achieved this velocity through an evergreen launch room, where engineers just post ready-to-go features, and by shipping those features as research previews.

00:09:08: Naming it a preview...

00:09:09: Yeah, labeling it an experiment drastically lowers the stakes.

00:09:13: It allows the team to ship rapidly and let real market usage validate the idea instead of, you know, endless internal committees.

00:09:22: Okay, wait, I'm stuck on this Anthropic example though, because shipping a feature in a matter of days sounds like a total nightmare for quality control.

00:09:29: It definitely can be, right?

00:09:31: I mean, if we go back to the raw marble analogy, yeah, if you're chiseling that fast, aren't you just shipping jagged, broken rocks?

00:09:39: Moving fast with AI is the easiest thing in the world...

00:09:41: He maps this out as a ten-eighty-ten split in the workflow.

00:10:08: Okay, break that down for me!

00:10:09: So the first ten percent is human setup. That's gathering the nuanced business context and crafting the architecture of the prompt.

00:10:15: The middle eighty percent is the AI actually doing the heavy lifting, generating code or analysis, whatever.

00:10:22: But the final ten percent? Human review and refinement.

00:10:26: And that's where everything falls apart if you don't know what you're doing.

00:10:29: Exactly.

00:10:30: Because Murphy makes the point that junior employees often lack the intrinsic taste to steer the AI's output, right?

00:10:37: Like, if you hand the exact same prompt to a seasoned product leader and a junior manager, the final output will be vastly different.

00:10:45: Vastly different, because the senior leader knows what excellent actually looks like.

00:10:50: They know how to critique the AI and force it to iterate, right.

00:10:54: And David Marshall quantified this reality with a pretty sobering statistic.

00:10:58: He's mentored hundreds of professionals adopting AI, and his data suggests that while eighty-eight percent of knowledge workers actively use AI tools, perhaps only six percent are generating actual, measurable business value.

00:11:10: Only six percent actually driving value?

00:11:12: That gap is astonishing!

00:11:13: It is a massive skills deficit.

00:11:15: The deficit isn't about, like, knowing which buttons to click in the interface.

00:11:19: It comes down to foundational cognitive skills, like meta-prompting, which is prompting the AI to write better prompts for itself, and managing long context windows.
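[Editor's aside] Meta-prompting, as described here, is a two-step loop: ask the model to improve the prompt, then run the improved prompt. The sketch below illustrates only the control flow; `fake_model` and its canned "improvement" are hypothetical stand-ins for a real LLM call.

```python
# Meta-prompting sketch: improve the prompt first, then execute it.
# `fake_model` is a hypothetical stand-in for a real LLM API call.

def fake_model(prompt: str) -> str:
    if prompt.startswith("Rewrite this prompt"):
        # Pretend the model sharpens the task with audience, format,
        # and constraints (canned placeholder behavior).
        original = prompt.split(":", 1)[1].strip()
        return (f"{original} Target audience: enterprise PMs. "
                "Format: five bullet points. Cite no invented data.")
    return f"ANSWER to: {prompt}"

def meta_prompt(task: str) -> str:
    # Step 1: have the model rewrite the vague task into a sharper prompt.
    improved = fake_model(
        f"Rewrite this prompt to be specific and testable: {task}"
    )
    # Step 2: run the improved prompt as the real request.
    return fake_model(improved)

result = meta_prompt("Summarize our churn data.")
```

The point of the pattern is that the human supplies intent once, and the model does a first pass of its own prompt engineering before the actual work begins.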

00:11:29: But Marshall's biggest differentiator is what he calls minimal viable quality, or MVQ.

00:11:37: The average person uses an LLM like a search engine, right?

00:11:40: They type a vague request...

00:11:41: ...and just blindly accept the first thing...

00:11:44: ...it spits out. Exactly.

00:11:46: But the elite six percent explicitly define the MVQ criteria before they even open the tool.

00:11:53: They know the exact standard the output must meet before it's allowed into the code base or a customer pipeline.
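[Editor's aside] One way to make an MVQ bar concrete is to write the criteria down as executable checks before generating anything, and reject output that fails any of them. The specific criteria below (headline, length, no placeholder text) are invented examples, not Marshall's actual checklist.

```python
# MVQ gate sketch: quality criteria defined up front as named checks,
# applied to any generated draft before it is accepted.
# The three criteria here are illustrative placeholders.

MVQ_CRITERIA = [
    ("has a headline", lambda text: text.splitlines()[0].startswith("# ")),
    ("under 200 words", lambda text: len(text.split()) < 200),
    ("no placeholder text",
     lambda text: "TODO" not in text and "lorem" not in text.lower()),
]

def mvq_failures(text: str) -> list:
    """Return the names of every MVQ criterion the text fails."""
    return [name for name, check in MVQ_CRITERIA if not check(text)]

good_draft = "# Q3 Churn Summary\nChurn fell three percent after onboarding changes."
bad_draft = "Lorem ipsum TODO fill in later"
```

In a real workflow, a failing draft would be sent back to the model for another pass instead of being blindly accepted, which is exactly the discipline the six percent are described as having.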

00:11:59: Let's ground that in reality, because quality isn't just an aesthetic debate.

00:12:02: I mean, sometimes an AI generating a bad answer is just a slight annoyance, but in other contexts it is a catastrophic event.

00:12:08: Absolutely.

00:12:09: Pujon Chavary introduced the concept of error severity, which is critical for product teams to understand.

00:12:16: Historically, machine learning teams focused on aggregate error rates.

00:12:20: They would proudly announce, hey, our model is ninety-eight percent accurate across all tasks.

00:12:24: Right!

00:12:25: The classic metric.

00:12:26: But Chavary points out that in an AI-native world, aggregate accuracy is a dangerously misleading metric.

00:12:34: Error severity, mapped to the specific use case, is the only thing that actually matters.

00:12:38: Because context dictates damage.

00:12:41: Exactly!

00:12:41: If you deploy an AI that has a two percent error rate when recommending a comedy movie on a streaming app, literally nothing happens.

00:12:48: The user just scrolls past it.

00:12:50: Yeah, no harm done!

00:12:51: But if you deploy an AI with a two percent error rate in a banking app approving mortgage applications, or, like, uh, a medical app interpreting blood test results, that two percent translates to massive legal liability and ruined lives.

00:13:05: Yes. Use case judgment is the most underrated skill in tech today.

00:13:09: It's the ability to look at a proposed feature and ask, okay, what are the second- and third-order consequences when the AI inevitably hallucinates here?
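[Editor's aside] The arithmetic behind this point is simple: expected harm is volume times error rate times cost per error, so the same accuracy number can mean trivial or catastrophic damage. The severity values below are purely illustrative, not real actuarial figures.

```python
# Why aggregate accuracy misleads: identical error rates, wildly
# different expected harm. All numbers here are illustrative.

ERROR_RATE = 0.02  # the same "98% accurate" model in both products

USE_CASES = {
    "movie recommendation": 1,     # cost of one bad error, arbitrary units
    "mortgage approval": 500_000,  # legal liability, a ruined application
}

def expected_harm(requests: int, severity_per_error: float) -> float:
    # Expected harm = volume x error rate x cost per error.
    return requests * ERROR_RATE * severity_per_error

# Same model, same 10,000 requests, severity is the only variable.
harms = {name: expected_harm(10_000, sev) for name, sev in USE_CASES.items()}
```

At 10,000 requests each, the movie recommender accrues 200 units of expected harm while the mortgage approver accrues 100 million, which is the "error severity mapped to the use case" argument in one line of arithmetic.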

00:13:17: Which perfectly leads us to how the smartest teams are actually deploying these models today, to mitigate that exact severity risk.

00:13:23: Because they aren't building these omnipotent, god-like AI assistants that try to do everything for their users.

00:13:29: They're boxing the AI into highly specific bounded workflows through intelligent agents.

00:13:36: Right!

00:13:36: And Miko Arro offered a fantastic guiding principle for this. He basically said: start...

00:13:43: ...embarrassingly small.

00:13:44: Yeah, because teams waste months trying to build this massive fifteen-step AI framework that revolutionizes the entire product lifecycle, and it just inevitably collapses under its own complexity.

00:13:56: Arro suggests building an agent that does one incredibly boring, repetitive job.

00:14:01: Like what?

00:14:01: For example, build a script that securely pulls data from your Jira instance every Friday and just drafts a clean, bulleted quarterly achievement summary.

00:14:10: Well it's brilliant!

00:14:11: It takes minutes to build instead of quarters.

00:14:13: The error severity is incredibly low, and most importantly... it builds psychological trust with your engineering team when they actually see it work flawlessly.
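[Editor's aside] The "embarrassingly small" Jira agent described above might look roughly like this. The base URL, token, and JQL query are placeholders, the fetch call is only sketched (a real deployment would run it from a Friday cron job), and only the pure formatting step executes here.

```python
# An "embarrassingly small" agent sketch: pull closed Jira issues and
# draft a bulleted achievement summary. URL, token, and JQL below are
# hypothetical placeholders; schedule fetch_closed_issues() via cron.
import json
import urllib.request

def fetch_closed_issues(base_url: str, token: str) -> list:
    # Simplified Jira Cloud search call; real code would paginate
    # and handle auth/errors properly.
    req = urllib.request.Request(
        f"{base_url}/rest/api/2/search?jql=status%3DDone",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("issues", [])

def draft_summary(issues: list) -> str:
    """Turn raw issue dicts into a clean, bulleted achievement summary."""
    lines = ["Quarterly achievements:"]
    for issue in issues:
        fields = issue.get("fields", {})
        lines.append(
            f"- {issue.get('key', '?')}: {fields.get('summary', 'untitled')}"
        )
    return "\n".join(lines)

# Formatting demonstrated on a hand-written sample issue.
sample = [{"key": "PROD-42", "fields": {"summary": "Shipped checkout redesign"}}]
report = draft_summary(sample)
```

Keeping the fetch and the formatting as separate functions is what makes the agent boring in the good sense: the only part that can misbehave against live data is one small, reviewable call.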

00:14:20: Exactly. And accessibility in these tools is really democratizing this trust.

00:14:25: Anis Lorenzo highlighted a Claude co-work plugin as the perfect micro example of this.

00:14:30: A product manager can install it in under three minutes without even asking IT for permission. They configure it once, and every Monday morning at eight a.m. it autonomously scrapes public data and drops a one-page competitive research brief into a shared folder.

00:14:45: Wow!

00:14:46: It highlights five competitor moves, flags two alternative products you're losing deals to, and tags the severity.

00:14:54: It requires zero massive integrations, no strategic roadmap. It's just one reliable agent executing a task humans usually forget to do anyway!

00:15:03: And seeing how easily we can delegate these tasks changes the whole strategic question PMs need to ask, doesn't it?

00:15:09: Because for the last two years, the default question was always, well, how do we add an AI chatbot to our existing dashboard?

00:15:15: Oh yeah, Anusha Kaith reframes that entirely.

00:15:17: Yeah... the best product leaders are no longer asking how to add AI.

00:15:21: They're asking which of our core workflows can an AI completely own?

00:15:24: That is a profound difference.

00:15:26: It is!

00:15:26: Because adding AI to an interface might make the user, like, ten percent faster.

00:15:30: But when an AI owns a workflow, like KYC, for instance...

00:15:33: Let's define that really quickly.

00:15:34: For anyone who might not know, KYC, or Know Your Customer, is that heavily regulated process where banks verify the identity of their clients to prevent fraud.

00:15:44: It traditionally requires massive teams of humans manually reviewing uploaded passports and utility bills.

00:15:50: Exactly. So when you deploy an AI agent to completely own that KYC resolution, verifying the documents, cross-referencing global databases, and flagging anomalies autonomously, you aren't just optimizing a button on the screen.

00:16:05: You are eliminating a massive operational bottleneck and completely redefining the economics of the business.

00:16:10: And the people executing this operational shift aren't necessarily who you'd expect.

00:16:15: Akash Gupta shared an update that genuinely blew my mind.

00:16:18: What was it?

00:16:18: We're seeing non-technical product managers, like people who couldn't write a Hello World script in Python six months ago, using tools like Claude Code to ship fully functional internal applications in a single afternoon.

00:16:31: Wow!

00:16:31: The leverage they are gaining is just unprecedented.

00:16:34: It really is.

00:16:35: They are building eval loops, which are essentially automated testing cycles that constantly grade the AI's answers to prevent drift, and they're setting up background jobs, which are automated server tasks running continuously behind the scenes.
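[Editor's aside] An eval loop, in its simplest form, grades the model against a fixed golden set on every run and alerts when the score drops. This is a minimal sketch: the golden set, the grading rule, and the `fake_model` stub are all invented for illustration, not any team's real harness.

```python
# Eval-loop sketch: a golden set of question/expected pairs graded on
# every run; a score below the threshold flags drift before it reaches
# users. Golden set, grader, and model stub are all placeholders.

GOLDEN_SET = [
    ("capital of France", "paris"),
    ("2 + 2", "4"),
    ("largest planet", "jupiter"),
]

def fake_model(question: str) -> str:
    # Pretend model that has drifted on one of the three answers.
    answers = {
        "capital of France": "Paris",
        "2 + 2": "4",
        "largest planet": "Saturn",
    }
    return answers.get(question, "unknown")

def run_evals(model) -> float:
    """Grade the model on the golden set; return the pass rate."""
    passed = sum(
        1 for question, expected in GOLDEN_SET
        if expected in model(question).lower()
    )
    return passed / len(GOLDEN_SET)

score = run_evals(fake_model)
drifted = score < 0.9  # alert threshold; a real loop would page someone
```

Real harnesses use far richer graders (often another LLM as judge), but the shape is the same: a fixed test set, a score per run, and an alert when the score regresses.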

00:16:50: So they're doing complex systems engineering without actually writing syntax?

00:16:53: Exactly!

00:16:54: And Gupta noted that the market is rewarding this ruthlessly.

00:16:58: The compensation bands for PMs operating at this frontier run between two and two-and-a-half million dollars.

00:17:05: Two million dollars?

00:17:06: Yeah, not because they suddenly learned to code, but because they realized they don't have to wait three weeks for an engineering sprint to validate an internal tool anymore.

00:17:14: They can bypass the bottleneck, test a theory on Tuesday, scrap it Wednesday, and deploy the winner Thursday.

00:17:21: That is wild!

00:17:22: But you know... that level of blistering speed comes with a severe warning label.

00:17:26: Oh, absolutely. Moving that fast creates a massive blast radius if you are heading in the wrong direction.

00:17:33: All of these single-day prototypes, autonomous agents, and incredible personal leverage are entirely useless, and potentially destructive, if they aren't anchored to a ruthless strategy.

00:17:45: And Rohan Singh laid down what I think should be the golden rule for this era.

00:17:49: He stated, "Product strategy is the art of sacrifice."

00:17:53: The art of sacrifice?

00:17:55: I love that!

00:17:55: It's brilliant framing, because most corporate roadmaps are really just disguised wish lists, right?

00:18:01: Teams spend weeks detailing exactly what they intend to build but completely fail to articulate what they're explicitly refusing.

00:18:09: Right!

00:18:09: Trying to please everyone.

00:18:10: Exactly.

00:18:11: Singh argues that if you look at your roadmap and it doesn't contain at least one abandoned feature that makes you physically nauseous to leave behind, you do not have a strategy... you just have a prioritized list.

00:18:21: The art of sacrifice is required because the traditional static roadmap, you know, that Excel sheet listing out features for the next four quarters...

00:18:31: That's dead!

00:18:32: Larklin Morley and Harmeet Nandre emphasize that the pace of technological change makes feature-led roadmaps obsolete before they are even published.

00:18:40: So what's the alternative?

00:18:42: You must shift to outcome-based roadmaps.

00:18:45: Oh okay.

00:18:46: So instead of telling your team, hey, we're going to launch a new recommendation carousel in Q3...

00:18:52: You mandate the outcome, like, we're going to reduce subscriber churn by fifteen percent?

00:18:57: Yes, exactly.

00:18:58: And you let the AI tools group the work and suggest themes.

00:19:02: But the human leadership firmly sets the metric.

00:19:04: Right, because anchoring to a business outcome forces the executive stakeholders to align.

00:19:08: It removes the petty debates over personal feature preferences and forces everyone to answer just one question: does this specific bet move the revenue needle?

00:19:16: What happens when an organization fails to make those strategic sacrifices?

00:19:20: Well, let's return to our opening hook.

00:19:22: The catastrophic eighty-four percent drop in Figma's stock valuation.

00:19:27: Jin Lele did a deep dive into the mechanics of their failure, and it is honestly a masterclass in missing an operating model shift, because ChatGPT became a global phenomenon four years ago.

00:19:39: So the AI wave wasn't a sudden ambush.

00:19:42: Right, everyone saw it coming.

00:19:43: But Figma's internal structure was entirely optimized to refine their existing paradigm to infinity.

00:19:49: They fell into the trap of addition rather than subtraction.

00:19:53: Lele pointed out a critical structural flaw.

00:19:56: Figma over-indexed on product managers orchestrating the roadmap, and in doing so they marginalized their design discipline.

00:20:03: Okay, but why does that specifically ruin an AI-native pivot?

00:20:07: Because PMs are generally incentivized to ask, how do we fit in this new technology?

00:20:13: Designers, on the other hand, are trained to ask, what is the core intent of the user, and how do we remove friction to get them there?

00:20:21: Because Figma sidelined the design perspective, no one with real authority asked the critical question: when do we pivot to an intent-based interaction model?

00:20:30: And an intent-based model changes everything.

00:20:33: Instead of the user dragging and dropping shapes on a canvas for hours, they simply type their intent into a prompt, like: generate a high-converting checkout page for a sneaker brand.

00:20:44: And Figma was blind to that shift. It got outflanked by entirely new startups allowing users to generate UI with natural language!

00:20:54: The market brutally demonstrated that it just doesn't care how elegant your current interface is.

00:21:00: It only cares if your organizational structure is capable of anticipating the next disruption.

00:21:12: Strategic human judgment is the only true differentiator left.

00:21:24: How do companies actually identify it in the hiring process?

00:21:27: Mirella Mus highlighted a massive crisis in the twenty-twenty-six hiring market: the pipeline has completely choked with AI slop.

00:21:35: AI slop. Wow, that's such a visceral term for what we're seeing in recruitment right now.

00:21:40: It perfectly describes it, though.

00:21:42: It's the flood of artificially generated resumes, cover letters, and take-home technical assignments.

00:21:48: Candidates are using these sophisticated agents to submit applications that look utterly flawless on paper but entirely mask their lack of foundational depth.

00:21:58: It's impossible to parse!

00:21:59: Yeah...it has become mathematically impossible to judge a professional's competence through asynchronous written tasks.

00:22:06: So the entire recruitment playbook has to change.

00:22:09: How are the smart companies filtering out the slop?

00:22:11: Well, they're abandoning the take-home assignments entirely. Really? Yeah.

00:22:16: Instead, they're forcing candidates into live, high-pressure, real-time simulations. They put you in a room, virtual or physical, and hand you an ambiguous business problem.

00:22:25: They give you access to all the AI tools you want and just watch you work!

00:22:31: Exactly.

00:22:31: They need to see how your brain processes context. They're evaluating whether you use the AI as a crutch to cover up your lack of strategic vision, or if you use it as a high-speed chisel to amplify your own rigorous logic.

00:22:45: So your ultimate takeaway as you navigate this landscape: in an economy where an algorithm can perfectly simulate your competence...

00:22:52: ...on a PDF, real-time adaptability, taste, and critical thinking in a live room...

00:22:57: ...are the only defensible assets left.

00:23:00: If you enjoyed this episode, new episodes drop every two weeks.

00:23:03: Also check out our other editions on ICT and Tech, Artificial Intelligence, Cloud, Sustainability and Green ICT, DefenseTech & HealthTech.

00:23:10: Thank you so much for joining us for this deep dive.

00:23:12: Make sure you hit that subscribe button, and we will catch you on the next one.
