Best of LinkedIn: Health Tech CW 04/05
Show notes
We curate the most relevant posts about Health Tech on LinkedIn and regularly share key takeaways.
This edition examines the strategic integration of artificial intelligence and digital innovation in healthcare as of early 2026, marking a clear shift from experimentation to execution. The contributors highlight how AI-driven triage, non-invasive glucose monitoring, and modular robotic surgery are moving medical practice toward more proactive, augmented, and efficient care models. However, the texts emphasise that successful adoption requires robust reliability, strong “clinical DNA” within product teams, and a reduction in pilot churn to prevent clinician burnout and erosion of trust. There is a strong focus on improving patient empowerment through foundational health habits rather than technology alone, while also addressing significant risks such as the perceived weakening of software regulation enforcement and the challenges posed by the EU AI Act. Industry leaders from GE HealthCare, Philips, and Siemens Healthineers detail concrete advancements in photon-counting CT, community-based MRI access, and collaborative Alzheimer’s care initiatives. Ultimately, the collection portrays a sector at a critical turning point, moving from experimental pilot projects to scalable, intelligence-led infrastructure.
This podcast was created via Google NotebookLM.
Show transcript
00:00:00: This episode is provided by Thomas Allgeier and Frennis based on the most relevant LinkedIn posts on health tech in CW four and five.
00:00:08: Frennis equips product and strategy teams with market and competitive intelligence to navigate the health tech landscape.
00:00:14: It is great to be back and you know looking at the intelligence stack from calendar weeks four and five there's a definite shift in the air.
00:00:22: It feels
00:00:22: heavier doesn't it?
00:00:23: Less about the shiny object and more about well the heavy lifting.
00:00:27: Precisely.
00:00:28: If last year was the year of the demo, where everyone was showing off what was theoretically possible, the start of twenty
00:00:34: twenty-six is really shaping up to be the era of the execution frontier.
00:00:38: The industry is rolling up its sleeves.
00:00:40: It is.
00:00:41: We're seeing this clear shift from, you know, look what this AI can do to, does this actually work when things get messy in a real hospital?
00:00:48: And that's the mission for this deep dive.
00:00:50: We're stripping away the hype to see what's actually sticking.
00:00:52: We've got what, four main clusters to get through?
00:00:55: Yep.
00:00:55: First, we'll dissect the reality gap in AI diagnostics, specifically why these magic demos are failing in clinical practice.
00:01:02: Then we're scrubbing into the operating room.
00:01:04: Right.
00:01:04: Looking at why robotics is moving from a hardware conversation to a workflow conversation.
00:01:10: We'll also take a hard look at the consumer side: wearables, glucose monitoring, and the harsh truth that, you know... data doesn't equal behavior change.
00:01:19: And finally, we have to talk about the energy vampires and strategy.
00:01:23: We'll look at why pilot programs are failing and a massive regulatory blind spot that was frankly exposed this week.
00:01:31: It's a dense stack.
00:01:32: So let's get right into it.
00:01:33: Theme one, AI and care delivery.
00:01:36: And right away, there is this real tension between marketing promises and clinical reality.
00:01:42: It's the network to product gap.
00:01:45: And nothing showed this better this week than a story shared by Yongcha, the former Apple Health exec.
00:01:50: Oh,
00:01:50: I saw this one.
00:01:51: He highlighted
00:01:56: a really telling experiment that involved a reporter from the Washington Post and ChatGPT Health.
00:01:56: This is the one where the reporter fed ten years of his Apple Watch data into the system, right?
00:02:00: Correct.
00:02:01: Ten years of heart rate, sleep, activity data.
00:02:04: And he asked the AI a simple question.
00:02:06: grade my cardiac health.
00:02:08: The AI came back and gave him an F.
00:02:09: An F?
00:02:10: Wow.
00:02:11: That implies imminent failure.
00:02:13: Which
00:02:13: is terrifying.
00:02:14: Especially since his actual human cardiologist had told him he was perfectly fine.
00:02:18: Okay, but here's
00:02:19: the kicker.
00:02:19: Here's the kicker.
00:02:20: The part that really matters for our listeners building these tools.
00:02:24: The reporter asked the AI again.
00:02:27: Same data.
00:02:28: This time.
00:02:28: It gave him a C.
00:02:29: A C?
00:02:30: He asked
00:02:30: a third time.
00:02:31: It gave him a B.
00:02:31: Hold on.
00:02:32: Same inputs.
00:02:33: Three wildly different outputs ranging from you're failing to you're doing great.
00:02:38: That's not a diagnostic tool.
00:02:39: That's that's a random number generator.
00:02:42: That is exactly Yongcha's point.
00:02:44: He argues that general purpose large language models lack what he calls native priors for trend estimation.
00:02:50: Unpack that for us native priors.
00:02:52: Think of it as context or a baseline.
00:02:55: A human doctor has a prior understanding of what's a normal fluctuation versus a dangerous anomaly for you based on your history.
00:03:02: Okay.
00:03:02: The LLM is just pattern matching against a huge generic training set.
00:03:06: Doesn't know you, it's guessing.
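A toy illustration of what a "native prior" buys you: flagging a reading against one person's own baseline rather than against a generic population norm. This is a sketch, and the heart-rate numbers are invented purely for illustration:

```python
import statistics

def flag_anomaly(history, reading, z_threshold=3.0):
    """Flag a reading only if it deviates from THIS person's baseline.
    A generic model without such a prior compares against population
    norms and mislabels normal individual variation."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    z = (reading - mu) / sigma
    return abs(z) > z_threshold, z

# Resting heart rate history (bpm) for one hypothetical person
history = [52, 54, 51, 53, 55, 52, 54, 53]

# 60 bpm is unremarkable for the general population, but it sits
# far outside this individual's own baseline, so it gets flagged.
flagged, z = flag_anomaly(history, 60)
```

The same mechanism works in reverse: a reading that looks alarming against population averages may be routine for a given patient, which is exactly the context a stateless, pattern-matching model lacks.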
00:03:08: So Yongcha is basically saying we have a reliability crisis.
00:03:12: It's easy to make a demo that gets the right answer once for an investor deck.
00:03:16: But it's incredibly hard to build a product that's reliable enough to trust with the diagnosis.
00:03:21: Reliability is the currency.
00:03:23: And if an AI tells a healthy patient they're dying, that's not just an error, it's actual harm.
00:03:29: It is.
00:03:29: It creates anxiety.
00:03:30: It wastes clinical resources and it erodes trust.
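For listeners building these tools, the repeated-query failure described above can be checked mechanically before launch. A minimal Python sketch of such a consistency harness, with a mock model standing in for any real LLM (the grading behavior and all names here are illustrative, not from the episode):

```python
import random
from collections import Counter

def mock_llm_grade(data, temperature=1.0, rng=None):
    """Toy stand-in for an LLM asked to grade cardiac health.
    With temperature > 0 the answer is sampled, so the same input
    can return a different grade on every call."""
    rng = rng or random
    grades = ["A", "B", "C", "D", "F"]
    if temperature == 0:
        return "C"  # greedy decoding: always the same answer
    return rng.choice(grades)  # sampling: nondeterministic

def consistency_check(query_fn, data, n_trials=20):
    """Ask the same question n_trials times and report how often the
    most common answer appears. A share of 1.0 means the system is
    perfectly repeatable on this input."""
    answers = [query_fn(data) for _ in range(n_trials)]
    counts = Counter(answers)
    top_share = counts.most_common(1)[0][1] / len(answers)
    return top_share, counts

rng = random.Random(42)
share, counts = consistency_check(
    lambda d: mock_llm_grade(d, temperature=1.0, rng=rng),
    data="10 years of heart-rate, sleep, and activity data")
```

Running this kind of check across a panel of representative patient inputs gives a concrete repeatability number, which is a far stronger claim than a single demo answer.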
00:03:34: But we did see the flip side of this in the feed.
00:03:36: Taha Kass-Hout shared a scenario where AI is working.
00:03:39: This is the stroke case.
00:03:41: Yes.
00:03:41: A patient arrives in the ER, right side paralyzed.
00:03:44: The initial CT scan looks normal to the human eye.
00:03:47: But minutes later, an AI model reviewing that same scan flags a subtle vessel irregularity.
00:03:53: And
00:03:53: that flag got the team to reevaluate and treat.
00:03:56: Exactly.
00:03:57: But notice the difference.
00:03:58: In the ChatGPT example, the AI was the judge giving a grade.
00:04:02: Right, on its own.
00:04:03: In the stroke case, the AI was a safety net.
00:04:07: The AI didn't replace the radiologist.
00:04:10: It just caught the subtle signal that a stressed human might miss.
00:04:13: So the takeaway is AI works best as an alarm system, not the judge and jury.
00:04:19: But for that alarm system to work, the data has to be clean.
00:04:23: And Matthias Goyen raised a point that I think flies under the radar for a lot of tech teams.
00:04:27: He talked about sloppy language.
00:04:29: Right, the ethics of it.
00:04:30: Yes, it was a fascinating take.
00:04:31: We usually talk about data hygiene as a technical problem.
00:04:35: Goyen argues it's a moral one.
00:04:36: How so?
00:04:37: Well, AI models are trained on medical reports written by humans.
00:04:42: If doctors use ambiguous language or hedging or just sloppy descriptions in their notes, the AI learns to normalize that ambiguity.
00:04:49: It mirrors our bad habits back at
00:04:52: us.
00:04:52: And then it amplifies them.
00:04:54: Goyen's argument is that if we know these systems will be treating patients in the future, writing precise, clear medical notes today is an ethical obligation.
00:05:02: Wow.
00:05:03: That's a heavy realization.
00:05:04: It means good documentation is no longer just about billing or liability.
00:05:07: It's about the future intelligence of the entire system.
00:05:10: Exactly.
00:05:11: And speaking of precision, I want to touch on one specific win that came up.
00:05:14: Chris Scott or DeVito posted about PagePredict, which was just acquired by Tempus.
00:05:18: This felt like a very tangible solution to a very real problem.
00:05:22: It addresses the tissue economy in oncology.
00:05:25: This is something people outside of pathology rarely think about, but it's a huge bottleneck.
00:05:30: Explain that.
00:05:31: Sure.
00:05:32: When you do a biopsy, you have a tiny amount of tissue.
00:05:34: You have to slice it up to test for different biomarkers.
00:05:37: Often by the time you get to the advanced tests like next-generation sequencing, or NGS, you've literally run out of tissue.
00:05:44: So the patient misses out on crucial genetic testing because the sample was used up on more basic tests.
00:05:49: That feels like a tragedy of logistics.
00:05:51: It is.
00:05:52: PagePredict uses standard H&E images.
00:05:55: The standard purple and pink slides you see in every lab to identify biomarkers digitally before you even cut the tissue.
00:06:02: So it's a screening tool to save the physical sample.
00:06:05: It helps clinicians decide the best sequence of tests to maximize the chance of an actionable result.
00:06:10: That is what execution looks like.
00:06:12: It's not a chatbot having a conversation.
00:06:15: It's a backend tool ensuring a cancer patient gets the right diagnosis because you didn't waste their biopsy.
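The test-sequencing decision a PagePredict-style tool supports can be framed as a budgeted selection problem: spend scarce tissue where the expected actionable yield per unit of tissue is highest. A rough greedy sketch, with made-up assay names, tissue costs, and probabilities (none of these numbers come from the product):

```python
def plan_tests(tests, tissue_budget):
    """Greedy sketch: rank candidate assays by expected actionable
    yield per unit of tissue consumed, then take tests while the
    remaining sample allows."""
    ranked = sorted(tests,
                    key=lambda t: t["p_actionable"] / t["tissue_cost"],
                    reverse=True)
    plan, remaining = [], tissue_budget
    for t in ranked:
        if t["tissue_cost"] <= remaining:
            plan.append(t["name"])
            remaining -= t["tissue_cost"]
    return plan, remaining

# Hypothetical assays: cost in tissue "slices", chance of an
# actionable finding. Purely illustrative values.
tests = [
    {"name": "IHC panel", "tissue_cost": 2, "p_actionable": 0.30},
    {"name": "FISH",      "tissue_cost": 3, "p_actionable": 0.20},
    {"name": "NGS panel", "tissue_cost": 5, "p_actionable": 0.60},
]
plan, left = plan_tests(tests, tissue_budget=8)
```

The real clinical decision is far richer (probabilities depend on tumor type, and tests inform each other), but the sketch shows why digital pre-screening helps: better estimates of `p_actionable` before cutting change which tests make the plan.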
00:06:20: That is a perfect transition to our second theme.
00:06:22: Let's move from the lab to the theater.
00:06:25: Digital surgery and robotics.
00:06:27: The narrative here is shifting fast.
00:06:29: For years the conversation was Look at this cool robot.
00:06:32: And
00:06:33: now.
00:06:33: Now the conversation is, how does this fit into my P&L and my workflow?
00:06:37: Right.
00:06:38: The buzzword in the posts from Rajit Kamal and James Boyle at Medtronic wasn't autonomy.
00:06:43: It was choice.
00:06:44: They were talking about the Hugo RIS system.
00:06:46: And notice the pitch.
00:06:48: They aren't saying this robot replaces everything.
00:06:51: They're pitching modularity.
00:06:53: Because hospitals are crowded and budgets are tight.
00:06:55: Exactly.
00:06:56: You can't fit a massive monolithic system in every OR.
00:07:00: The Hugo system focuses on flexibility, letting surgeons use specific tools, like the LigaSure Maryland jaw or the Rubina vision system, only when they need them for a procedure.
00:07:10: It sounds like they're admitting the one size fits all era of robotics is over.
00:07:14: It's about being part of an ecosystem, not dominating the room.
00:07:17: But does the modular approach actually deliver?
00:08:21: Godwill Dave shared some numbers from the Expand URO trial that caught my eye.
00:07:24: Those
00:07:25: numbers were incredibly robust.
00:07:27: They were.
00:07:30: They set a target for success at eighty-five percent.
00:07:32: They
00:07:32: hit ninety-eight point five
00:07:34: percent.
00:07:34: Ninety-eight point five percent.
00:07:36: That loops us right back to our reliability theme.
00:07:39: In the early days of robotics, there was always a fear of technical failure or latency.
00:07:44: Hitting nearly ninety-nine percent efficacy in a clinical trial builds the one thing you can't buy.
00:07:50: Surgeon trust.
00:07:51: Because if a surgeon doesn't trust the machine, it just becomes an expensive coat rack in the corner of the OR.
00:07:56: Exactly.
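How convincingly does an observed 98.5% beat an 85% performance goal? An exact binomial tail computation answers that. The sample size below is assumed purely for illustration; it is not taken from the trial report:

```python
from math import comb

def binom_tail_p(n, k, p0):
    """P(X >= k) for X ~ Binomial(n, p0): the chance of seeing k or
    more successes if the true success rate were only p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i)
               for i in range(k, n + 1))

n = 200                       # assumed number of procedures (illustrative)
k = round(0.985 * n)          # 98.5% observed success
p_value = binom_tail_p(n, k, p0=0.85)   # 85% performance goal
```

At any realistic trial size, the resulting p-value is vanishingly small, which is why a result like this reads as "robust" rather than lucky.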
00:07:57: But even with that ninety-nine percent success rate, Gavin Setzen provided a bit of a reality check in the feed.
00:08:03: The adoption lag.
00:08:04: Right.
00:08:04: Setzen warned that we shouldn't expect robots in every local hospital next week.
00:08:08: Broad adoption is measured in years, not months.
00:08:11: Capital cycles, training.
00:08:13: It all takes time.
00:08:13: He emphasized we're in the age of augmentation, not autonomy.
00:08:17: And that connects to the Surgeon of the Future concept that Christian Masi mentioned.
00:08:20: Yeah.
00:08:20: The robot isn't doing the surgery for you while you drink a coffee.
00:08:23: Far from it.
00:08:24: The surgeon is still accountable at the head of the table.
00:08:26: The robot gives you supervision and super-steady hands, but it doesn't make the decisions.
00:08:32: We are a long, long way from an autonomous droid performing a bypass.
00:08:38: Agreed.
00:08:39: Let's move out of the OR.
00:08:41: and look at the messy world of the consumer.
00:08:44: Theme three, wearables, remote monitoring, and prevention.
00:08:48: This section had some of the most human insights of the week.
00:08:53: There's a growing backlash against the idea that more data equals better health.
00:08:57: Bianca Carbone had the quote of the week on this.
00:09:00: She said, AI can give you all the data in the world, but, and I love this, AI can't pick the dumbbells up for you.
00:09:06: It's a sharp critique of the whole quantified self movement.
00:09:09: We confuse information with support.
00:09:11: Right.
00:09:11: Our bonus point is that people don't fail to get healthy because they lack data.
00:09:14: They know they should sleep more.
00:09:16: They fail because they lack accountability, community, and, you know, the actual mechanisms for behavior change.
00:09:21: She basically said, don't spend thousands on health tech if you haven't mastered the basics.
00:09:25: It reminds me of what Mohita Hujo was observing out in Dubai.
00:09:28: The biohacking trend.
00:09:30: He described clinics flooded with people asking for peptides, cryotherapy, all these advanced supplements.
00:09:36: But these are the same people who are sleeping four hours a night and are totally stressed out.
00:09:40: Exactly.
00:09:41: They are trying to tech their way out of a bad lifestyle.
00:09:45: Ahuja's insight is simple but devastating.
00:09:49: Performance doesn't compound.
00:09:50: Foundations do.
00:09:52: You can't use tech to bypass the need for sleep and nutrition.
00:09:55: It's like putting rocket fuel in a car with a flat tire.
00:09:58: Precisely.
00:09:59: That said, for people who are managing a chronic condition, there is some hardware on the horizon that looks promising.
00:10:05: Wonsang called it the holy grail.
00:10:07: non-invasive glucose monitoring.
00:10:10: This has been the next big thing.
00:10:12: for what, a decade?
00:10:13: Are we actually getting closer?
00:10:15: Sang notes the race is getting very real for twenty twenty six.
00:10:18: The impact would be massive.
00:10:20: But she also points out the physics problem.
00:10:22: It's incredibly difficult to get an accurate reading through the skin using light or radio waves.
00:10:26: Too many variables.
00:10:27: Hydration, skin tone,
00:10:29: skin temperature.
00:10:30: All of it.
00:10:31: It goes back to the Yongcha point about reliability.
00:10:34: If your watch says your blood sugar is low, but it's actually high because you're dehydrated, that's dangerous.
00:10:40: So the first company to crack that reliability barrier wins.
00:10:44: Wins the whole market.
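The reliability bar for glucose monitoring is usually quantified as MARD, the mean absolute relative difference between device readings and a lab reference (lower is better; established invasive CGMs sit roughly in the single digits). A small sketch with hypothetical paired readings:

```python
def mard(reference, device):
    """Mean Absolute Relative Difference, in percent: the standard
    headline accuracy metric for glucose monitors."""
    assert len(reference) == len(device)
    errs = [abs(d - r) / r for r, d in zip(reference, device)]
    return 100 * sum(errs) / len(errs)

# Hypothetical paired readings in mg/dL: lab reference vs. wearable
ref    = [90, 120, 180, 70, 150]
device = [99, 110, 170, 80, 160]
score = mard(ref, device)
```

Any non-invasive contender has to post a competitive MARD across hydration levels, skin tones, and temperatures, which is exactly the physics problem described above.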
00:10:45: But there is another barrier besides physics.
00:10:48: Cost.
00:10:49: Simon Neve highlighted this with a post about Leap Health.
00:10:52: Yeah, I saw that stat.
00:10:53: Fifty-four percent of people cite cost as the number one barrier to sleep tracking.
00:10:59: The big, brand smart rings are expensive.
00:11:01: Prohibitive for the average person.
00:11:03: So Leap Health launched a smart ring specifically focused on affordability.
00:11:07: It's a feature-focused device, doesn't do everything, but it tracks sleep at a price point normal people can actually afford.
00:11:13: It's democratizing the data, because if only the wealthy can afford to track their sleep, we aren't solving public health problems.
00:11:19: We're just optimizing the elites.
00:11:21: And speaking of democratization, we have to talk about the story from Atul Gupta.
00:11:25: This was my favorite example of execution this week because of where it happened.
00:11:29: The Wood Green Community Diagnostic Center.
00:11:31: Yes.
00:11:32: Imagine walking into a shopping mall in north London.
00:11:36: Between the stores and the food court, there is a Philips BlueSeal MRI scanner.
00:11:41: a helium-free MRI in a mall.
00:11:43: That's what lets them do it, right?
00:11:44: It eliminates the need for the massive venting
00:11:47: pipes.
00:11:47: Exactly.
00:11:48: But look at the demographic data Gupta shared.
00:11:51: Seventy-five percent of the patients using that scanner are from disadvantaged households.
00:11:55: This is bringing the highest tech AI smart speed imaging directly to the people who usually face the longest wait times.
00:12:03: That is massive.
00:12:04: It challenges the assumption that high tech always means exclusive.
00:12:07: It proves that execution isn't just about the machine, it's about the logistics.
00:12:12: Putting the machine where the people are, that is true innovation.
00:12:15: Okay, so we've covered the AI reliability gap, the robotic workflow, the consumer reality check, but none of this works if the companies are dysfunctional.
00:12:23: Let's talk about our final theme, strategy, leadership, and these energy vampires.
00:12:28: This is a crucial warning for any health tech leader listening.
00:12:31: Meenal Shah introduced this concept of pilot churn as an energy vampire.
00:12:36: It's an evocative term.
00:12:37: What does she mean by it?
00:12:38: We've all seen it.
00:12:39: A hospital runs a pilot program for a new tool.
00:12:42: It runs for six months, shows some decent results, but then...
00:12:46: Nothing.
00:12:47: The budget isn't approved, or priorities shift.
00:12:49: Right.
00:12:49: The pilot just dies.
00:12:50: Right.
00:12:51: Then, three months later, management introduces another pilot for a different tool.
00:12:56: And the clinicians just roll their eyes.
00:12:58: They're
00:12:58: burned out.
00:12:59: Exactly.
00:13:00: Shah argues the cost isn't just the money, it's the erosion of clinician trust.
00:13:05: If you tell a doctor, here's a new tool that will change your life and then it disappears.
00:13:10: They are not going to believe you next time.
00:13:12: You've exhausted their capacity for change.
00:13:14: So when the real innovation finally arrives, the staff is too cynical to adopt it.
00:13:19: That is the vampire.
00:13:20: It sucks the energy out of the organization.
00:13:22: And to prevent that, you need better leadership.
00:13:25: Ian Robertson had a very strong take on who needs to be in charge.
00:13:28: He talked about tandem success coming from having clinicians in leadership, not just as advisors.
00:13:34: He used this brilliant analogy, he said, put a referral letter in front of an engineer.
00:13:38: and they see data transfer, a file moving from point A to point B.
00:13:42: But put that same letter in front of a doctor.
00:13:44: And they see their professional reputation.
00:13:47: That letter represents their judgment, their relationship with the specialist, their care for the patient.
00:13:52: That
00:13:52: nuance is everything.
00:13:53: If you build a tool that just transfers data... but ignores the professional tone of a referral, doctors won't use it.
00:14:00: Precisely.
00:14:00: You need that clinical DNA in the C-suite to understand those unspoken rules.
00:14:05: Otherwise, you're just building efficient products that no one actually wants.
00:14:08: But even with great leadership, you're operating in a minefield of regulation.
00:14:13: And Rudolph Ragnar sounded a massive alarm bell this week regarding SaMD, software as a medical device.
00:14:19: Ragnar's analysis was shocking.
00:14:21: He claims SaMD regulation has essentially collapsed due to a total failure of enforcement.
00:14:26: Collapsed
00:14:27: is a strong word.
00:14:28: What's the evidence?
00:14:28: He analyzed ninety six thousand health apps on the app stores.
00:14:32: He found that ninety five percent of them, ninety five percent, had no verifiable CE mark or FDA authorization.
00:14:39: Wait, ninety five percent of health apps are unregulated, so the app store is basically the Wild West.
00:14:44: It's worse.
00:14:45: It's an unregulated pharmacy.
00:14:47: You have apps making diagnostic claims, recommending treatment, scoring risk, all without any regulatory oversight.
00:14:53: Ragnar argues the app stores have effectively become distributors of illegal medical devices, and the regulators are silent.
00:15:00: That circles right back to the start of our conversation about trust.
00:15:03: If ninety-five percent of these tools are unverified, how can any patient possibly know what is safe?
00:15:08: They can't.
00:15:09: And that is a ticking time bomb for the whole industry.
00:15:12: However, we shouldn't take this as a sign to just give up on regulation.
00:15:15: No.
00:15:16: Sigurd Burs van Roigen offered a necessary counter perspective on the EU AI Act.
00:15:21: She was defending the new regulations.
00:15:23: In a
00:15:23: way, she noted that a lot of companies use the EU AI Act as an excuse to stop innovating.
00:15:28: They say, oh, the compliance is too hard.
00:15:30: We can't build here.
00:15:31: She argues that's lazy thinking.
00:15:34: Regulation should be viewed as a necessary baseline for safety.
00:15:37: It's not a roadblock.
00:15:38: It's the guardrails.
00:15:39: Exactly.
00:15:40: If we want AI to be standard care, it has to be regulated.
00:15:43: Innovation and safety aren't opposites.
00:15:45: They are partners.
00:15:46: If you can't build your product within safety regulations, you probably shouldn't be in health care.
00:15:51: So let's zoom out.
00:15:53: We've gone from an AI giving a reporter a fake heart condition to robots hitting ninety eight percent reliability to an MRI in a shopping mall in a regulatory crisis.
00:16:03: What is the through line here?
00:16:05: If I connect these dots, the message for CW four and five is clear.
00:16:09: The demo era is over.
00:16:11: It is no longer enough to show something cool.
00:16:14: You have to show something that is reliable, that respects the clinician's workflow, and that can survive scrutiny.
00:16:20: It feels like the industry is maturing.
00:16:21: We're moving from
00:16:22: can we build it to
00:16:24: should we build it and does it actually
00:16:25: work?
00:16:26: And is it sustainable?
00:16:27: That is a much harder question to answer.
00:16:29: It requires patience, clinical leadership, and a willingness to do the boring work of execution rather than just the fun work of invention.
00:16:36: And that leaves us with the provocative question for you, the listener, to chew on this week.
00:16:41: Indeed.
00:16:42: Look at your own organization.
00:16:44: Are you building for the demo, for the flash, the VC deck, the hype?
00:16:48: Or are you building for the product, the reliability, the workflow, the trust?
00:16:53: Are you feeding the energy vampire with endless pilots?
00:16:56: Or are you building the foundations for long-term execution?
00:16:59: That is the question.
00:17:01: The execution frontier is where the winners will be determined this year.
00:17:04: Absolutely.
00:17:05: If you enjoyed this episode, new episodes drop every two weeks.
00:17:07: Also, check out our other editions on ICT and tech insights, defense tech, cloud, digital products and services, artificial intelligence and sustainability and green ICT.
00:17:16: Thank you for listening.
00:17:17: Don't forget to subscribe for the next deep dive into the execution frontier.
00:17:21: See you then!