Best of LinkedIn: Artificial Intelligence CW 47/48

Show notes

We curate most relevant posts about Artificial Intelligence on LinkedIn and regularly share key takeaways.

This edition collectively illustrates that Artificial Intelligence is moving from an experimental phase to highly regulated, essential core infrastructure for global business operations. A major theme is the worldwide impact of the EU AI Act, which mandates stringent governance, accountability, and evidence-based compliance for high-risk systems, with significant financial penalties serving as a global benchmark for ethical technology deployment. Technically, the posts track the escalating competition between major developers like OpenAI and Google, while emphasizing the critical shift toward sophisticated architectures such as Agentic AI and World Models that enable goal-driven system orchestration. Enterprise success is shown to hinge less on the models themselves and more on addressing organizational barriers, necessitating a focus on AI literacy, strong data governance, and fundamental workflow redesigns to scale solutions beyond initial pilots. These discussions confirm that AI is not merely a tool but a new operating system, compelling organizations to adapt to machine speed and proactively architect their governance structures for trust, rather than just waiting for compliance requirements.

This podcast was created via Google NotebookLM.

Show transcript

00:00:00: This deep dive is provided by Thomas Allgaier and Freenis, based on the most relevant LinkedIn posts about artificial intelligence in calendar weeks forty-seven and forty-eight.

00:00:10: Freenis supports enterprises with market and competitive intelligence, decoding emerging technologies, customer insights, regulatory shifts, and competitor strategies, so that product teams and strategy leaders don't just react, but shape the future of AI.

00:00:25: Well, welcome to the deep dive for the next few minutes.

00:00:28: We're cutting through the noise to give you the most critical actionable insights on artificial intelligence from the last two weeks of professional discussion on LinkedIn.

00:00:37: And what really stands out in the sources this fortnight is that AI has, well, it's decisively left the lab.

00:00:43: The conversation has completely shifted.

00:00:46: We're no longer talking about if AI is going to transform business.

00:00:49: It's now all about how to deploy it, how to govern it, and how to measure it as, you know, core enterprise-grade infrastructure.

00:00:54: It's not a side project anymore.

00:00:56: It's the new operating system.

00:00:57: And that structural change means the risk profile and the skills you need have both just skyrocketed.

00:01:04: So our mission today is to unpack this foundational shift across four key areas.

00:01:10: Right.

00:01:10: We'll start with the arrival of formal governance and regulation, then get into the challenges of enterprise strategy, the rapid evolution in product innovation, and finally, the practical lessons we saw confirmed at the recent World AI Expo.

00:01:23: Sounds good.

00:01:24: Let's jump in.

00:01:25: So if AI is infrastructure, we have to start with the rules.

00:01:28: And those rules, this fortnight, were dominated by one urgent topic: governance.

00:01:34: It seems like the era of, you know, move fast and break things is officially over.

00:01:37: It's been replaced by move cautiously and document everything.

00:01:40: Absolutely.

00:01:41: Governance isn't an optional extra anymore.

00:01:43: It's the non-negotiable baseline.

00:01:45: And the driver for this is the EU AI Act.

00:01:48: And crucially, this isn't just a regional thing.

00:01:50: Bianca Grasias was explicit.

00:01:52: She pointed out that the Act regulates whoever touches Europe.

00:01:55: That's the key part.

00:01:56: You don't need to be based there.

00:01:58: If your AI systems impact European users, you are in scope.

00:02:02: It's that global gravitational pull, right?

00:02:04: Mark Beesey noted, it's already being treated as the global blueprint for CIOs.

00:02:09: Yeah, much the way GDPR established the worldwide gold standard for privacy almost overnight.

00:02:15: The standard for accountability is now being set, whether your local government has acted yet or not.

00:02:20: And the thing that really underscores the severity here is the financial risk.

00:02:24: Bjorn London provided a pretty sobering comparison on the fines.

00:02:28: He did.

00:02:29: We're all used to GDPR's penalties, but the AI Act just takes it up a level.

00:02:33: What are we talking about?

00:02:33: The maximum fine is thirty-five million euros, or a full seven percent of global turnover.

00:02:40: Seven percent?

00:02:41: Wow.

00:02:41: And just for context, that figure is actually higher than the maximum fines under GDPR and NIS2 combined.

00:02:49: I mean, think about what seven percent of global revenue means for any major tech company.

00:02:53: That's existential risk for some applications.

00:02:55: It is.

00:02:55: It's not just a slap on the wrist.

00:02:57: So that makes this a critical, immediate business priority.

00:03:00: What does compliance actually look like on the ground?

00:03:03: Well, the sources were very clear.

00:03:05: For high-risk AI, you need concrete evidence, not just good intentions.

00:03:10: Not just promises.

00:03:11: Not at all.

00:03:12: You need audit trails, transparent documentation, continuous monitoring.

00:03:17: It requires a lot of organizational discipline.

00:03:20: You have to map every single AI system to its risk tier.
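To make that mapping concrete, here is a minimal sketch in Python. The system names and the simplified four-tier taxonomy are illustrative assumptions only; the Act's actual classification rules and obligations are far more detailed.

```python
from enum import Enum

class RiskTier(Enum):
    # Simplified stand-in for the EU AI Act's four-tier taxonomy
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk (transparency duties)"
    MINIMAL = "minimal-risk"

# Hypothetical inventory: every deployed AI system mapped to a tier
ai_inventory = {
    "cv-screening-bot": RiskTier.HIGH,    # employment use cases are typically high-risk
    "support-chatbot": RiskTier.LIMITED,  # must disclose that it is an AI
    "spam-filter": RiskTier.MINIMAL,
}

def systems_needing_evidence(inventory):
    """High-risk systems are the ones needing audit trails and documentation."""
    return [name for name, tier in inventory.items() if tier is RiskTier.HIGH]

print(systems_needing_evidence(ai_inventory))  # → ['cv-screening-bot']
```

Even a toy registry like this forces the organizational discipline discussed above: you cannot filter for high-risk systems you never inventoried.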

00:03:24: So if you're waiting to see how it unfolds.

00:03:26: You're already late.

00:03:27: You will be scrambling to produce the documented proof when enforcement arrives.

00:03:31: So this is an engineering task baked in from the start, not just a legal sign-off.

00:03:35: Right.

00:03:36: But the risk discussion we saw also broadened beyond just technical compliance.

00:03:41: I noticed that.

00:03:42: It's getting to some deeper, more human areas.

00:03:44: It is.

00:03:45: Bruna Schwartz-Raiser raised a really unsettling concern about mental privacy.

00:03:49: As AI gets closer to predicting our thoughts and emotions, you know, the line between consumer analysis and direct cognitive surveillance gets very blurry.

00:03:56: That's a fundamental challenge to autonomy.

00:03:58: How immediate is that risk?

00:04:00: The ability to predict user intent is already here, but the cognitive modeling is accelerating fast.

00:04:06: And Wanda R emphasized that regardless of the specific ethical angle, the fundamental danger is simply the speed.

00:04:13: The speed of the decisions.

00:04:15: Exactly.

00:04:16: AI operates at machine velocity, far outpacing human oversight.

00:04:21: The real operational failure isn't a bad decision.

00:04:24: It's the absence of governance in the first place.

00:04:26: And that's where frameworks become so essential, right, to translate the abstract into something you can actually do.

00:04:32: Precisely.

00:04:33: That's why sources highlighted tools like the Protect framework, which Oliver Patel mentioned.

00:04:37: It takes those big abstract AI risks and turns them into concrete control checklists for regulated sectors like finance or health.

00:04:45: It's the move from concept to checklist.

00:04:47: Okay, let's pivot from rules to reality.

00:04:49: Let's do it.

00:04:50: Despite all the interest in AI, we're still seeing this massive adoption friction.

00:04:54: A McKinsey report noted that sixty-seven percent of firms are stuck in pilot purgatory.

00:04:59: Sixty-seven percent.

00:05:00: That is a huge number.

00:05:02: I mean, the technology is ready, the models are powerful, so the bottleneck has to be internal.

00:05:07: Why are so many companies stuck?

00:05:09: Harish Kumar delivered what he called the uncomfortable truth.

00:05:13: He argues that GenAI success is not a model problem.

00:05:16: It's a context problem.

00:05:17: A context problem.

00:05:18: Yeah.

00:05:19: The enterprises that scaled successfully often used the exact same models as the ones that failed.

00:05:24: The difference?

00:05:25: If you feed an AI agent bad operational context, it plans badly.

00:05:30: And he says most enterprises are operating with, and I'm quoting here, garbage context.

00:05:35: Garbage context.

00:05:36: I see the analogy, but wait, isn't that what retrieval-augmented generation, or RAG, is supposed to solve?

00:05:42: That's a great question.

00:05:43: And yes, RAG is an architectural necessity, but it can only organize the context you give it.

00:05:48: If the underlying data is siloed, inconsistent, or just reflects a messy manual human process, the AI agent simply scales that organizational dysfunction.

00:05:58: It's an organizational problem that no amount of prompt engineering can fix.
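A toy illustration of that point: retrieval only surfaces what is already in the store, so contradictory records flow straight into the prompt. The documents, query, and naive keyword scorer below are invented for illustration and stand in for a real vector search.

```python
def retrieve(query, documents, top_k=2):
    """Naive keyword-overlap retrieval standing in for a vector search."""
    scored = [(sum(word in doc.lower() for word in query.lower().split()), doc)
              for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_prompt(query, documents):
    """RAG assembles context; it cannot reconcile contradictions inside it."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Two silos disagree about the same customer: "garbage context" in action.
docs = [
    "CRM: customer ACME churn risk is low",
    "Billing: customer ACME has 3 overdue invoices, churn risk high",
]
print(build_prompt("What is ACME churn risk", docs))
```

Both conflicting records land in the prompt, and the model inherits the dysfunction, which is exactly the point being made here.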

00:06:01: Which leads directly to the leadership problem.

00:06:04: Thomas Zimmerer noted that AI projects fail in the boardroom, not the server room.

00:06:09: Exactly.

00:06:09: Many leaders are still operating with this linear pre-AI mindset.

00:06:14: They're focused on incremental, you know, five percent improvements.

00:06:18: But AI demands a different mindset.

00:06:20: It demands a non-linear, 5x transformation mindset.

00:06:24: And Dagmar Eisenbach reinforced this perfectly.

00:06:27: She said, the most important AI competency for leaders today isn't deep technical expertise, it's courage.

00:06:33: Courage to challenge sacred cows and initiate real change.

00:06:36: Courage to face the mess.

00:06:38: And as Jurislaw of Pensjoha emphasized, you can't just add an agent to a broken process.

00:06:44: And Glenn McCracken drove this home with a really powerful idea.

00:06:47: Yeah.

00:06:47: Transformation is blocked by a shocking lack of organizational self-awareness.

00:06:51: They don't know how their own workflows actually run.

00:06:53: Exactly.

00:06:54: Outside of the formal org chart, they just don't know.

00:06:56: The moment you introduce an AI agent, that dysfunction is brutally exposed.

00:07:00: So the real value of a pilot isn't to test the model.

00:07:03: It's to hold a mirror up to the organization.

00:07:05: Interesting.

00:07:05: So if the organization is the bottleneck, the workforce needs a fundamental upgrade.

00:07:10: Absolutely.

00:07:11: Connor Grennan highlighted the fluency first mandate.

00:07:14: Adoption stalls when organizations just roll out tools without investing in user skills.

00:07:19: He referenced an OpenAI paper that called education the ignition that sparks use case creation.

00:07:25: So you're moving education from a compliance checkbox to a strategic innovation driver.

00:07:30: That's the shift.

00:07:31: And we saw frameworks for this.

00:07:33: Brescia for Pandy's Gen AI mastery staircase maps the progression.

00:07:38: Basic prompt engineering is table stakes now.

00:07:40: The value moves to agentic AI and building domain specific applications.

00:07:45: And what about managing the uncertainty of these new operations?

00:07:48: That's where the GLOE framework from Dilip Sant comes in.

00:07:51: It stands for goals, limitations, opportunities, and evaluation.

00:07:55: It helps teams turn the inherent uncertainties of AI into measurable business value; it creates a structured path.

00:08:02: Okay, let's transition to the technology itself.

00:08:05: AI is being called the new operating system.

00:08:08: That's what Layla Shaban and Pradeep Sanyal said, and they're right.

00:08:12: And this realization has dramatically intensified the platform rivalry.

00:08:16: For sure.

00:08:17: We've been watching the open AI and Google battle unfold in real time.

00:08:21: Brian Solis noted that Google's Gemini III Pro is already actively eroding open AI's lead.

00:08:27: The whole thing is very volatile.

00:08:29: And Google is playing a different game.

00:08:30: Louisa Jurowski argued their dominance comes from sheer reach, their distribution.

00:08:35: How so?

00:08:36: They are aggressively pushing smart features into billions of existing user touchpoints.

00:08:41: Search, Android, Gmail.

00:08:43: The best model doesn't matter if it doesn't reach users where they already are.

00:08:46: It's about embedded capability, not just raw intelligence.

00:08:50: OK, so let's drill down into the architecture powering this.

00:08:53: Agentic AI seems to be the critical layer.

00:08:55: Agents are the orchestration layer, as Anthony Alcaraz put it, and what's fascinating is they aren't replacing traditional machine learning, they're integrating with it.

00:09:04: Okay, explain that.

00:09:04: So agents excel at linguistic prediction, understanding a user's intent, but classical ML like regression is still way better for statistical prediction on structured business data.

00:09:15: So the agent is the conductor, but the classic models are still the best musicians for certain jobs.

00:09:21: Perfect analogy.

00:09:22: The agent interprets a complex query like analyze Q4 sales dips.

00:09:27: It breaks that down into statistical tasks, maybe uses a cleaning primitive to prep the data, then hands it off to a classical model for the actual prediction.

00:09:35: And then it translates the result back into plain English for the user.

00:09:38: Exactly.

00:09:39: It's a hybrid architecture.
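As a sketch of that conductor pattern, the intent router and the trend model below are placeholder stand-ins, not any vendor's actual API: the "agent" layer interprets language, delegates the numeric work to a classical model, and renders the answer back in prose.

```python
import statistics

def classical_forecast(history):
    """Stand-in for a classical ML model: naive linear-trend extrapolation."""
    deltas = [b - a for a, b in zip(history, history[1:])]
    return history[-1] + statistics.mean(deltas)

def interpret_intent(query):
    """The 'agent' layer: a linguistic step mapping a query to a task name."""
    q = query.lower()
    if "sales" in q and ("dip" in q or "forecast" in q):
        return "forecast_sales"
    return "unknown"

def run_agent(query, sales_history):
    task = interpret_intent(query)                       # conductor: understand intent
    if task == "forecast_sales":
        prediction = classical_forecast(sales_history)   # hand off to the "musician"
        return f"Projected next-quarter sales: {prediction:.1f}"  # back to plain English
    return "Sorry, I can't handle that request."

print(run_agent("Analyze Q4 sales dips", [120.0, 110.0, 100.0]))
# → Projected next-quarter sales: 90.0
```

The division of labor is the point: the language model never does the arithmetic, and the regression model never parses the question.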

00:09:41: Speaking of architecture, Eduardo Ordox emphasized one component that dictates success or failure for these agents.

00:09:48: Memory.

00:09:48: He says over ninety percent of an agent's performance hinges on effective memory management.

00:09:54: Ninety percent?

00:09:55: Yeah.

00:09:55: If you treat memory as just flat storage, you cap performance.

00:09:59: You have to treat memory as architecture, distinguishing between working memory, episodic memory, and semantic memory.

00:10:06: Why does that distinction matter so much?

00:10:07: Because it allows the agent to recall context efficiently.

00:10:10: Ordox cited examples where structured memory systems boosted accuracy by as much as twenty-six percent compared to flat storage.

00:10:18: Twenty-six percent?

00:10:19: That's a huge gain.

00:10:20: It's the difference between an unreliable tool and a trusted assistant.
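One way to read "memory as architecture, not flat storage" is to separate the tiers explicitly. This is a minimal sketch with invented class and method names, not any particular agent framework's API:

```python
class AgentMemory:
    """Toy separation of working / episodic / semantic memory tiers."""

    def __init__(self, working_capacity=3):
        self.working = []    # short-lived context for the current task
        self.episodic = []   # append-only log of past interactions
        self.semantic = {}   # distilled, durable facts
        self.capacity = working_capacity

    def observe(self, message):
        self.episodic.append(message)          # everything is logged
        self.working.append(message)           # recent turns stay "in view"
        if len(self.working) > self.capacity:  # oldest turn falls out of focus
            self.working.pop(0)

    def learn_fact(self, key, value):
        self.semantic[key] = value             # promoted, durable knowledge

    def recall(self, key):
        # Durable facts survive even after working memory has moved on
        return self.semantic.get(key)

mem = AgentMemory()
mem.learn_fact("preferred_erp", "Oracle")
for turn in ["hi", "status?", "ship it", "thanks"]:
    mem.observe(turn)

print(mem.working)                  # only the 3 most recent turns
print(mem.recall("preferred_erp"))  # → Oracle
```

With flat storage, the early turns and the durable fact would compete for the same space; separating the tiers is what lets the agent recall context efficiently.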

00:10:25: And we are seeing these agents in the real world now.

00:10:28: Absolutely.

00:10:29: Shivkumar Iyer successfully tested OpenAI's agent builder to support Oracle ERP functions.

00:10:35: He said it acted like a reliable junior consultant.

00:10:38: And Jan Gilg highlighted where the biggest unlock comes from, which connects back to our earlier point.

00:10:44: Connecting data silos.

00:10:45: Exactly.

00:10:46: AI acts as a force multiplier when it can link finance, supply chain, and HR.

00:10:52: Bridging those functional silos delivers more value than just squeezing more accuracy from the core model.

00:10:57: And finally, we saw a strategic counter-movement against just massive foundation models.

00:11:02: Stephanie Gradwell pointed out that small-language models, SLMs, offer something critical.

00:11:07: Strategic autonomy.

00:11:08: They let enterprises regain control over the data and their roadmaps.

00:11:12: It shifts power back.

00:11:13: Let's pull all this theory down to earth now with insights from the World AI Expo in Dubai.

00:11:19: Arjun Dhananjayan said the atmosphere was clear.

00:11:22: AI is no longer a tool.

00:11:23: It's the new operating system.

00:11:25: And the focus was intensely practical.

00:11:28: Julia Hell noted discussions were all about tangible implementation, governance, and privacy-aware architectures.

00:11:34: The hype cycle is fading.

00:11:36: Industrial readiness is the new focus.

00:11:38: That's it.

00:11:39: Patrick Schaefler shared three key lessons for implementation that really cut through the noise.

00:11:44: First, AI only matters if it drives measurable business outcomes.

00:11:49: If you can't link it to revenue or cost, it's just a hobby.

00:11:52: Second, and this is crucial.

00:11:54: Governance and people issues slow you down far more than the tech.

00:11:58: Which reinforces our earlier point about the organizational bottleneck.

00:12:02: It does.

00:12:02: And his third point was that you don't need perfect data to begin.

00:12:06: Those who experiment now are the ones who stay ahead.

00:12:09: And once you take action, how do you measure success?

00:12:13: Greg Kokio highlighted how leading companies like Google and GitHub treat AI like a performance system.

00:12:18: Not just an application.

00:12:19: Right.

00:12:20: They track metrics across speed, quality, and developer experience.

00:12:24: It has to be instrumented and managed like any other core infrastructure.

00:12:28: We need a tangible example to bring this home.

00:12:31: Diane Hewan Eldridge shared a powerful use case with multimodal AI in industrial settings.

00:12:36: This is fascinating.

00:12:38: Imagine a worker on a remote oil rig or at a power substation.

00:12:43: They point a camera at some complex machinery.

00:12:45: Okay.

00:12:46: The multimodal AI instantly sees the engine, identifies the components, figures out what the worker is doing, and then delivers real-time, step-by-step visual guidance for the repair.

00:12:57: Wow.

00:12:57: So it's not about reading a manual, it's dynamic visual context based on what the AI is literally seeing in front of you.

00:13:04: That's a huge leap for safety and efficiency.

00:13:07: A huge leap.

00:13:08: So as we synthesize all this insight from the last two weeks, the message is singular and, I think, urgent.

00:13:14: The window for AI experimentation is closing.

00:13:17: It's time to get serious.

00:13:18: It is.

00:13:19: We've covered the expensive, non-negotiable arrival of governance, the internal organizational bottlenecks, and the architectural necessity of agentic AI.

00:13:27: The common thread is that AI is now core infrastructure, but its value is unlocked by the organization's readiness and intentional design.

00:13:34: Which brings us back to that provocative thought from Glenn McCracken.

00:13:39: If AI is, in fact, just holding a mirror up to your existing organizational dysfunction, then the real strategic question for you now is not which model to implement next.

00:13:51: It's how quickly you can build the internal self-awareness, fix the garbage context, and find the leadership courage needed to redesign your operations so AI can actually work.

00:14:01: If you enjoyed this deep dive, new editions drop every two weeks.

00:14:04: Also check out our other editions on ICT and tech, digital products and services, cloud, sustainability and green ICT, defense tech, and health tech.

00:14:11: Thank you for joining us and we look forward to having you subscribe and join us for the next deep dive.
