Best of LinkedIn: Artificial Intelligence CW 51 - 02
Show notes
We curate the most relevant posts about Artificial Intelligence on LinkedIn and regularly share key takeaways.
This edition explores the transition of artificial intelligence from speculative hype to strategic execution as the industry moves towards 2026. Experts emphasise that success now depends on agentic AI, where autonomous systems perform complex workflows, requiring leaders to shift from simple prompting to deep system configuration. A significant portion of the text focuses on the EU AI Act, highlighting its role as a global benchmark for governance, ethics, and compliance in high-stakes environments. Contributors argue that while AI can drastically enhance productivity in sectors like HR and law, it requires human-in-the-loop oversight to mitigate risks like algorithmic bias and data drift. The collection also offers practical implementation roadmaps and architectural frameworks, such as the Model Context Protocol, to standardise how AI interacts with existing business tools. Ultimately, the sources suggest that the competitive divide will be defined by AI literacy and the ability to integrate trustworthy, sovereign intelligence into the core of organisational operations.
This podcast was created via Google NotebookLM.
Show transcript
00:00:00: This deep dive is provided by Thomas Allgeier and Frennis, based on the most relevant LinkedIn posts about artificial intelligence from calendar weeks fifty-one and oh two.
00:00:09: Frennis supports enterprises with market and competitive intelligence, decoding emerging technologies, customer insights, regulatory shifts and competitor strategies.
00:00:18: So product teams and strategy leaders don't just react, but shape the future of AI.
00:00:24: Welcome back to the deep dive.
00:00:26: If you were tracking the industry conversation over the recent calendar weeks, it's pretty clear that AI has finally shifted.
00:00:32: We're out of the flashy pilot stage.
00:00:34: Yeah, the focus is different now.
00:00:36: It's no longer on if we should use AI, but purely on execution, governance, and really how to fundamentally restructure the business for this new, agentic era.
00:00:45: That's absolutely the core takeaway.
00:00:47: The whole conversation has moved from hype to, let's say, the hard reality of operating models.
00:00:52: Companies are realizing that unlocking value requires serious strategic governance and critically a complete overhaul of their tech stack to manage these autonomous systems.
00:01:01: Our deep dive into the most relevant posts from weeks fifty one and oh two absolutely confirmed this.
00:01:07: We found three massive themes that are defining the path forward right now.
00:01:11: The first one is purely technical.
00:01:13: It's this huge paradigm shift from simple user prompting to autonomous, agentic AI, and coded automation.
00:01:24: The systems are starting to design the processes themselves.
00:01:26: And that, of course, leads directly into the second theme.
00:01:30: The strategic evolution of the enterprise AI operating model, you know, how do you actually monetize this and unlock real commercial value at scale?
00:01:38: And finally, we saw a heavy, heavy focus on the regulatory tipping point, specifically governance, compliance, and the accelerating reality of the EU AI Act becoming the global standard for managing risk.
00:01:49: Okay, let's unpack that technical shift first, because it feels fundamental to everything else.
00:01:53: It really seems like the era of prompt engineering, that art of writing clever instructions, is ending.
00:01:57: It's ending fast.
00:01:58: It's giving way to true autonomy, where the system is just less dependent on human micromanagement.
00:02:04: So what does that leap in capability actually look like?
00:02:07: Well, the sources really crystallized the change.
00:02:10: Nandan Mulakara shared a fascinating example.
00:02:13: It involved a challenge that traditional RPA, you know, robotic process automation, was designed to solve.
00:02:20: The older stuff.
00:02:21: Right, the older workflow automation.
00:02:23: And previously, getting an AI to solve this required perfect, delicate prompt finagling.
00:02:30: And that took about fourteen milliseconds,
00:02:32: which sounds incredibly fast already.
00:02:33: But what happened with the new tech?
00:02:35: Nandan noted that using Claude Code, the same challenge was done in just nine milliseconds.
00:02:41: Look, the time difference, the five milliseconds, that isn't the real story here.
00:02:45: So what is the key difference?
00:02:46: The key difference was the approach: zero prompting was required.
00:02:49: The AI, it made a plan, tested it, found the bugs, fixed them, and just iterated by itself until it succeeded.
00:02:56: Wow.
00:02:56: So we're watching the AI act more like an intelligent junior programmer who designs the solution, not just a bot that follows simple instructions.
00:03:04: Exactly.
00:03:04: And that changes everything about how you design automation.
00:03:06: So what does that mean in practice?
00:03:08: It means, and Nandan Mulakara argues this, that automation strategies have to split.
00:03:14: Legacy systems, think old mainframes, they can stay on traditional RPA, but everything involving modern API and web automations needs to move toward these coded, agentic solutions, and they need to have programmatic guardrails built in.
00:03:28: This isn't just an update, it requires a whole new technical architecture.
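To make that plan-test-fix loop and the idea of programmatic guardrails concrete, here is a minimal Python sketch. It is not Nandan Mulakara's actual setup; the action whitelist, iteration budget, and function names are all illustrative assumptions.

```python
# Minimal sketch of a self-correcting, coded agentic loop with guardrails.
# All names (run_agent, revise, MAX_ITERATIONS, ALLOWED_ACTIONS) are
# illustrative assumptions, not details from the source.

MAX_ITERATIONS = 5                      # hard stop: the agent may not loop forever
ALLOWED_ACTIONS = {"read_record", "call_api", "write_draft"}  # whitelist guardrail

def guardrail(action: str) -> None:
    """Programmatic guardrail: reject any action outside the whitelist."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"Action '{action}' is not permitted for this agent.")

def run_agent(plan: list[str], execute, verify) -> bool:
    """Plan -> execute -> verify -> fix, iterating until success or budget spent."""
    for _attempt in range(MAX_ITERATIONS):
        for action in plan:
            guardrail(action)           # checked before every execution, not after
            execute(action)
        if verify():                    # the agent tests its own result
            return True
        plan = revise(plan)             # the agent rewrites its own plan on failure
    return False                        # escalate to a human once the budget is spent

def revise(plan: list[str]) -> list[str]:
    # Placeholder: a real agent would diagnose the failure and patch the plan.
    return plan

if __name__ == "__main__":
    done = run_agent(["read_record", "call_api"], execute=print, verify=lambda: True)
    print("succeeded:", done)
```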
00:03:32: That replatforming sounds like a tremendous amount of work.
00:03:35: And if these agents are autonomous, how do companies stop them from just running wild across their systems?
00:03:41: That's where the discipline comes in.
00:03:42: I mean, Elizabeth Kumpan drove this point home really hard.
00:03:45: Autonomy without a defined architecture is just chaos.
00:03:48: You cannot, ethically or commercially, just unleash a smart agent into the enterprise without a clear framework.
00:03:55: So what's the most critical part of that framework?
00:03:57: I saw some of the architecture diagrams and they were intense.
00:04:00: They are dense, but you can simplify the function into, let's say, two key control points.
00:04:04: At the very top, you need what's called the intent layer.
00:04:07: That's the high-level why, defining the system's policies and constraints.
00:04:11: Okay, the big picture rules.
00:04:13: Exactly.
00:04:13: And then at the bottom, you have the action layer, which is the how: the actual execution, the API calls.
00:04:19: But the critical point is that all of this is supported by a strict governance and control plane.
00:04:25: Without that scaffolding, you're basically running unmanaged code in production.
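As a rough sketch of those two control points plus the control plane, the hedged Python below puts a policy check (intent layer) in front of every execution (action layer), with a logger standing in for the governance plane. The policy rules and action names are invented for illustration.

```python
# Sketch of the layered control flow: intent layer (policy) on top, action
# layer (execution) at the bottom, a governance/control plane recording
# every decision. All rules and actions here are illustrative assumptions.

import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("control_plane")   # stand-in for the governance plane

# Intent layer: the high-level "why" -- policies and constraints.
POLICY = {
    "max_refund_eur": 100,
    "forbidden_actions": {"delete_customer"},
}

def intent_layer_allows(action: str, params: dict) -> bool:
    """Policy check applied before anything executes."""
    if action in POLICY["forbidden_actions"]:
        return False
    if action == "issue_refund" and params.get("amount", 0) > POLICY["max_refund_eur"]:
        return False
    return True

# Action layer: the "how" -- the actual execution, the API calls.
def action_layer(action: str, params: dict) -> str:
    return f"executed {action} with {params}"   # stand-in for a real API call

def dispatch(action: str, params: dict) -> str:
    """Every request passes policy first; the control plane logs the outcome."""
    if not intent_layer_allows(action, params):
        audit.warning("BLOCKED %s %s", action, params)
        raise PermissionError(f"Policy blocked '{action}'")
    audit.info("ALLOWED %s %s", action, params)
    return action_layer(action, params)

print(dispatch("issue_refund", {"amount": 40}))   # within policy, so it runs
```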
00:04:29: Okay, so that's the internal structure.
00:04:30: Let's talk about external connections.
00:04:32: I was surprised how often the sources flagged agent connectivity as a, you know, a critical failure point.
00:04:39: It's a huge bottleneck.
00:04:41: And Steve Norrie's idea really caught my eye here.
00:04:43: He called the model context protocol the USB for AI systems.
00:04:48: It's a brilliant analogy.
00:04:49: It's so memorable because that is exactly the problem it solves.
00:04:53: How so?
00:04:53: Steve Norrie explained that right now, every time you want an agent to use a new internal tool or API, you have to build a custom brittle integration.
00:05:03: MCP aims to standardize all of that.
00:05:05: So the tools sort of announce what they can do?
00:05:08: Precisely.
00:05:09: They declare their capabilities.
00:05:11: I can query the HR database.
00:05:13: I can process a refund.
00:05:15: And the central AI model requests what it needs using that standard protocol.
00:05:19: Yeah.
00:05:19: This makes large-scale deployment actually scalable and repeatable.
00:05:23: Instead of forcing every agent to rely on a one-off fragile connector.
00:05:27: Exactly.
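To picture that declare-your-capabilities handshake without pulling in a real SDK, here is a plain-Python sketch. The dictionary mirrors the general shape of an MCP tool declaration (name, description, JSON-Schema input), but the process_refund tool and its schema are made up, and the actual server transport is deliberately omitted.

```python
# Hedged sketch of an MCP-style capability declaration in plain Python.
# MCP tools advertise a name, a description, and a JSON-Schema input; this
# dict mirrors that shape, but the tool itself is invented for illustration
# and no real server transport is shown.

REFUND_TOOL = {
    "name": "process_refund",
    "description": "Issue a refund for a given order, up to the policy limit.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string"},
            "amount": {"type": "number", "minimum": 0},
        },
        "required": ["order_id", "amount"],
    },
}

def list_tools() -> list[dict]:
    """What a tools/list-style request would return: the declared capabilities."""
    return [REFUND_TOOL]

# The central model reads the declarations and calls tools by name --
# one standard handshake instead of a custom connector per integration.
for tool in list_tools():
    print(tool["name"], "->", tool["description"])
```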
00:05:28: So if the tech is moving toward building these robust, autonomous agents, the skill set for developers and strategists must also- Oh,
00:05:35: completely.
00:05:36: It has to change, right?
00:05:37: We're definitely moving beyond just writing clever prompts.
00:05:40: Absolutely.
00:05:41: And Bree Shorpandei made a very strong argument that context engineering is now the scalable approach.
00:05:50: It's completely surpassing the value of just prompt engineering.
00:05:50: Okay, break that down.
00:05:51: Prompt engineering was focused on the input.
00:05:53: Right.
00:05:54: How to write clever instructions.
00:05:55: Context engineering is focused on the environment.
00:05:58: The environment.
00:05:59: It means defining the structured operating environment for the agent.
00:06:02: Its specific goals, its boundaries, its decision rules.
00:06:11: It's creating the complete, well-defined world the agent is allowed to think inside.
00:06:11: So it's about building the guardrails for intelligent behavior.
00:06:14: Exactly, which is infinitely more complex and valuable than just crafting a better query.
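One hedged way to visualize context engineering: the agent receives a structured operating environment rather than a clever prompt. The goal/boundary/decision-rule structure follows the discussion above; every concrete value below is an invented example.

```python
# Illustrative "context engineering" spec: instead of a clever one-off
# prompt, the agent gets a structured operating environment. Only the
# goal/boundary/decision-rule structure follows the conversation; the
# field values are assumptions made up for this sketch.

AGENT_CONTEXT = {
    "goal": "Resolve standard billing inquiries end to end.",
    "boundaries": {
        "may_access": ["billing_db", "ticket_system"],
        "may_not": ["change_contracts", "contact_customers_directly"],
    },
    "decision_rules": [
        "If the disputed amount exceeds 100 EUR, escalate to a human.",
        "If customer data is incomplete, request clarification, do not guess.",
    ],
    "escalation_contact": "billing-team@example.com",
}

def build_system_context(ctx: dict) -> str:
    """Render the environment as the agent's standing instructions."""
    rules = "\n".join(f"- {r}" for r in ctx["decision_rules"])
    return (
        f"Goal: {ctx['goal']}\n"
        f"Allowed tools: {', '.join(ctx['boundaries']['may_access'])}\n"
        f"Decision rules:\n{rules}"
    )

print(build_system_context(AGENT_CONTEXT))
```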
00:06:20: That technical shift to agentic AI leads us directly to our second theme.
00:06:25: How do organizations restructure their operating model to actually capitalize on these powerful new tools?
00:06:31: Because having the tools is not the same as having the value.
00:06:34: That's the strategic challenge right now.
00:06:36: I mean, Pedro Martins pointed out that a huge number of AI initiatives fail when they hit production.
00:06:41: And not because the models are bad?
00:06:43: No, not at all.
00:06:44: It's because of execution, orchestration, and just poor workflow design.
00:06:48: Drick Munir really elaborated on that.
00:06:49: He stressed that workflows are the unseen eighty percent of any successful AI agent.
00:06:55: He warned that deploying agents without deeply understanding the tribal knowledge, the informal handoffs, the email chains, he called it building sand castles.
00:07:04: That's a perfect metaphor.
00:07:05: The moment a workflow changes, your agent breaks.
00:07:08: So if that execution friction is the main obstacle, where should a large organization realistically start?
00:07:14: That's the key question.
00:07:16: And Chompay Dramada offered some very pragmatic advice.
00:07:20: He essentially argued that boring wins.
00:07:22: Boring wins.
00:07:23: I like that.
00:07:24: He suggests the very first AI workers you build should focus on high volume, high pain, but low drama customer tasks.
00:07:33: Things like processing standard refunds, issuing simple billing credits.
00:07:38: Exactly.
00:07:39: Or just intake for claims.
00:07:40: These are the areas where you establish clear, measurable value and, most importantly, build internal trust with human employees before you try to scale to those big, high-risk, transformative projects.
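A first "boring wins" worker could look something like this sketch: auto-handle only the clearly standard cases and route everything else to a person, which is how that internal trust gets built. The 50 EUR threshold and the ticket fields are assumptions, not details from the source.

```python
# Sketch of a "boring wins" AI worker: high volume, high pain, low drama.
# It auto-handles only clearly standard refunds and hands everything else
# to a human reviewer. The threshold and field names are invented.

AUTO_APPROVE_LIMIT_EUR = 50

def handle_refund(ticket: dict) -> str:
    amount = ticket["amount_eur"]
    reason = ticket["reason"]
    if reason == "standard_return" and amount <= AUTO_APPROVE_LIMIT_EUR:
        return f"auto-approved refund of {amount} EUR"   # measurable, low-risk value
    # Anything unusual earns trust by going to a person, not by guessing.
    return f"escalated to human review ({reason}, {amount} EUR)"

print(handle_refund({"amount_eur": 20, "reason": "standard_return"}))
print(handle_refund({"amount_eur": 900, "reason": "fraud_suspected"}))
```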
00:07:53: But isn't there a danger there that focusing only on the boring wins stops companies from pursuing those truly transformative high-reward applications that could fundamentally change the business?
00:08:03: That is the tension.
00:08:04: And Alexander Finger addressed this head-on.
00:08:06: He argued that companies don't actually need an AI strategy as their starting point.
00:08:10: What do they need instead?
00:08:11: They need to redefine the problem they're trying to solve.
00:08:14: He says you need an automation strategy first.
00:08:16: Start with the outcome, not the tool.
00:08:17: Precisely.
00:08:19: AI is simply the most effective, most modern tool to achieve core business objectives, like maximizing human effectiveness or increasing speed and accuracy.
00:08:29: If you start by optimizing AI use cases, you risk improving the parts instead of questioning the whole process.
00:08:36: That feels like a real return to first principles.
00:08:38: Now connected to that structural thinking, there's this major conversation happening around data security and control.
00:08:44: I saw Ajit Kumar Panda highlighting the rise of what's being called sovereign AI.
00:08:48: Yeah, this is a big one.
00:08:50: It's the realization that renting intelligence, you know, sending your sensitive proprietary data to public third party APIs for core business functions creates immense strategic and security risk.
00:09:01: It's like outsourcing your brain.
00:09:02: Exactly.
00:09:03: Sovereign AI is the opposite of that.
00:09:04: Ajit Kumar Panda argues that while renting intelligence is fine for generic tasks, like summarizing a public report, you have to build sovereignty for your core business functions.
00:09:13: Full control over your infrastructure, your data, your models, your talent.
00:09:18: Your proprietary data is your competitive identity.
00:09:21: You have to keep that intelligence under strict control.
00:09:24: It's non-negotiable.
00:09:26: And ultimately, all of these shifts, from prompt to agent, from AI strategy to automation strategy, it all comes back to the readiness of the people inside the organization.
00:09:35: That's right.
00:09:36: Gabriel Millian outlined what he called the AI skill ladder.
00:09:40: Professionals have to move from foundational literacy, just understanding what AI is, to operational mastery, knowing how to deploy it, and finally, to strategic vision.
00:09:49: If leaders can't speak this language, they lose relevance fast.
00:09:52: They do.
00:09:53: And Joshua Miller brought the sharpest point home about leadership attitude.
00:09:57: He warns that the real subtle risk of Gen AI isn't the technology itself.
00:10:01: What is it then?
00:10:02: It's how quickly leaders become comfortable enough to stop questioning the output.
00:10:06: As he put it, Gen AI won't kill critical thinking, but comfortable leaders will.
00:10:12: The tool is powerful, but you have to intentionally interrogate what it gives you, challenge its logic, and assume it's wrong until you can prove it's right.
00:10:21: That sets the perfect stage for our final theme, the regulatory tipping point.
00:10:26: For years, governance was theoretical, a nice to have.
00:10:30: But, twenty-twenty-six is rapidly becoming the year that regulation becomes enforceable.
00:10:35: Anna Carolina Alvarez Gill pointed out that the EU AI Act is already functioning as a global reference point.
00:10:41: And this cannot be treated as just a legal exercise you delegate to the compliance department.
00:10:45: Patrizia Bertini was very clear on this.
00:10:47: The EU AI Act is fundamentally a product design problem.
00:10:51: You can't just bolt it on at the end.
00:10:52: No, you can't build a system and then try to retrofit responsible AI principles later.
00:10:57: They have to be baked into the product design from the very, very beginning.
00:11:00: And that necessity for continuous oversight is driven by the nature of these systems.
00:11:05: Michael Scharcement noted that classic software, it either works or it crashes with a clear error message.
00:11:10: But AI doesn't crash.
00:11:12: No, it drifts silently.
00:11:14: And a silent drift is terrifying, especially for high risk systems operating autonomously.
00:11:20: The model slowly starts to decay or its environment changes and you won't get a clear error signal.
00:11:26: So you have to constantly prove that it's still doing what you think it's doing.
00:11:30: You have to.
00:11:31: You have to continuously monitor and document the system to prove it still operates according to its initial design.
00:11:37: If you're using AI in production, you must have the processes to back up the claim that the model hasn't become silently wrong.
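Because a drifting model emits no error message, monitoring usually means watching distributions shift. Below is a minimal, hypothetical drift check that compares live scores against a sign-off reference window using a three-sigma-style alert; the data and the threshold are illustrative, not a compliance standard.

```python
# Minimal sketch of silent-drift monitoring: compare the live score
# distribution against a reference window captured at model sign-off.
# The three-sigma alert threshold and all data are illustrative
# assumptions; real systems tune thresholds and log the evidence
# as part of their compliance documentation.

from statistics import mean, stdev

def drift_score(reference: list[float], live: list[float]) -> float:
    """Shift of the live mean, measured in reference standard deviations."""
    ref_sd = stdev(reference) or 1e-9   # avoid division by zero
    return abs(mean(live) - mean(reference)) / ref_sd

reference_scores = [0.62, 0.58, 0.65, 0.60, 0.63, 0.59, 0.61]   # at sign-off
live_scores = [0.71, 0.74, 0.69, 0.73, 0.75, 0.72, 0.70]        # this week

score = drift_score(reference_scores, live_scores)
if score > 3.0:   # three-sigma-style alert threshold (assumed)
    print(f"DRIFT ALERT: {score:.1f} sigma shift -- document and investigate")
else:
    print(f"stable: {score:.1f} sigma shift")
```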
00:11:44: And the cost of that silent drift or any failure is just astronomical.
00:11:48: Daniel L. gave a remarkably stark example of a recent fine.
00:11:52: I saw this one.
00:11:53: A recruiting platform was hit with a fifteen million dollar fine for a biased AI hiring tool.
00:11:58: Right, that fine made the headlines, but Daniel L. highlighted the real cost.
00:12:02: The remediation.
00:12:03: The company was then required to spend two hundred million in remediation just to re-review three hundred and forty thousand affected applications.
00:12:11: That number just illustrates the exponential financial danger of failing on governance versus the cost of simply investing in it upfront.
00:12:18: Governance failure is vastly more expensive.
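To make the scale concrete with the figures quoted here: the remediation works out to roughly 588 dollars per re-reviewed application and about thirteen times the fine itself, as this quick check shows.

```python
# Quick check on the figures quoted above: the remediation dwarfs the fine.
fine = 15_000_000            # USD fine for the biased hiring tool
remediation = 200_000_000    # USD spent on remediation
applications = 340_000       # affected applications re-reviewed

print(remediation / applications)   # ~588 USD per application
print(remediation / fine)           # ~13.3x the fine
```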
00:12:20: Given that enforcement reality, multinationals are facing a nightmare because the global regulatory landscape is so fragmented.
00:12:27: Marcus Shuler and Alexander Engelheim presented a really compelling contrast.
00:12:31: It's the difference between unity and chaos, really.
00:12:34: The EU-AI Act provides one unified legal framework for twenty-seven countries.
00:12:39: And in the US.
00:12:40: Compare that to the US, where in twenty-twenty-five alone, thirty-eight states passed a hundred and fifty-nine uncoordinated AI laws.
00:12:49: For a global company, the narrative of the unregulated US is misleading.
00:12:54: It's actually a confusing, complex, and conflicting patchwork of rules.
00:12:57: It's far less clear than the unified European approach.
00:13:00: Far less clear.
00:13:01: But doesn't that fragmentation in the US allow for faster innovation?
00:13:04: If Europe is unified but slow, isn't that a commercial risk?
00:13:07: That's the classic tension.
00:13:08: But the sources argue that strong governance ultimately drives speed.
00:13:12: It doesn't stifle it.
00:13:14: Mariam Machuri's research confirms this.
00:13:16: Leaders attribute a massive twenty-seven percent of AI efficiency gains directly to having strong governance in place.
00:13:21: Wow, twenty-seven percent.
00:13:23: When you have trust, accountability, and clear risk boundaries, your organization feels empowered to move faster and with more confidence.
00:13:30: Governance enables, it doesn't just constrain.
00:13:33: So let's tie this all back to you as you navigate this new agentic era.
00:13:37: We've established that autonomous systems are acting rapidly, executing complex tasks in milliseconds, often outside human oversight.
00:13:44: And Daniel L. warned that the same tools that boost productivity are also being used for state-sponsored cyber espionage, operating at machine speed against defenses still designed for human speed.
00:13:55: This raises the crucial question for the future of your enterprise.
00:13:58: If the risks and autonomous actions are occurring in milliseconds, is your governance strategy, which relies on human approval, human oversight, and human accountability, robust enough to manage machine-speed risks?
00:14:09: Or are you designing human-speed processes for an entirely new speed of risk?
00:14:13: If you enjoyed this deep dive, new deep dives drop every two weeks.
00:14:18: Also check out our other editions on ICT and tech, digital products and services, cloud sustainability and green ICT, defense tech and health tech.
00:14:25: Thanks for tuning in and make sure to subscribe so you don't miss the next deep dive.