Best of LinkedIn: Artificial Intelligence CW 49/ 50
Show notes
We curate most relevant posts about Artificial Intelligence on LinkedIn and regularly share key takeaways.
This edition provides a comprehensive overview of the current landscape and future trajectory of Artificial Intelligence (AI), with a strong emphasis on governance, adoption, and Agentic AI. Several authors highlight the critical need for robust AI governance and compliance, particularly concerning the EU AI Act, which is viewed as a major driver for implementing structured risk management frameworks like ISO 42001. Furthermore, the text addresses challenges in AI adoption and execution, noting that while enthusiasm is high, actual, measurable Return on Investment (ROI) and successful scaling are often hindered by poor implementation, a lack of leadership capability, and the threat of Shadow AI to intellectual property. Finally, the emergence of Agentic AI is discussed as a pivotal shift, moving AI from simple tools to autonomous teammates that enhance productivity across sectors, while simultaneously introducing new security risks like cross-user prompt injection and ethical concerns regarding the future of employment and societal reliance on algorithms.
This podcast was created via Google NotebookLM.
Show transcript
00:00:00: This deep dive is provided by Thomas Allgaier and Frennus, based on the most relevant LinkedIn posts about artificial intelligence in calendar weeks, forty nine and fifty.
00:00:09: Frennus supports enterprises with market and competitive intelligence, decoding emerging technologies, customer insights, regulatory shifts and competitor strategies.
00:00:19: So product teams and strategy leaders don't just react, but shape the future of AI.
00:00:25: Welcome to the deep dive.
00:00:28: So this week, we're digging into a huge pile of insights from LinkedIn, really looking at the top artificial intelligence trends that we're defining the end of the year.
00:00:37: Yeah, and if you're expecting another chat about the latest model, writing better poetry, it's not what's happening.
00:00:43: The whole conversation is shifted.
00:00:44: It's no longer about the novelty of it all.
00:00:46: It's now about control.
00:00:47: It's about execution and building actual systems.
00:00:50: We're seeing leaders, engineers, regulators, they're all grappling with how to govern, orchestrate, and you know, critically, how to secure AI when it leaves the lab.
00:01:00: So it's moved from what if to how to.
00:01:02: Exactly.
00:01:03: The focus is one hundred percent on measurable value now.
00:01:06: OK, so let's unpack all that.
00:01:07: Our mission here is to pull out the most important nuggets for you if you're a professional in the ICT and tech industry.
00:01:13: And we're going to cover everything from why these huge pilots are failing to which global regulations you absolutely cannot ignore.
00:01:21: And that failure point, I mean, that's where we have to start.
00:01:23: The data is pretty sobering.
00:01:26: It really is.
00:01:27: Let's get into it.
00:01:28: The hard truth about adoption.
00:01:31: We're calling this first theme AI strategy and the execution gap.
00:01:36: The big consensus we're seeing is that AI underperformance, it isn't really a model quality problem.
00:01:43: It's a failure of governance and just, well, weak execution.
00:01:47: It's even more than that.
00:01:48: It's actually a retreat.
00:01:49: John Nordmark shared this truly surprising data point.
00:01:52: I
00:01:52: saw that one.
00:01:53: AI adoption in big companies is actually declining, not rising, not even plateauing.
00:01:57: They tried it and now they're pulling back.
00:01:59: Declining, so not just slowing down, but actually going backwards.
00:02:02: That completely flies in the face of the industry narrative.
00:02:05: It does.
00:02:05: And Stephanie Gradwell frames the why perfectly.
00:02:09: She says most leadership teams don't really have an AI problem.
00:02:12: OK.
00:02:12: They have an execution problem.
00:02:14: The models, they work beautifully in the lab, you know, and a proof of concept.
00:02:17: In a clean room.
00:02:18: Exactly.
00:02:19: But then you drop it into the messy reality of business workflows and legacy systems.
00:02:25: and the core business KPIs, they don't change.
00:02:28: They just can't seem to bridge that gap from a successful pilot to actual enterprise value.
00:02:34: So the challenge isn't building the shiny new tool.
00:02:38: It's redesigning the entire factory floor to actually use it.
00:02:41: That's the perfect analogy.
00:02:43: And the solution we're seeing mentioned everywhere is a return to basics.
00:02:46: Start small, stay focused, iterate.
00:02:48: Right.
00:02:49: Kaushik Bay suggests this very practical four-step approach.
00:02:52: First, you rigorously map your workflow pain points.
00:02:55: Okay, standard stuff.
00:02:56: Right.
00:02:56: But then second, you pick a high-frequency low-risk task, something that won't, you know, crash the business if it fails.
00:03:01: Third, you automate it with a controlled agent.
00:03:04: And then finally, you measure the impact for a very strict thirty days.
00:03:07: Wait, but how is that different from what we were doing five years ago?
00:03:10: You know, mapping pain points, running thirty-day tests.
00:03:14: That's a great question.
00:03:16: The difference is the type of automation.
00:03:18: That this isn't just scripting a task.
00:03:20: It's about introducing a controlled agent that can operate with some autonomy.
00:03:24: And if you get that quick, quantifiable ROI win, that's your first.
00:03:30: internal success story.
00:03:32: And that's what unlocks the budget.
00:03:33: That's what unlocks the budget and the will for a bigger transformation, which brings us all back to leadership.
00:03:39: Neil Harrison reflected on this.
00:03:41: He said the failure in twenty twenty-five adoption wasn't a lack of ambition.
00:03:45: It was a lack of leadership capability.
00:03:47: Exactly.
00:03:48: Real
00:03:48: success means redesigning the work itself for human AI collaboration, not just trying to bolt a new tool into a broken process and that change management, that's the hard part.
00:03:58: That idea of integrated systems, of systems that collaborate, that's a perfect segue into our next big theme.
00:04:03: Absolutely.
00:04:04: The focus has really moved away from just a single model's performance and toward what everyone's calling the agentic shift and orchestration.
00:04:11: Okay, the word agent is everywhere right now.
00:04:15: What does that actually mean for someone in tech?
00:04:17: Well, Breesh Gashore Pandey points out that the foundational models, the LLMs, they're becoming commodities.
00:04:23: They're interchangeable.
00:04:24: Right.
00:04:24: The real value, and so the real engineering focus, has shifted to what he calls agentic stacks.
00:04:31: These are the system layers that handle things like memory, complex execution, and the orchestration that controls the whole process.
00:04:39: This is where it gets really interesting for me.
00:04:41: Nandan Malakara defines what agents do differently.
00:04:45: They plan, so they break big tasks into smaller steps.
00:04:49: They coordinate systems, they learn from feedback, and this is the key part, they intelligently invoke other tools to act.
00:04:56: They're not just tools, they're intelligent partners.
00:04:58: And this isn't just hype, it's delivering real economic results.
00:05:01: Richard Turen reported on an anthropic survey.
00:05:04: It found eighty percent of technical leaders are already reporting measurable economic returns from agents.
00:05:09: Eighty percent, that's huge.
00:05:11: It is.
00:05:12: The most mature area, no surprise, is software development.
00:05:15: Adoption there is at ninety percent.
00:05:18: We're seeing examples like Neobank N-twenty-six achieving seventy percent automation in certain processes.
00:05:24: Okay, eighty percent measurable return.
00:05:26: Are we talking about small pilot projects here?
00:05:28: or is this scalable, sustained ROI?
00:05:31: That is the big question.
00:05:33: And the industry's answer is to build the infrastructure to make it scalable.
00:05:37: That's where orchestration comes in.
00:05:39: You can't just have a fleet of autonomous agents running wild.
00:05:42: You need a control plane.
00:05:43: You need a control plane.
00:05:45: Vishal Mishra announced Microsoft's agent three sixty five framing it exactly as that the new enterprise control plane
00:05:52: control plane.
00:05:53: so the foundational models are becoming interchangeable.
00:05:55: isn't creating a Microsoft controlled highway just trading one kind of vendor lock-in for another?
00:06:00: that's
00:06:00: the strategic risk for sure but it shows how urgent the need is for a system of systems they call it the highway traffic control and safety system for all your AI agents.
00:06:10: the point is leaders know they need centralized management to scale this safely
00:06:14: because that creates a massive amount of risk.
00:06:16: Massive risk.
00:06:18: Rob Vanderveer flagged cross user prompt injection as the single biggest threat in agentic AI right now.
00:06:24: So what is that exactly?
00:06:26: Imagine
00:06:26: you have a privileged automated agent scanning say a public forum.
00:06:32: An attacker can sneak a malicious instruction, the prompt injection, into a totally normal looking comment.
00:06:38: And because the agent is designed to follow instructions.
00:06:40: It follows the malicious one.
00:06:42: That instruction could tell it to leak sensitive data, delete files, change configurations, anything.
00:06:48: Wow.
00:06:48: So the recommended defense is pretty harsh, but it's necessary.
00:06:52: You have to limit an agent's access to untrusted data dynamically.
00:06:56: You have to treat all agents as potential bad actors and operate with zero trust.
00:07:01: That security concern flows right into the regulatory landscape, which is trying to build a cage around all this
00:07:06: power.
00:07:06: Let's turn to that.
00:07:07: governance, trust, and the regulatory reality.
00:07:11: The EU AI Act is really anchoring this whole global debate.
00:07:14: Yeah, but there's a lot of, let's say, paranoia out there.
00:07:18: Before teams panic and start huge compliance projects, Alex Sureski gives some simple advice.
00:07:23: Just as the first question, have you checked whether the AI Act even applies to your system at all?
00:07:28: So many teams just slap AI enabled on a marketing page and create their own regulatory headache.
00:07:34: Right.
00:07:34: You have to look at the actual definitions.
00:07:36: Kimberly Friesen shared some key indicators for figuring this out.
00:07:40: A regulated system has to be machine-based, have a certain level of autonomy, and that threshold can be pretty low.
00:07:46: Okay.
00:07:47: And most importantly, it has to have a real influence on a physical or virtual environment.
00:07:53: A lot of simple prediction tools just fall outside of that scope.
00:07:56: But if your system does fall into that high-risk category, things like credit scoring, underwriting, hiring tools, what are the immediate requirements?
00:08:04: Hegbert Weatherborne lists them out.
00:08:05: You need rigorous risk management, demonstrable data quality controls, detailed technical documentation, mandated human oversight, and verifiable accuracy.
00:08:14: It's non-negotiable.
00:08:15: And the penalties are serious.
00:08:17: We're talking up to fifteen million or three percent of your global annual revenue.
00:08:21: So what's the blueprint for actually doing all that?
00:08:24: Gurdiep Singh Chopra positions ISO forty two thousand one as that blueprint.
00:08:30: It's the international standard for an AI management system.
00:08:33: It basically operationalizes the law.
00:08:35: It tells you how to systematically meet those legal obligations.
00:08:38: And for all of us who rely on third-party vendors for this stuff, what do we need to be demanding from them?
00:08:44: Patricia Bertini says you have to proactively demand responsible AI documentation.
00:08:50: That means getting training data summaries, a clear copyright compliance policy, and model documentation forms.
00:08:56: If a vendor can't give you that.
00:08:58: It's a huge red flag.
00:08:59: It's a huge red flag.
00:09:00: And it means the legal exposure falls squarely back on you, the client.
00:09:03: What's fascinating is the global fragmentation here.
00:09:06: Lexi Reese discussed the new one rule AI executive order in the U.S.
00:09:10: from the Trump administration.
00:09:11: It favors voluntary standards.
00:09:13: A completely different philosophy from the EU.
00:09:15: Exactly.
00:09:16: And that creates a massive strategic headache for global companies.
00:09:19: The reality is you have to build your systems to the strictest standard, which is the EU AI Act, and then kind of work backwards from there.
00:09:25: So the EU is setting the global bar effectively.
00:09:29: They
00:09:29: are.
00:09:29: And then you have countries like Italy.
00:09:31: According to Bethesimo Novec, going even further, they're passing their own national laws to criminalize harmful deep fakes, which goes beyond even the EU framework.
00:09:41: It's a fractured landscape.
00:09:43: It really is.
00:09:44: And all that complexity hits the engineering teams directly.
00:09:48: Which brings us to our next theme.
00:09:50: Right.
00:09:50: Engineering, IP, and the future of work.
00:09:53: The tools are changing so fast, the roles have to change with them.
00:09:57: Absolutely.
00:09:57: For developers, the shift is huge.
00:10:00: Dr.
00:10:00: Ingir Ganem described it as moving from the busy bee who does hands-on coding to the air traffic controller who oversees, reviews, architects, and orchestrates the whole system.
00:10:10: The value isn't in how many lines of code you type anymore.
00:10:13: And that's all being driven by AI coding assistance.
00:10:16: Hague Lupesco shared some stats.
00:10:17: Ninety percent adoption reported.
00:10:19: Developers shipping sixty percent more pull requests.
00:10:22: Sounds amazing.
00:10:23: It does, but here's the catch.
00:10:24: Quality metrics like change failure rate, they don't consistently improve.
00:10:28: So adoption doesn't equal impact.
00:10:30: Not automatically?
00:10:32: You need a structured rollout and proper enablement to make sure you're getting high quality alongside that high velocity.
00:10:38: Which means the old problems haven't gone away.
00:10:40: Nick Kay pointed out that classic vulnerability, SQL injection, cross-site scripting are still common.
00:10:46: And worse, the LLM assistance can actually amplify those bad patterns if the developer doesn't have the fundamental knowledge to spot insecure code in the output.
00:10:55: Wow.
00:10:56: Okay, and adding another layer of risk to all this is the IP battle brewing in Europe.
00:11:02: Yes, the details from this landmark ruling in Germany from the Munich regional court against open AI They're absolutely vital for anyone in this space.
00:11:11: So what does the German court actually rule?
00:11:14: It was a case brought by Jima, the German music rights organization.
00:11:17: The court found that the AI's memorization and then reproduction of copyrighted song lyrics was infringement.
00:11:23: And they rejected the usual defense.
00:11:25: They rejected the text and data mining defense.
00:11:28: The argument that training on copyrighted data is fair use.
00:11:31: It didn't hold up.
00:11:32: The court basically said if the model reproduces the material, it means it memorized it and that's infringement.
00:11:38: which creates a ton of legal uncertainty because that ruling goes directly against a recent decision in the UK on the same issue.
00:11:45: Exactly.
00:11:46: It fractures the legal landscape for generative AI across Europe.
00:11:50: Let's zoom out for a second to the macroeconomic impact.
00:11:53: Envato Ordax reported a staggering statistic.
00:11:56: AI infrastructure was responsible for ninety-two percent of US GDP growth in the first half of twenty twenty-five.
00:12:03: Ninety-two percent.
00:12:04: That's not just a tech story.
00:12:05: That's the entire macroeconomic growth engine.
00:12:08: It's immense.
00:12:09: But Zemina Ahmad warns that it's leading to what she calls jobless growth.
00:12:13: Productivity is skyrocketing.
00:12:15: But employment in IT and business services is actually declining.
00:12:20: Companies are quietly pursuing a strategy of more output with fewer people, and that's hitting junior staff and freelancers the hardest.
00:12:26: A really sobering perspective.
00:12:28: It connects that initial execution gap we talked about to this ultimate goal of massive, efficient, and potentially job-displacing growth.
00:12:34: We've covered
00:12:35: a lot of ground, so to summarize this whole deem dive, the AI battleground It's not about the models anymore.
00:12:41: It's about the systems, the governance, and the organizational change you need to actually get real value.
00:12:47: Right.
00:12:47: The focus now is all on control, security, and global compliance.
00:12:53: And if we think about the long-term human cost of this, there's an interesting point from Borogur.
00:12:58: What's that?
00:12:59: He raised the risk of Gen Z. of younger workers heavily relying on AI for critical social health and career decisions.
00:13:07: And for us, as the professionals building and deploying these systems we just said need to be treated with zero trust, that raises a really important question.
00:13:14: Which is?
00:13:15: What kind of organizational liability are we creating when we inherit a workforce that's potentially de-skilled in critical thinking and independent judgment?
00:13:24: That vulnerability created by a reliance on the very systems we're trying to lock down, that's something every leader needs to be thinking about urgently.
00:13:31: If you enjoy this deep dive, new episodes drop every two weeks.
00:13:34: Also check out our other editions on ICT and tech, digital products and services, cloud, sustainability and green ICT, defense tech and health tech.
00:13:43: Thank you for joining us for this deep dive.
00:13:45: Don't forget to subscribe.
New comment