Best of LinkedIn: Artificial Intelligence CW 45/46
Show notes
We curate most relevant posts about Artificial Intelligence on LinkedIn and regularly share key takeaways.
This edition focuses overwhelmingly on the rapidly evolving landscape of artificial intelligence, highlighting the critical importance of digital trust, ethics, and strong governance for successful innovation and leadership. Several authors note the transition from AI experimentation to scalable, industrialised execution, with a growing emphasis on agentic AI systems that autonomously orchestrate complex business processes. A key theme across regions, particularly Europe, is the debate surrounding regulation, such as the EU AI Act, with concerns that proposed changes might hinder competitiveness and increase legal uncertainty, even as adoption accelerates across sectors like finance and manufacturing. Finally, while reports indicate a high return on investment for early adopters, experts caution about significant challenges, including high failure rates, security vulnerabilities, poor data quality, and the necessity to move beyond mere AI consumption to focus on measurable business value and human-centric outcomes.
This podcast was created via Google NotebookLM.
Show transcript
00:00:00: This deep dive is provided by Thomas Allgeier and Frennus, based on the most relevant LinkedIn posts about artificial intelligence in calendar weeks forty-five and forty-six.
00:00:10: Frennus supports enterprises with market and competitive intelligence, decoding emerging technologies, customer insights, regulatory shifts, and competitor strategies, so product teams and strategy leaders don't just react, but shape the future of AI.
00:00:25: Welcome.
00:00:25: You know, looking at the conversations over the last couple of weeks, the signal was just crystal clear.
00:00:30: AI has, well, it's left the lab.
00:00:33: It really
00:00:34: has.
00:00:34: The whole discussion has shifted from can we build this stuff to something more like, OK, now how do we scale it?
00:00:40: How do we govern it?
00:00:41: And how do we actually make money from it?
00:00:43: And that transition from pure experimentation to full operational scale is creating some real tension.
00:00:48: Oh, absolutely.
00:00:49: It's fascinating.
00:00:50: And for some, it's a bit painful.
00:00:51: That tension is exactly what we're going to unpack for you.
00:00:54: The sources we've analyzed really point to three big currents right now.
00:00:58: First is this move toward self-learning, what people are calling agentic AI.
00:01:03: The
00:01:03: new frontier, really.
00:01:04: Yeah,
00:01:04: it is.
00:01:05: Then there's the second piece, which is this dawning realization that governance and trust are completely non-negotiable.
00:01:12: You can't bolt them on later.
00:01:13: Nope.
00:01:14: And finally, there's this ongoing, very real struggle for organizations to prove they're getting a measurable production level ROI.
00:01:22: Right.
00:01:22: Let's start there.
00:01:23: Let's start with the ROI challenge, because the data is interesting.
00:01:26: It suggests AI is already paying off, but maybe not how people thought it would.
00:01:30: Oh, so?
00:01:31: Well, success isn't just about the tech anymore.
00:01:33: It really hinges on disciplined execution and, frankly, business credibility.
00:01:39: We saw some incredibly positive numbers, especially coming out of the US.
00:01:42: I think it was Kristoff Filschette, who noted that seventy-five percent of enterprises there are already seeing a positive ROI from Gen AI.
00:01:51: That's a huge number.
00:01:53: It is.
00:01:53: And what's really interesting is that smaller, more agile companies are often seeing even higher returns.
00:01:59: They just don't have that legacy weight holding them back.
00:02:01: And that adoption is definitely not just a U.S.
00:02:04: story.
00:02:05: Carrington Mellon shared data that was pretty eye-opening.
00:02:09: Firms in the Dubai International Financial Centre, the DIFC, saw their AI usage surge to fifty-two percent.
00:02:16: And Gen AI specifically.
00:02:17: Tripled,
00:02:18: year on year.
00:02:19: So the money's flowing, the tools are being adopted everywhere.
00:02:22: Okay, so if seventy-five percent of companies are seeing a positive ROI, why does it still feel like we're all struggling with this?
00:02:30: This is where that execution gap you mentioned really comes into focus.
00:02:33: It's
00:02:33: the core of the problem.
00:02:34: Jaycee Van Oest had a great observation on this:
00:02:37: that while most companies are using AI tools now.
00:02:39: Right, they've bought the licenses, they've turned them on.
00:02:41: Exactly, but very few have taken that next crucial step of actually redesigning their core business workflows around them.
00:02:48: They're just bolting this shiny new AI layer on top of, you know, decades of messy broken processes.
00:02:56: And that just limits the value you can get.
00:02:58: It's like putting a jet engine on a horse-drawn cart.
00:03:00: Yeah, you can't transform a system that's fundamentally flawed just by making it go faster.
00:03:05: Jeff Wincher really drove this point home.
00:03:08: He said, execution, not the technology, is what separates the winners and
00:03:12: losers.
00:03:13: And the failure rate is still high.
00:03:14: It is.
00:03:15: A staggering thirty-nine percent of AI projects are still missing their targets.
00:03:20: And it's not the AI's fault.
00:03:21: It's bad data and weak internal processes.
00:03:24: The biggest barrier is just inertia.
00:03:27: And this gap, it leads to some really weird ways of measuring success, doesn't it?
00:03:31: Oh, definitely.
00:03:32: David Lenthicum had this fantastic critique of organizations that celebrate AI consumption.
00:03:38: I saw that, like giving out plaques for using more processing power.
00:03:41: Yes.
00:03:42: Instead of focusing on actual pragmatic outcomes that, you know, move the business forward, it's confusing activity with achievement.
00:03:49: And that focus on raw efficiency is now completely changing what clients expect.
00:03:52: This is a huge shift.
00:03:53: I think so too.
00:03:54: Joan Domingo's called it the AI discount.
00:03:56: The
00:03:56: AI discount, I like that.
00:03:57: Clients are actively going to their service providers and demanding lower prices.
00:04:02: Their argument is, look, if AI is making you faster and more efficient, I want those savings passed on to me.
00:04:08: Wow.
00:04:09: Just think about what that does to a traditional business model.
00:04:12: Efficiency used to be a premium you could charge for.
00:04:14: And now it's just table stakes.
00:04:16: It's expected.
00:04:16: Right.
00:04:17: The value is no longer how fast you do something.
00:04:19: The real value is the unique strategic outcome you deliver that the client couldn't get on their own.
00:04:25: The whole game is changing.
00:04:27: And
00:04:27: this demand for unique outcomes is, well, It's the perfect.
00:04:30: that way into the next big evolution we're seeing.
00:04:33: The move to agentic AI.
00:04:35: We're going way beyond simple task automation now.
00:04:38: We
00:04:38: really are.
00:04:38: We're talking about intelligent self-directing systems.
00:04:41: Shrini Kasturi put it perfectly.
00:04:43: He said the real momentum is in building agentic, self-learning systems that orchestrate entire business ecosystems.
00:04:50: So it's not just an AI responding to my prompt anymore?
00:04:53: No, not at all.
00:04:54: We're talking about an AI that can set its own goals, break them down into smaller tasks, collaborate with other AIs, and then execute a whole multi-step plan autonomously.
00:05:04: That's a huge leap.
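As a concrete illustration of that loop (an agent taking a goal, breaking it into sub-tasks, and executing them in sequence), here is a minimal, library-agnostic Python sketch. The plan_steps and execute_step functions are hypothetical placeholders for the LLM and tool calls a real agent framework would make; it shows the pattern, not any specific product mentioned in this episode.

```python
# Minimal sketch of an agentic loop: goal -> plan -> execute -> collect results.
# plan_steps() and execute_step() are hypothetical stand-ins for LLM/tool calls.

def plan_steps(goal: str) -> list[str]:
    # In a real system an LLM would decompose the goal; here we hard-code an example.
    return [
        f"Gather the data needed for: {goal}",
        f"Run checks and draft a decision for: {goal}",
        f"Summarise the outcome of: {goal}",
    ]

def execute_step(step: str, context: list[str]) -> str:
    # A real agent would call tools or other agents here (data checks, e-mails, APIs).
    return f"Completed: {step} (using {len(context)} prior results)"

def run_agent(goal: str) -> list[str]:
    results: list[str] = []
    for step in plan_steps(goal):                     # the agent sets its own sub-tasks
        results.append(execute_step(step, results))   # and executes them in sequence
    return results

if __name__ == "__main__":
    for line in run_agent("approve mortgage application #123"):
        print(line)
```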
00:05:06: And Nandan Malakar made a really bold prediction on this, saying that these autonomous agent networks could replace most traditional apps within a decade.
00:05:15: I
00:05:15: mean, that's a fundamental rewrite of enterprise software, if he's right.
00:05:18: So what's the practical difference for people trying to understand this between, say, ChatGPT and an AI agent?
00:05:26: Dr.
00:05:26: Ralph Miller clarified this really well.
00:05:28: He said, standard gen AI is fantastic for reactive single step tasks.
00:05:33: You ask it to summarize a document, it does.
00:05:35: But agents are for complex, multi-step, goal-oriented processes.
00:05:40: His example was ING, using agents to speed up mortgage approvals.
00:05:45: That's not one task.
00:05:46: It's dozens of data checks, communications, decisions.
00:05:49: All
00:05:49: orchestrated by the agent.
00:05:50: Exactly.
00:05:51: So if you're a business trying to build this, you need the right tools.
00:05:54: I saw Tariq Munir laid out some of the key frameworks people are using.
00:05:57: Yeah, the
00:05:57: ecosystem is growing fast.
00:05:58: You've got things like AutoGen for when you need multiple AIs to collaborate and talk to each other.
00:06:03: And CrewAI.
00:06:04: That's more for role-based teamwork, like creating a virtual project team with an analyst, a writer, a reviewer, and so
00:06:10: on.
00:06:10: And LlamaIndex for the heavy... data lifting?
00:06:13: For when you're dealing with just massive knowledge bases, yes.
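To make the role-based teamwork idea concrete, here is a minimal, framework-agnostic Python sketch of the pattern tools like CrewAI package up: a sequence of role agents (analyst, writer, reviewer) that each hand their output to the next. The RoleAgent class and its respond method are hypothetical placeholders, not the actual API of any of the frameworks mentioned above.

```python
# Library-agnostic sketch of role-based agent teamwork: each "agent" has a role
# and passes its output to the next agent in the crew.
# respond() is a placeholder where a real framework would prompt an LLM.

from dataclasses import dataclass

@dataclass
class RoleAgent:
    role: str

    def respond(self, task: str, prior: str = "") -> str:
        # Placeholder: a real agent would combine its role, the task,
        # and the previous agent's output into an LLM prompt.
        return f"[{self.role}] handled '{task}', building on: {prior or 'nothing yet'}"

def run_crew(task: str) -> str:
    crew = [RoleAgent("analyst"), RoleAgent("writer"), RoleAgent("reviewer")]
    output = ""
    for agent in crew:        # sequential hand-off between the role agents
        output = agent.respond(task, output)
    return output

if __name__ == "__main__":
    print(run_crew("summarise Q3 churn drivers"))
```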
00:06:16: But all this autonomy, it brings a whole new category of risk.
00:06:21: It has to.
00:06:21: Reid Blackvin raised a huge red flag here.
00:06:24: He called it the threat of agentic monopolies.
00:06:27: What's the scenario there?
00:06:29: Well, imagine your personal assistant
00:06:31: AI is owned by, say, Amazon.
00:06:34: He warns that agent will inevitably start making decisions that optimize for Amazon's financial interests.
00:06:41: not necessarily yours.
00:06:43: So it might recommend an Amazon product even if a better, cheaper one exists elsewhere.
00:06:47: Precisely.
00:06:48: And that raises a massive governance question.
00:06:50: If that agent acts on its own and makes a biased decision that hurts you or a competitor, who's liable? Is it you for giving the prompt, or the company that designed the algorithm?
00:06:59: And that question leads us straight into our next theme, doesn't
00:07:02: it?
00:07:02: It's the only place it can lead.
00:07:03: Governance, trust, and regulation.
00:07:06: These aren't just nice to haves.
00:07:08: They are the absolute foundation.
00:07:10: Lawrence Etup put it so well.
00:07:11: He said digital trust is the ultimate currency of progress.
00:07:15: And you only get that if governance and ethics are designed in from the very beginning.
00:07:20: And companies are finally getting the message, but maybe not for the reasons we'd hope.
00:07:24: Paul Mitchell's take was that most organizations are adopting AI governance because of forcing functions.
00:07:31: Not because of some deep internal belief in ethics, but because the market, the regulators, and the liability risks are making noncompliance just impossible.
00:07:42: It's a pragmatic choice, not a moral
00:07:44: one.
00:07:44: And that means you can't afford to wait.
00:07:46: I saw a very strong warning from Patricia Bertini on this.
00:07:49: She said responsibility cannot be retrofitted.
00:07:51: That's such a key point.
00:07:52: If your AI system isn't designed responsibly from day one, with all the right documentation and human oversight built in, it's already non-compliant.
00:08:00: The risk is baked right into its code.
00:08:03: So looking at the regulatory side, what's happening there?
00:08:05: Well, in the EU, Oliver Patel mentioned the upcoming digital omnibus package, which is supposed to simplify things.
00:08:12: But Louisa Jurowski flagged some serious concerns about a leaked draft.
00:08:16: Uh-oh,
00:08:17: what's the issue?
00:08:18: Apparently they've removed the direct obligation for companies to ensure AI literacy among their staff.
00:08:24: And why is that so dangerous?
00:08:26: Because if businesses aren't explicitly required to train their people on AI's limits and compliance rules,
00:08:32: it
00:08:32: creates massive legal uncertainty.
00:08:35: How can you ensure compliance if the people using the tools don't understand the
00:08:39: rules?
00:08:39: And the regulators themselves are under the microscope.
00:08:42: Remy Taken reported on a complaint filed against the European Commission.
00:08:45: For using ChatGPT in public documents, right?
00:08:48: Right.
00:08:49: Without telling anyone.
00:08:50: Yes.
00:08:51: It just shows that public bodies have to be transparent about this stuff if they want to maintain any kind of trust.
00:08:56: We're also seeing real accountability being forged in the courts.
00:09:00: Thomas Hupner reported on a big win for GEMA, the music rights group, against OpenAI in Germany.
00:09:06: On copyright grounds.
00:09:07: Exactly.
00:09:08: The court in Munich found OpenAI had infringed copyright by essentially memorizing song lyrics from its training data and then reproducing parts of them in chatbot responses.
00:09:17: That's such a huge precedent.
00:09:19: It reinforces that the developers are accountable for what their models output.
00:09:24: Absolutely.
00:09:25: So let's shift gears a bit and talk about the, let's call it the execution reality, the risks that come with putting this stuff into production at scale.
00:09:34: Starting with the big one, cybersecurity.
00:09:36: For sure.
00:09:37: Patrick Federer detailed the really dark side of this.
00:09:40: His data showed that one in every fifty-four Gen AI prompts from inside a corporate network carries a high risk of data exposure.
00:09:48: One in fifty-four.
00:09:49: That is, that's a number that should keep every CISO awake at night.
00:09:53: And he warned.
00:09:54: we're entering an era of autonomous cybercrime.
00:09:56: We're talking about AI-driven attacks, malware that adapts on its own, and even synthetic insiders.
00:10:02: AI agents designed to act like employees to steal data from the inside.
00:10:06: It's terrifying stuff.
00:10:07: And the tools we're using to build things are themselves flawed.
00:10:10: John Bruton was citing Forrester data.
00:10:12: The forty-five percent failure rate again.
00:10:14: Yes, a systemic forty-five percent failure rate in the models and, crucially, that forty-five percent of AI-generated code contains exploitable security flaws.
00:10:23: So you absolutely cannot just trust the code the AI writes, especially not for your critical systems.
00:10:28: Which brings us to that big debate about jobs.
00:10:31: This idea that AI will just replace all the software developers.
00:10:34: The whole vibe coding is the future narrative.
00:10:37: Yeah.
00:10:38: Andreas Horn had what he called an unpopular opinion that just pushed back so hard against that.
00:10:45: And he's right, too.
00:10:45: He argues that anyone who says that just doesn't understand the reality of enterprise IT.
00:10:50: We're not talking about a clean slate.
00:10:52: We're talking about, you know, a thousand plus entangled legacy
00:10:56: apps.
00:10:56: Decades of tech debt.
00:10:58: Cascading failures.
00:10:59: You can't fix that with a simple prompt.
00:11:01: No.
00:11:02: AI can write a function, but it can't manage that level of complexity and context.
00:11:07: Professor Daniel Russo confirmed it.
00:11:09: The job isn't vanishing, it's just shifting.
00:11:11: From how to type it,
00:11:12: to the really hard judgment-based questions of what to build, why it works, and how to integrate it responsibly.
00:11:19: Developers are becoming systems orchestrators.
00:11:21: So finally, let's zoom out.
00:11:23: What does this all mean for Europe's competitive position in the world?
00:11:27: The sources here actually suggest a really interesting strategy is forming.
00:11:31: It seems like it.
00:11:32: Emanuel de Castaño argued that Europe's AI ecosystem is already incredibly talent-rich.
00:11:39: It has the people. The challenge has been industrializing it.
00:11:42: And now the infrastructure for that is being built.
00:11:45: Paul Storm detailed that huge one billion euro partnership between Deutsche Telekom and NVIDIA.
00:11:51: For the industrial AI cloud in Germany, right?
00:11:53: Exactly.
00:11:54: The goal is sovereign AI infrastructure, specifically to help Germany's manufacturing backbone, the Mittelstand, automate and compete.
00:12:02: And that strategic infrastructure investment lines up perfectly with Europe's focus on policy.
00:12:07: Roger Crawford made the point that trustworthy AI isn't just a compliance headache.
00:12:12: It's a strategic imperative.
00:12:13: He used Ireland as an example, didn't he?
00:12:15: He did.
00:12:16: Ireland leads Europe in the AI impact index, which shows a direct link between having responsible AI practices and getting positive business outcomes.
00:12:24: And Frank Coomley pointed to Catalonia's AI Strategy twenty thirty as another example.
00:12:29: It's a huge billion euro investment focused specifically on responsible and sovereign AI.
00:12:36: They're betting on trust as their key differentiator.
00:12:40: Now, we do have to add a little note of financial caution here.
00:12:43: Ben Torben Nielsen suggested we might be in a bit of an AI bubble financially speaking.
00:12:47: Citing people like Michael Burry and Peter Thiel selling off their NVIDIA stock.
00:12:52: Right, but he made a really important distinction.
00:12:54: The stock market might be volatile, but the underlying technological momentum is real and it's transformative.
00:13:00: So the takeaway for you, the listener, has to be... Don't let short-term market jitters derail your long-term essential transformation strategy.
00:13:09: That's
00:13:09: the perfect summary.
00:13:10: To wrap it all up, AI has, as Grace Gao noted, moved from being a future concept to being present-day infrastructure.
00:13:17: It's here.
00:13:17: But getting the full value from it, that doesn't hinge on the raw technical brilliance anymore.
00:13:22: It hinges on discipline, responsible execution, and on transparent governance.
00:13:26: And if you can get that right, the reward is immense.
00:13:30: Doug Shannon suggests AI is enabling something like precognition, the ability to see and predict real-world events before they actually happen.
00:13:37: That's an incredible advantage.
00:13:39: It is.
00:13:40: But that power has to be guided by human leadership.
00:13:43: And as Faisal Hulk reminded us, true leadership in this age of speed sometimes means having the courage to slow down for deep thinking, to protect trust, and to prioritize long-term governance over short-term wins.
00:13:56: If you enjoyed this deep dive, new episodes drop every two weeks.
00:13:59: Also check out our other editions on ICT and tech, digital products and services, cloud sustainability in green ICT, defense tech and health
00:14:08: tech.
00:14:08: Thank you for joining us and be sure to subscribe so you don't miss our next deep dive.