Best of LinkedIn: Artificial Intelligence CW 37/38
Show notes
We curate the most relevant posts about Artificial Intelligence on LinkedIn and regularly share key takeaways.
This edition provides a comprehensive overview of the accelerating Agentic AI revolution and its transformative impact across various sectors, focusing heavily on business implementation. A central theme is the distinction between Generative AI/LLMs (Large Language Models), which primarily generate content, and AI Agents/Agentic AI, which are autonomous, goal-driven systems capable of complex planning and action within enterprise workflows, eliminating significant manual work, such as post-meeting tasks. While the sources highlight immense potential for productivity gains and efficiency, they simultaneously address critical challenges, including the high failure rate of GenAI pilots, the urgent need for ethical governance and compliance (e.g., the EU AI Act), and the necessary cultural and leadership shifts required to successfully integrate AI into human-centric operations. Finally, many experts predict that AI's evolution from simple tools to autonomous agents will fundamentally redefine jobs, with some sources warning of economic disruption and job replacement, while others advocate for augmentation and upskilling.
This podcast was created via Google NotebookLM.
Show transcript
00:00:00: This episode is provided by Thomas Allgaier and Frennus, based on the most relevant LinkedIn posts about artificial intelligence in calendar weeks, thirty-seven and thirty-eight.
00:00:09: Frennus supports enterprises with market and competitive intelligence, decoding emerging technologies, customer insights, regulatory shifts, and competitor strategies, so product teams and strategy leaders don't just react, but shape the future of AI.
00:00:24: Welcome to this deep dive.
00:00:25: You know, if we look across the executive conversations over the last couple of weeks, the chatter has really shifted quite a bit.
00:00:31: Oh yeah, how so?
00:00:32: Well,
00:00:32: we seem to be moving beyond just like... standalone LLM experiments.
00:00:37: People are focusing much more intensely now on execution, scalable operating models, and crucially robust governance.
00:00:44: Right.
00:00:45: So our mission today is really to distill those complex conversations for you, the ICT and tech professional listening.
00:00:50: We want to cut through the noise, give you the most impactful AI insights and trends we saw across LinkedIn during those two weeks, thirty-seven and thirty-eight.
00:00:57: Sounds good.
00:00:58: Let's maybe start with what feels like a really revolutionary shift in what AI can actually do.
00:01:03: Okay, let's unpack this first one.
00:01:05: The biggest operational shift we observed, I think, is this dramatic ascent of agentic AI.
00:01:12: We've finally moved past those simple chatbots that just, you know, write text to AI agents that actually work.
00:01:19: They execute multi-step processes kind of autonomously.
00:01:23: This feels like where the real ROI for productivity starts.
00:01:26: That is the foundational change, isn't it?
00:01:28: I mean, as Alquac K-Pen highlighted, agents don't just wait for the next prompt.
00:01:33: They plan the sequence of steps.
00:01:35: They act inside the actual business tools you already use.
00:01:39: And importantly, they self-check the outcome against the original goal.
00:01:42: So they move you from just a prompt to hopefully a guaranteed result, often without needing a human step in between.
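For readers following the transcript, the plan, act, self-check loop described here can be sketched in a few lines. This is an illustrative sketch only: `plan`, `act`, and `check` are hypothetical callables standing in for LLM calls and business-tool integrations, not any vendor's actual API.

```python
# Illustrative sketch of the plan -> act -> self-check loop described above.
# `plan`, `act`, and `check` are hypothetical callables standing in for LLM
# calls and business-tool integrations; this is not any vendor's actual API.

def run_agent(goal, plan, act, check, max_rounds=3):
    """Plan the steps, act inside existing tools, self-check against the goal.

    plan(goal)           -> list of step descriptions (an LLM call in practice)
    act(step)            -> result of executing one step in a business tool
    check(goal, results) -> True if the outcome satisfies the original goal
    """
    for _ in range(max_rounds):
        steps = plan(goal)                 # 1. plan the sequence of steps
        results = [act(s) for s in steps]  # 2. act inside the business tools
        if check(goal, results):           # 3. self-check outcome vs. goal
            return results                 # goal met, no human step needed
    return None                            # hand off to a human reviewer
```

The "set one review rule" advice later in the episode maps naturally onto `check` and the `None` hand-off: the agent only skips the human when its own verification passes.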
00:01:48: So we're talking about an architectural difference here, it sounds like, not just a simple feature update.
00:01:52: Precisely.
00:01:53: Brij Kishore Pandey actually clarified the stack needed for this.
00:01:57: You know, LLMs are just the language engine.
00:01:59: Think of it as the brain for translation and generation.
00:02:02: But agentic AI, that's the autonomous decision maker.
00:02:05: It's coordinating complex goals across different systems.
00:02:08: And we're already seeing this translate into some massive efficiency gains, right?
00:02:13: Shubham Saboo shared a really compelling use case.
00:02:16: Gamma's AI agent automates entire meeting workflows.
00:02:19: The whole thing.
00:02:20: Yeah, it creates polished presentations.
00:02:22: And follow-up emails, all in seconds. Saboo reported going from like two hours of post-meeting admin work down to literally zero.
00:02:33: That's not just marginal improvement.
00:02:34: That's basically eliminating overhead
00:02:36: And this kind of radical operational streamlining seems pretty scalable, too.
00:02:41: Elliott Garefa detailed their experience implementing just four specific AI agents.
00:02:46: Yeah, what were they?
00:02:47: A backlog agent, a retro agent, a development agent, and a marketing analytics agent.
00:02:52: And the result, they eliminated, get this, eighty percent of their team meetings.
00:02:56: Eighty percent.
00:02:57: Eighty percent.
00:02:58: And saw a reported three-fold boost in overall performance.
00:03:01: It really sounds like the promise of full automation is maybe finally here, but what's the catch for deployment?
00:03:07: There's always a catch, right?
00:03:08: Huh, usually.
00:03:10: Well, Eduardo Ordex stressed that shipping successful agents means you have to focus on the workflows, and importantly, designing these human plus agent collaboration loops.
00:03:20: Not just building fancy features. If it doesn't slot into how people actually work,
00:03:25: it's going to fail.
00:03:26: Right, integration.
00:03:26: Integration is everything.
00:03:28: Albionic and K-Pen reinforce that with some pragmatic advice for any team starting out.
00:03:33: Don't try to boil the ocean, pick one job, wire up one data source, add one action, set one review rule, get that first win.
00:03:41: That keeps the trust loop manageable.
00:03:43: I think we need to pause on something Orin Greenberg pointed out because it feels vital for our audience here.
00:03:47: He made a distinction.
00:03:49: Agentic AI gets an objective, but the operational steps, those aren't explicitly defined.
00:03:55: How does that change things like MLOps compared to traditional deterministic automation?
00:04:00: Well, it creates a fundamentally different deployment challenge, doesn't it?
00:04:04: Deterministic automation is linear.
00:04:06: You can audit it step by step.
00:04:08: Pretty much.
00:04:08: Agentic AI needs far more sophisticated monitoring and evaluation because the agent itself figures out the path.
00:04:14: It might use a tool in a way you didn't anticipate to hit that objective.
00:04:18: And that's exactly why human review and building that trust needs to be engineered into the loop right from day one.
00:04:24: Okay, let's pivot a bit.
00:04:26: Because all this talk about successful agents, while it runs headlong into this critical operational problem people mentioned repeatedly on LinkedIn, the pilot graveyard.
00:04:35: Yes, the pilot graveyard.
00:04:36: Armand Ruiz.
00:04:37: asked directly, you know, do companies have a real AI strategy, or are they just running AI theater?
00:04:44: Especially if use cases aren't actually live in production, tied to the P&L, or tracking measurable adoption.
00:04:50: And this challenge, it's starkly supported by the numbers.
00:04:53: Cassie Kozyrkov and Keith Ferrazzi both pointed to that MIT report.
00:04:57: It showed that a staggering ninety-five percent of GenAI initiatives,
00:05:01: ninety-five percent,
00:05:03: ninety-five percent are failing to deliver measurable business value.
00:05:07: The tech works often, but the leadership and the strategy, maybe not so much.
00:05:12: Wow, that ninety-five percent failure rate is huge.
00:05:15: So where's the money going then?
00:05:17: And why isn't it working?
00:05:18: Well, the takeaways from that same MIT report shared by Keith Ferrazzi show something interesting.
00:05:24: The highest ROI actually comes from back office automation.
00:05:27: OK.
00:05:28: Yet more than half of AI budgets are still being directed towards sales and marketing.
00:05:34: That looks like a pretty massive strategic disconnect.
00:05:36: Definitely sounds like it.
00:05:37: And for our ICT audience, there's another critical finding in there.
00:05:42: Purchased off-the-shelf solutions succeed twice as often as internal builds.
00:05:47: Really?
00:05:47: Twice as often.
00:05:48: Yeah.
00:05:49: Which raises an important question, right?
00:05:51: Are internal teams maybe struggling with scale or integration, or are they just chasing the novel tech without a clear business need driving it?
00:05:59: Tariq Munir echoed this sentiment beautifully, I thought.
00:06:02: He said, AI shouldn't be the project.
00:06:06: The project should be the business problem you're trying to solve.
00:06:08: Exactly.
00:06:09: You need that digital mindset first.
00:06:11: to use the right technology for the task, not just the newest shiny object.
00:06:16: Now, the strategy deficit, it leads directly into this massive tension we're seeing around the future of work.
00:06:23: Simon Taylor stated it unequivocally.
00:06:26: He said, due to the speed and cost efficiency of these systems, outsourcing, as we know it, is dead.
00:06:32: Dead.
00:06:32: That's a strong statement.
00:06:33: It is.
00:06:34: He cited examples like AI customer service beating human scores and AI coders outperforming offshore developers at like one-tenth the cost.
00:06:42: That's already disrupting global sourcing models right now.
00:06:45: Okay, but if that's true, if agentic AI can eliminate eighty percent of meetings, like you said, and code ten times cheaper, how do we square that with the extreme caution voiced by other leaders?
00:06:57: Stephen Klein warned that while GenAI can replace people, it often does it.
00:07:02: Badly.
00:07:02: Right.
00:07:02: With error rates stacking up, potentially reaching thirty percent to nearly eighty percent over time, which erodes trust.
00:07:08: If the error rate is that high, how can we trust that three X performance boost we talked about earlier?
00:07:12: Yeah, that's the core challenge for implementation, isn't it?
00:07:15: Leaders like Sammy Rahall and Christina Guimito from Deloitte, they're emphasizing augmentation, not replacement.
00:07:21: Augmentation.
00:07:22: That high error rate suggests that for really mission-critical tasks, the agent works best as a kind of turbocharged co-pilot.
00:07:30: The human provides that vital final, say, twenty percent of quality assurance and handles the exceptions the AI just can't figure out.
00:07:38: Quality and trust, at the end of the day, have to trump pure cost savings.
00:07:42: Makes sense.
00:07:43: So shifting now to the underlying technical infrastructure that enables this reliable augmentation, Greg Coquillo put it pretty plainly.
00:07:51: The era of standalone LLMs is over.
00:07:55: You just can't achieve enterprise quality with a single isolated model anymore.
00:07:59: The focus now has to be on building business-ready AI systems that integrate memory, orchestration, and multimodal reasoning right into the core operations.
00:08:08: And the key mechanism for ensuring that reliability, that grounding, is RAG, right?
00:08:12: Retrieval Augmented Generation.
00:08:13: Absolutely.
00:08:14: Karen Kumar BR noted that this is changing information discovery entirely.
00:08:18: But why is RAG so foundational for production deployment?
00:08:21: Is it just about stopping hallucinations?
00:08:23: Well, that's part of it, but it's more fundamental.
00:08:25: It's the difference between the AI guessing and actually knowing.
00:08:29: RAG ensures grounding by forcing the LLM to retrieve information from trusted internal data sources before it generates an answer.
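The retrieve-then-generate pattern described here can be sketched minimally. This is an illustrative sketch, not a production system: `embed` and `generate` are hypothetical callables (an embedding model and an LLM), and a real deployment would use a vector database rather than sorting a list.

```python
import math

# Illustrative sketch of retrieve-then-generate grounding. `embed` and
# `generate` are hypothetical callables (an embedding model and an LLM);
# real systems use a vector database rather than a sorted list.

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rag_answer(question, documents, embed, generate, top_k=3):
    """Retrieve the most relevant trusted documents, then generate from them."""
    q_vec = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q_vec, embed(d)), reverse=True)
    context = "\n".join(ranked[:top_k])
    # The model is instructed to answer from retrieved context, not from memory.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)
```

The grounding comes from the prompt construction: the model sees only vetted internal context, which is the difference between guessing and knowing that the episode describes.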
00:08:37: But the infrastructure is getting more complex.
00:08:39: Bezia Kubica summarized the Google AI agent playbook, which stresses this idea of a RAG progression.
00:08:46: Okay, walk us through that progression quickly, maybe for our tech audience.
00:08:50: RAG to GraphRAG to agentic RAG.
00:08:53: What's the technical lift involved there?
00:08:55: Sure.
00:08:55: So standard RAG is basically straightforward retrieval.
00:08:58: Document goes in, relevant bit comes out.
00:09:01: Moving to GraphRAG, though, that's the crucial step for better fact-checking and deeper reasoning.
00:09:05: How so?
00:09:06: It means you're not just retrieving documents, you're leveraging graph databases to understand the relationships between concepts and entities within your data.
00:09:14: Ah,
00:09:14: the connections.
00:09:15: Exactly.
00:09:16: This is essential for enterprise production, where accuracy demands inferring context, not just summarizing some text.
00:09:23: And then, agentic RAG layers that autonomous decision-making we talked about on top of that robust relationship-aware retrieval mechanism.
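The GraphRAG step discussed here can be illustrated with a toy graph walk. Everything below is invented purely for illustration: the entities, relations, and the in-memory dict stand in for a real graph database.

```python
# Toy illustration of the GraphRAG step: retrieval walks typed relationships
# between entities instead of returning isolated documents. The entities and
# relations below are invented purely for illustration.

graph = {
    "Acme GmbH": [("subsidiary_of", "Acme Corp"), ("supplier", "Bolt AG")],
    "Bolt AG": [("audited_by", "TrustCert")],
}

def related(entity, depth=2):
    """Collect entities reachable within `depth` hops, with the relation path."""
    found, frontier = [], [(entity, [])]
    for _ in range(depth):
        next_frontier = []
        for node, path in frontier:
            for relation, neighbor in graph.get(node, []):
                found.append((neighbor, path + [relation]))
                next_frontier.append((neighbor, path + [relation]))
        frontier = next_frontier
    return found
```

A plain retriever would return the "Acme GmbH" record alone; the graph walk also surfaces that its supplier is audited by a third party, which is the kind of inferred context, rather than summarized text, that the GraphRAG step adds.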
00:09:31: Got it.
00:09:31: So, when you scale RAG up to this level, governance and compliance immediately become central, don't they?
00:09:37: Oh, absolutely.
00:09:38: Front and center.
00:09:39: Remy Takang pointed out that the EU AI Act is forcing infrastructure providers, like NVIDIA, for example, to fundamentally evolve.
00:09:47: They aren't just tech vendors anymore, they have to be compliance enablers.
00:09:52: Meaning embedding features like guardrails and conformity testing directly into their infrastructure platforms.
00:09:57: That has to be built in.
00:09:58: But is the regulation itself actually consistent enough to build for?
00:10:02: Well, that's a huge question.
00:10:04: Alexander Tilkenau's analysis showed some critical inconsistencies across the EU AI Act's twenty-four language versions, specifically in the definition of deepfake.
00:10:14: That's fascinating.
00:10:15: What's the practical difference there?
00:10:16: between the definitions.
00:10:17: So six of the languages, like German for instance, use a term meaning real resemblance.
00:10:23: But sixteen others use existing resemblance.
00:10:26: Okay,
00:10:27: subtle difference.
00:10:28: It sounds subtle, but for a global company, this creates two distinct compliance worlds.
00:10:34: Because real implies a potentially different, maybe higher threshold of authenticity than just matching something existing.
00:10:42: It basically forces multinational teams to adhere to the strictest interpretation everywhere.
00:10:48: Tricky.
00:10:49: And on the domestic front, the urgency is palpable too.
00:10:52: Dr.
00:10:52: Jothan Malinowski emphasized that Germany, for example, really needs to appoint its national supervisory authority promptly.
00:10:59: Businesses need that regulatory certainty now to avoid the kind of confusion we saw during the rollout of the Digital Services Act.
00:11:05: Yeah, clarity is key.
00:11:07: Businesses need to know who's enforcing what and how now.
00:11:10: And beyond that external regulation, there's the internal piece, right?
00:11:13: Operationalizing ethics.
00:11:15: Riley Coleman called out AI ethics theater.
00:11:17: Yeah, I saw that.
00:11:18: Just
00:11:18: ticking the compliance boxes without implementing real functional systems.
00:11:22: Exactly.
00:11:23: And to move past that theater, he suggested concrete, measurable practices, things like, are you checking if users actually understand when they're interacting with an AI?
00:11:33: Are you actively detecting bias in the outputs?
00:11:36: Are you providing genuine, easy-to-find opt-out options?
00:11:40: Right, making it practical.
00:11:41: Mom Call reinforced this too, arguing ethics has to be a leadership compass that guides innovation, not just some compliance checkbox you tick off.
00:11:49: If you want to build lasting trust and legitimacy with your customers, that is.
00:12:07: Through software optimization alone, they are delivering ninety percent more tokens for the same GPU compared to just one year ago.
00:12:14: That's almost double the efficiency.
00:12:16: A one point nine X gain.
00:12:18: that influences everything, doesn't it?
00:12:19: Especially the trajectory of chip demand.
00:12:21: But even with that efficiency, the overall demand for capacity is still massive.
00:12:25: The build out continues.
00:12:27: Steve Kovach reported on Microsoft's ongoing expansion, including their new, what was it, three billion dollar AI data center in Wisconsin.
00:12:36: Three billion, wow.
00:12:37: And nationally, we're seeing countries make these huge commitments to specialization as well.
00:12:43: Kyle Mann noted that Germany is building Europe's largest industrial AI cloud.
00:12:47: Industrial AI specifically.
00:12:49: Yeah,
00:12:49: packed with ten thousand NVIDIA GPUs.
00:12:52: It shows a massive commitment to industrial applications with heavyweights like BMW and Mercedes-Benz already involved.
00:12:58: So you see specialization happening alongside the hyperscaler expansion.
00:13:01: Interesting.
00:13:02: And finally, we saw a significant movement in the ecosystem itself.
00:13:05: Robert Perle and Gaurav Mandirata both highlighted that NVIDIA invested five billion dollars in Intel.
00:13:11: NVIDIA investing five billion dollars.
00:13:14: Yeah, a big move that signals some really interesting shifts in key hardware partnerships and perhaps a strategic hedging of future supply chains as this demand just continues to explode.
00:13:23: Okay, so this deep dive has clearly shown us, I think, that AI is moving incredibly quickly.
00:13:28: It's past the speculative testing phase and is really starting to redefine core enterprise operations driven heavily by this more mature, agentic AI.
00:13:37: Right.
00:13:37: but also subject to intense scrutiny now on strategy and critically governance.
00:13:42: Yeah, and the real challenge, I think, for you, the listener, is reconciling two powerful and maybe seemingly opposing forces here.
00:13:51: On one hand, you've got this adoption curve that's steeper than anything we've ever seen.
00:13:55: Like the ChatGPT numbers.
00:13:56: Exactly.
00:13:57: OpenAI's report, which William highlighted, revealed ChatGPT hit seven hundred million weekly active users by July twenty twenty-five.
00:14:04: That's unprecedented universal adoption, basically.
00:14:07: But on the other hand, the systems can be fragile.
00:14:10: The rules are still inconsistent, as we discussed.
00:14:13: Leaders like Ma'am Collage stressed that if ethics doesn't proactively lead the development of AI, we risk building systems that just amplify existing bias and ultimately erode
00:14:23: the very trust you need for that widespread adoption to actually stick.
00:14:27: So the ultimate question really remains, doesn't it?
00:14:30: How fast can organizational leadership and international governance for that matter move to keep functional and ethical pace with this exponential growth in agent capability and this massive widespread user adoption that's happening simultaneously?
00:14:45: That is the question.
00:14:46: Well, if you enjoyed this deep dive, new editions drop every two weeks.
00:14:50: Also check out our other editions on ICT and tech, digital products and services, cloud, sustainability and green ICT, defense tech, and health tech.
00:14:59: Thank you for listening and remember to subscribe.