Best of LinkedIn: Artificial Intelligence CW 39/40
Show notes
We curate the most relevant posts about Artificial Intelligence on LinkedIn and regularly share key takeaways.
This edition provides a broad overview of the current state and future challenges of Artificial Intelligence (AI), with a strong emphasis on governance, strategy, and the rise of AI agents. Several entries focus on AI regulation and compliance, particularly the EU AI Act, stressing its impact on internal enterprise systems and the non-negotiable requirement for human oversight and accountability to prevent costly errors and bias. A significant portion of the text discusses the adoption and architecture of AI agents, debating whether they are game-changing autonomous partners or simply overhyped automation, with experts recommending specialized, collaborative agents over monolithic designs. Finally, many authors address AI strategy and business transformation, warning against chasing technology without clear business value, underscoring that cultural readiness, data quality, and joint business-IT ownership are essential for successful AI scaling.
This podcast was created via Google NotebookLM.
Show transcript
00:00:00: This episode is provided by Thomas Allgaier and Franus, based on the most relevant LinkedIn posts about artificial intelligence in calendar weeks thirty-nine and forty.
00:00:09: Franus supports enterprises with market and competitive intelligence, decoding emerging technologies, customer insights, regulatory shifts and competitor strategies.
00:00:19: So product teams and strategy leaders don't just react, but shape the future of AI.
00:00:25: Welcome back to the deep dive.
00:00:26: Today we're tackling a pretty critical mission actually.
00:00:29: We're distilling the absolute top artificial intelligence trends that really drove the discussion for ICT and tech leaders on LinkedIn these past couple of weeks.
00:00:38: That's calendar weeks thirty-nine and forty.
00:00:40: Yeah.
00:00:40: You know, for anyone trying to navigate this space, the pace isn't just fast anymore.
00:00:43: It feels like it's demanding totally new governance and fundamentally changing enterprise strategy.
00:00:49: It really is.
00:00:50: And the sources we looked at, they absolutely confirm we've shifted from, you know, academic talk to operational reality.
00:00:55: A forced maturity is happening.
00:00:57: Forced maturity.
00:00:58: I like that.
00:00:59: Yeah.
00:00:59: So we've boiled down the content into, I think, three critical areas for leaders.
00:01:04: First, there's this huge pressure from global governance and risk management.
00:01:09: We really need to unpack what the EU AI Act means for, well, pretty much everyone.
00:01:14: Second, the intense hype versus the architectural reality of these agentic systems.
00:01:19: Lots to talk about there.
00:01:21: And finally, how company culture and strategy basically decide who lands in that AI winners club and who doesn't.
00:01:27: OK, let's kick off with the governance then, because that regulatory stuff, it's not just coming, it's here, isn't it?
00:01:33: It is.
00:01:34: When the EU AI Act first came out, I think a lot of people, maybe outside of the big model builders, figured it wasn't really their problem.
00:01:41: That's exactly the assumption the sources show is just dangerously wrong.
00:01:45: Benjamin King was really hammering this point.
00:01:46: You have to understand their definition of a deployer.
00:01:48: Okay, so we're deployers.
00:01:49: Right.
00:01:50: The EU AI Act directly targets any enterprise using AI internally for critical functions.
00:01:55: So think HR tools, supply chain optimization, even managing utilities.
00:02:00: So everyday stuff, almost.
00:02:01: Exactly.
00:02:02: If a common tool, maybe a predictive maintenance AI, gets classified as high risk because, you know, if it fails, it could cause safety issues.
00:02:11: Well, the company using it is liable.
00:02:13: And the liability, that's the real kicker here, isn't it?
00:02:16: These aren't small fines.
00:02:17: No, they're massive.
00:02:18: King specifically pointed out the maximum penalties are staggering: up to thirty-five million euros or seven percent of global turnover, whichever is higher.
00:02:27: Wow.
00:02:27: OK, that definitely elevates AI governance beyond just an IT issue.
00:02:31: Absolutely.
00:02:32: It's core risk management now.
00:02:33: So to get started, he suggests this non-negotiable three step approach.
00:02:37: First, inventory and classify all your internal AI systems.
00:02:41: Know what you have.
00:02:42: Makes sense.
00:02:43: Second, establish governance, really focusing on, like, eliminating biased data.
00:02:48: And third, implement human oversight.
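For readers who want to make that checklist concrete, step one could be sketched as a simple system inventory with risk classification. This is a hypothetical illustration only: the system names, purposes, and tier labels are invented, and the tiers only loosely mirror the EU AI Act's risk categories; it is not legal advice.

```python
from dataclasses import dataclass

# Risk tiers loosely mirroring the EU AI Act's categories (illustrative only).
RISK_TIERS = ("minimal", "limited", "high", "prohibited")

@dataclass
class AISystem:
    name: str
    purpose: str
    risk_tier: str

    def __post_init__(self):
        # Classification step: reject anything outside the known tiers.
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")

# Step one: inventory every internal AI system, however mundane.
inventory = [
    AISystem("cv-screener", "HR candidate shortlisting", "high"),
    AISystem("maint-predict", "predictive maintenance", "high"),
    AISystem("helpdesk-bot", "internal IT FAQ chatbot", "limited"),
]

# The high-risk entries are the ones that need documented human oversight.
needs_oversight = [s.name for s in inventory if s.risk_tier == "high"]
```

Even a toy register like this forces the question the episode raises: do you actually know which of your everyday tools would land in the high-risk bucket?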
00:02:50: Okay, but that human oversight piece, it raises a huge red flag for me around accountability.
00:02:56: We saw that stark example with the Deloitte case study that was mentioned.
00:02:59: Is human oversight really, you know, the silver bullet regulators think it is?
00:03:02: That's probably the biggest point of friction right now.
00:03:06: Both Jonas Christensen and Patrizio Bertini warned leaders about what they call AI slop.
00:03:11: AI slop?
00:03:12: Yeah.
00:03:13: where the AI spits something out so fast and confidently that humans just sort of outsource their own judgment.
00:03:20: And that leads to costly errors or eroded trust almost immediately.
00:03:24: Like that Deloitte example, fake citations in a government report, that wasn't just the AI failing, was it?
00:03:30: No, that was a textbook failure of human judgment layered on top.
00:03:34: And Harpreet Suri, who's CTO at a global law firm, really reinforced this.
00:03:38: She said accountability and auditability have to be non-negotiable, especially in regulated sectors like legal tech.
00:03:45: But okay, if we mandate human review, are we sure we're not just like relying on flawed ideas about how humans actually behave?
00:03:52: That's the exact nuance Louisa Drovsky brought up, drawing from a European data protection supervisor report.
00:03:58: She pointed out these problematic assumptions we make, like assuming automation doesn't influence human judgment.
00:04:03: Which it clearly does.
00:04:04: Right.
00:04:05: or that just combining human and machine input automatically makes things better.
00:04:09: The reality, she argues, is that mandated oversight isn't some magic wand.
00:04:14: It means real training, careful workflow design, otherwise people just burn out or click accept without thinking.
00:04:21: So okay, the pressure's high.
00:04:22: Legal and procurement teams must be scrambling for guardrails.
00:04:26: What practical advice did we see on the procurement side?
00:04:30: Martin B laid out seven steps for bulletproof AI procurement, as he called it. Two really jumped out.
00:04:37: First, he stressed making private AI the default option.
00:04:40: Private AI as default, meaning...
00:04:42: Meaning demanding dedicated cloud environments or, you know, VPC options.
00:04:47: Basically, keep your proprietary data contained.
00:04:49: Don't just feed it into some external multi-tenant model where you lose control.
00:04:52: Okay, critical security boundary, got it.
00:04:54: What was the other key step?
00:04:55: It's about rethinking contracts.
00:04:57: You can't really use traditional binary service level agreements, SLAs, for systems like generative AI that are probabilistic.
00:05:04: They don't just work or not work.
00:05:06: Right.
00:05:06: It's about likelihoods.
00:05:07: Exactly.
00:05:08: So Martin B emphasized contracting for probabilistic SLAs or PSLAs.
00:05:13: You need confidence targets.
00:05:15: Like, the model will be accurate ninety-five percent of the time, not just a simple yes-no guarantee.
00:05:21: That's a pretty fundamental shift in how you buy enterprise software.
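To make the PSLA idea tangible, here is a minimal sketch of how a buyer might verify a probabilistic accuracy target from a sample of audited model outputs. The function name and the Wilson-score lower bound are our illustration, not anything Martin B specified.

```python
import math

def psla_check(successes: int, trials: int,
               target: float = 0.95, z: float = 1.645) -> tuple[bool, float]:
    """Does the sampled accuracy support the contracted target?

    Uses a one-sided Wilson score lower bound (z = 1.645, ~95% confidence),
    so a vendor can't pass the SLA on a lucky small sample.
    """
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = p + z**2 / (2 * trials)
    margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    lower = (centre - margin) / denom
    return lower >= target, lower

# e.g. 980 correct answers in a 1000-output audit sample:
ok, bound = psla_check(980, 1000, target=0.95)
# the lower confidence bound (~0.971) clears the 0.95 target, so ok is True
```

The point of the confidence bound is exactly the episode's: a probabilistic system is contracted on likelihoods, not on a binary works-or-doesn't guarantee.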
00:05:24: It really sounds like the guardrails are going up on existing AI.
00:05:28: But how are we handling the next wave?
00:05:30: It feels like there's this weird tension.
00:05:32: Everyone's terrified of compliance, but simultaneously pouring all this hype into AI agents.
00:05:37: That captures the market perfectly.
00:05:39: Yeah.
00:05:39: The agent debate is absolutely raging.
00:05:41: You've got skeptics on one side, like Sebastian Hewing and Eduardo Ordex.
00:05:46: They argue a lot of these agents are just glorified if-then statements.
00:05:50: Just fancy scripts.
00:05:52: Pretty much.
00:05:53: Ordex pointed out the failure rate for AI projects is still stubbornly high, around eighty percent, whether you call them agents or not.
00:05:58: So the warning is: don't add complexity with a reasoning agent if simple, cheap automation actually does the job better.
00:06:05: That's a fair warning.
00:06:07: But what is the difference then between a glorified script and what some people call an intelligent partner?
00:06:12: Yeah.
00:06:12: How should leaders think about that?
00:06:13: Mm-hmm.
00:06:14: Nandan Molakara offered a pretty solid framework here.
00:06:17: He defines an advanced agent as one that can intelligently plan its actions, coordinate different systems, learn from feedback, and crucially, adapt its responses.
00:06:27: Plan, coordinate, learn, adapt.
00:06:29: OK.
00:06:29: Yeah.
00:06:30: And Tony Moroni, referencing some KPMG research, even broke them down into four types for the enterprise.
00:06:35: Taskers, automators, collaborators and orchestrators.
00:06:39: They're really designed for multi-step autonomous work.
00:06:42: Okay, if they're that complex and autonomous, designing them for enterprise scale must be a beast.
00:06:47: How do you avoid just creating spaghetti architecture?
00:06:50: Yeah, that's where architecture becomes really strategic.
00:06:53: Philip Paraguaya gives some great advice: explicitly avoid the god-object anti-pattern.
00:06:59: The god-object?
00:06:59: Yeah, that single monolithic agent trying to do everything, search, write code, make coffee, you name it.
00:07:05: He says the smarter, more resilient approach is building specialized collaborative agents, like a fellowship, he called it.
00:07:11: A fellowship of specialized agents.
00:07:13: I like that metaphor.
00:07:14: Does that actually work in practice?
00:07:16: Are companies doing this?
00:07:18: We saw a fantastic case study about Walmart's approach summarized by Gustavo Valguana.
00:07:23: He called it a masterclass in scaling innovation with purpose.
00:07:27: They design specific super agents for specific roles, one for customers, one for associates, partners, developers.
00:07:36: So role-based.
00:07:37: Exactly.
00:07:38: Role awareness ensures they integrate cleanly.
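The fellowship idea can be sketched in a few lines: role-aware routing to small specialized agents instead of one god-object. The agent names and roles below are invented for illustration; in a real system each `run` callable would wrap an LLM call rather than a lambda.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    roles: set                  # which user roles this agent is specialized for
    run: Callable[[str], str]   # in practice this would wrap an LLM call

class Router:
    """Role-aware dispatch: the opposite of a monolithic god-object agent."""

    def __init__(self, agents: list):
        self.agents = agents

    def dispatch(self, role: str, request: str) -> str:
        for agent in self.agents:
            if role in agent.roles:
                return agent.run(request)
        raise LookupError(f"no specialized agent for role {role!r}")

# Role-based agents in the spirit of the Walmart example
router = Router([
    Agent("customer_agent", {"customer"}, lambda q: f"[customer] {q}"),
    Agent("associate_agent", {"associate"}, lambda q: f"[associate] {q}"),
    Agent("developer_agent", {"developer"}, lambda q: f"[developer] {q}"),
])
```

The design point is the one the episode makes: each agent stays small and auditable, and the router, not a single do-everything prompt, carries the role awareness.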
00:07:40: And this whole shift is powering what Michael Backsbogard termed agentic commerce, moving B2B e-commerce beyond just personalization to actual autonomous execution, like procurement agents acting directly for the business.
00:07:54: And the tooling is evolving super fast too, right?
00:07:57: Gianni Beggiato mentioned OpenAI's AgentKit.
00:08:00: How might that change things?
00:08:01: Beggiato thinks AgentKit could seriously shake up the existing automation platforms.
00:08:05: Its big advantage is default distribution.
00:08:07: It ships within the huge OpenAI ecosystem.
00:08:10: So easier adoption.
00:08:11: Much easier.
00:08:12: Companies might skip lengthy external vendor approvals.
00:08:15: Plus, it apparently has native governance features like audit logs built in, which could really speed up deployment.
00:08:20: Okay, speed sounds good, but autonomy plus speed plus money.
00:08:25: That sounds risky.
00:08:26: Bora girl raised an alarm about the AI goes shopping problem.
00:08:30: A very necessary alarm.
00:08:31: Yeah, Girl highlighted that with things like OpenAI's Agentic Commerce Protocol, or ACP, integrating with payment providers like Stripe and Shopify, we're essentially building autonomous financial transaction capabilities before we've really nailed autonomous security.
00:08:40: Yikes.
00:08:47: Yeah, the danger is AI's potential for, let's say, impulsive tendencies being exploited.
00:08:54: So robust security, financial guardrails, those have to be absolutely locked down before we let autonomous agents loose with the company credit card, or even our own.
00:09:02: Okay, let's shift gears from the tech to the strategy side.
00:09:05: There seems to be this frankly embarrassing gap between how excited leaders say they are about AI and the actual measurable business value they're getting.
00:09:13: Oh, absolutely.
00:09:13: The BCG research, which Warren Greenberg summarized nicely, put a pretty brutal number on it.
00:09:18: He talks about the five percent.
00:09:19: The five percent club?
00:09:20: Yeah, apparently only about five percent of companies are generating real, measurable value from AI.
00:09:27: And those companies, they see something like three point six times better shareholder returns.
00:09:32: The other ninety five percent are kind of stuck in pilot purgatory, not getting much ROI.
00:09:36: Wow, only five percent.
00:09:37: What's the single biggest thing separating that five percent from everyone else?
00:09:41: Joint business IT ownership, it came through really clearly in the research.
00:09:45: If the AI strategy is just driven by the IT department alone, it's, well, highly likely to fail.
00:09:51: So the business leaders have to own it?
00:09:52: They have to own the roadmap and the outcomes, which leads straight to the core issue Greenberg identified.
00:09:57: Seventy percent of AI failures aren't actually about the tech.
00:09:59: They're about people, structure, broken processes. Only about twenty percent are actual technical problems.
00:10:05: Okay, so if the tech itself is mostly fine, leaders need a better way to define success than just chasing technical metrics like, I don't know, F1 scores.
00:10:14: Exactly that.
00:10:15: Torben Nielsen, drawing on HBR takeaways, really stressed starting with strategy.
00:10:20: Don't let AI drive your strategy.
00:10:22: Use AI to support your existing core objectives.
00:10:26: He said, leaders need to ask, are we doing this out of FOMO or to genuinely achieve a leap in value?
00:10:31: Good question.
00:10:32: And Cassie Kozyrkov echoed this.
00:10:34: She emphasizes that real AI leaders prioritize clarity, framing the decisions correctly, focusing on usefulness, not just chasing, as she put it, easy-to-measure metrics.
00:10:47: That idea of just adopting tech without clarity, it sounds like a fast track to what Martin Mohler called AI debt.
00:10:53: What's that about?
00:10:54: AI debt is basically when organizations slap shiny new AI tools onto already broken, convoluted processes.
00:11:01: Instead of fixing the underlying mess, they just automate the chaos.
00:11:04: Which makes things worse.
00:11:05: It compounds the dysfunction, and it fuels digital exhaustion, which Mohler noted has climbed up to eighty-four percent.
00:11:11: Real transformation, the successful kind, needs that hard step of redesigning workflows first.
00:11:16: Helen Orgus saw similar readiness gaps, particularly in German companies she observed.
00:11:21: So success really hinges on getting to Foundational, maybe less glamorous things, right?
00:11:26: Data quality and rethinking the role of engineers.
00:11:29: Absolutely fundamental.
00:11:30: The old garbage-in, garbage-out principle, Sven Ruse reminded everyone, is more critical than ever with AI.
00:11:37: The output quality is entirely dependent on the input data quality.
00:11:41: That includes needing current high quality media content for reliable info.
00:11:46: Right,
00:11:46: and the engineers.
00:11:47: Sendit Mehta proposed a really interesting idea.
00:11:50: Maybe we need to totally rethink coding interviews.
00:11:53: If AI is writing a lot of the basic code now, should we still test pure syntax recall?
00:11:58: Good point.
00:11:59: Or should we test how well candidates can guide the AI, validate its output, solve real problems using it?
00:12:05: The skills needed are clearly shifting pretty dramatically.
00:12:08: Okay, let's zoom out a bit to the broader market impact.
00:12:11: Yeah.
00:12:11: If every company starts using the same powerful AI tools for efficiency, is there a risk that they all start looking, well, the same?
00:12:19: Does it kill strategic distinctiveness?
00:12:21: That's the really sharp, thought-provoking question Richard Foster Fletcher posed.
00:12:26: He suggested Gen AI could act like a kind of ghost consultant embedded in everyone's workflows.
00:12:31: A ghost consultant?
00:12:33: Yeah, subtly pushing all the outputs towards the average, towards consensus thinking, which could potentially narrow the range of truly original ideas or strategies.
00:12:42: So protecting your unique edge might mean actively resisting that homogenizing pull.
00:12:47: So how does a brand stand out if everyone's got the same ghost consultant whispering suggestions?
00:12:52: Daniel Paul argued pretty strongly that founder-led branding, that authentic storytelling, still wins out over generic AI algorithms.
00:13:00: The key is to tune the AI to amplify your unique voice, your founder's perspective, not replace it.
00:13:06: That authentic distinction is what builds loyalty.
00:13:08: And that distinctiveness is crucial, especially now that AI is fundamentally changing how search works, how brands get found.
00:13:15: Which brings us neatly to Ramakrishna S's concept, generative AI engine optimization.
00:13:20: or GEO.
00:13:21: It's the evolution of SEO.
00:13:22: GEO, OK.
00:13:24: Why the new acronym?
00:13:25: Because nearly sixty percent of Google searches are apparently already zero-click.
00:13:30: People get the answer right on the results page.
00:13:32: And AI overviews are just accelerating that trend.
00:13:35: So traditional SEO isn't enough anymore.
00:13:38: So you need to be the answer.
00:13:39: Exactly.
00:13:40: For a brand to survive and thrive, it needs to be featured directly in the AI's generated answer.
00:13:45: GEO is about achieving that.
00:13:47: And how do you do that?
00:13:48: It means focusing on what he calls entity visibility.
00:13:51: Your content needs to be super clear, concise, structured for natural language.
00:13:56: But crucially, you have to demonstrate E-E-A-T.
00:13:58: E-E-A-T?
00:13:59: Experience, expertise, authoritativeness, and trustworthiness.
00:14:03: You prove it through things like expert references, citations, demonstrating real legitimacy.
00:14:09: It's way beyond just optimizing keywords now.
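One concrete way sites expose that kind of entity visibility today is structured data. Here is a hypothetical sketch generating schema.org JSON-LD for an article; every name, title, and URL is made up, and this is our illustration of the general technique rather than anything Ramakrishna S prescribed.

```python
import json

# Machine-readable signals a generative engine can parse: a clearly
# identified author (expertise), a publisher (authority), and citations
# backing the claims (trustworthiness). All values here are invented.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How the EU AI Act treats internal AI deployers",
    "author": {
        "@type": "Person",
        "name": "Jane Doe",
        "jobTitle": "Head of AI Governance",
    },
    "publisher": {"@type": "Organization", "name": "Example Corp"},
    "citation": ["https://example.org/eu-ai-act-guidance"],
}

# A site would embed this in the page inside a
# <script type="application/ld+json"> block.
json_ld = json.dumps(article, indent=2)
```

The markup doesn't replace good content; it just makes the who-says-this-and-why legible to an engine assembling an answer.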
00:14:11: That shift towards proving deep legitimacy feels like a good place to land.
00:14:16: How should leaders kind of philosophically frame AI within their organizations to avoid either panic or maybe overhyped expectations?
00:14:24: Dr. Hans Rousenek, referencing some work at Princeton, had a really useful perspective.
00:14:28: He suggests just framing AI as a normal technology, you know, like electricity or the internet.
00:14:33: Not some mystical force.
00:14:34: Exactly.
00:14:34: Treat it normally.
00:14:35: It helps dial down the exaggerated fears and the unrealistic hype.
00:14:39: When companies approach it practically, they focus less on, you know, the sci-fi stuff and more on the real work: workflow redesign, figuring out liability, disciplined application.
00:14:49: Okay, so if I had to pull out the single biggest lesson from the source material these past two weeks, weeks thirty-nine and forty, it's that AI success right now is defined less by the raw tech capability and much more by the human systems around it.
00:15:02: Governance, culture, discipline, strategy.
00:15:04: That nails it, I think.
00:15:06: Which brings us to a final thought for you, the listener.
00:15:08: We know Gartner predicts both generative AI and these autonomous AI agents are likely headed for the trough of disillusionment within the next, say, two to five years.
00:15:18: So the question is, are you, right now, investing in those foundational people and process capabilities, things like AI literacy and ModelOps, that will actually let you climb the slope of enlightenment quickly once the current hype inevitably cools down?
00:15:33: Something to think about.
00:15:35: If you enjoyed this episode, new episodes drop every two weeks.
00:15:38: Also check out our other editions on ICT and tech, digital products and services, cloud, sustainability and green ICT, defense tech and health tech.
00:15:47: Thanks for tuning in to the deep dive.
00:15:49: If you're a strategy or product leader trying to cut through the noise and find actionable intelligence, definitely subscribe so you don't miss our next analysis of the week's most critical tech insights.