Best of LinkedIn: Artificial Intelligence CW 41/42
Show notes
We curate the most relevant posts about Artificial Intelligence on LinkedIn and regularly share key takeaways.
This edition offers a broad overview of the rapidly maturing landscape of Artificial Intelligence, focusing heavily on the necessary shift from experimental pilots to industrial-grade, scalable production. Several authors stress that achieving measurable business outcomes requires a strategic, engineering-focused mindset rather than relying on generic, off-the-shelf tools, with one source noting a high failure rate for AI projects. A critical theme is AI infrastructure and sovereignty, as massive global investments are funneled into data centres and specialised hardware, with providers like Oracle and AMD challenging Nvidia's dominance while nations prioritize local control over data and models. Furthermore, numerous discussions highlight the urgency of governance, ethics, and security, advocating for integrated cybersecurity, transparent model explainability, and the importance of addressing cultural biases and the potential for new data monopolies. Finally, the sources acknowledge the rise of Agentic AI, which is transforming workflows and requiring leaders to focus on human readiness and building robust systems that amplify, rather than replace, human expertise.
This podcast was created via Google NotebookLM.
Show transcript
00:00:00: This episode is provided by Thomas Allgaier and Frenes based on the most relevant LinkedIn posts about artificial intelligence in calendar weeks, forty one and forty two.
00:00:09: Frenes supports enterprises with market and competitive intelligence, decoding emerging technologies, customer insights, regulatory shifts and competitor strategies.
00:00:18: So product teams and strategy leaders don't just react, but shape the future of AI.
00:00:24: Welcome back.
00:00:26: So today we're really going to dig into the top artificial intelligence trends we've been seeing across professional insights from the last couple of weeks.
00:00:34: Right.
00:00:34: The mission really is to figure out where things are shifting, you know, moving past the hype, the pilots towards disciplined, measurable execution, real enterprise stuff.
00:00:43: Exactly.
00:00:44: Execution, that's the word.
00:00:45: Yeah.
00:00:45: The conversation feels like it's grown up a bit.
00:00:48: We've kind of clustered the insights for you today into three main areas.
00:00:51: Okay.
00:00:52: We'll kick off with the foundation: infrastructure, the chip wars, and how these new strategic moats are being dug.
00:01:00: Then we'll jump into the application layer, particularly agentic AI and this whole workflow revolution people are talking about.
00:01:06: Yeah, that's a big one.
00:01:07: And finally, we have to tackle the non-technical side, which is often the hardest part.
00:01:12: Culture, risk, and how to adopt this responsibly.
00:01:16: Good plan.
00:01:17: So let's unpack that foundation first.
00:01:19: What's really fascinating, I think, is the high stakes game being played around compute power.
00:01:24: Definitely.
00:01:24: The whole idea of an AI moat is absolutely shifting.
00:01:28: It's not just about having the best model anymore.
00:01:30: It's becoming about who owns the entire pipeline.
00:01:34: The whole stack.
00:01:34: The whole vertical compute pipeline, yeah.
00:01:37: From the silicon right up to the user.
00:01:38: And maybe the biggest sign of this huge shift is seeing NVIDIA's grip on the GPU market start to loosen a bit.
00:01:46: Right.
00:01:46: The standard for AI infrastructure for so long.
00:01:49: Exactly.
00:01:50: And Nehanth Maidu-Kalasetti pointed this out.
00:01:52: Oracle confirmed they're deploying what?
00:01:54: Fifty thousand AMD AI chips?
00:01:56: The MI450s?
00:01:57: Starting mid-twenty-twenty-six.
00:01:59: Fifty thousand?
00:02:01: That is a, well, a staggering number.
00:02:04: And it's not just about having a second source, right?
00:02:06: This looks like a really major alliance taking shape.
00:02:09: Oracle, AMD.
00:02:10: And
00:02:10: OpenAI.
00:02:11: They're making a huge bet that owning and optimizing these specific compute pipelines, well, that's a new oil field, basically.
00:02:19: It
00:02:19: feels like it.
00:02:20: I mean, that Oracle Cloud deal with OpenAI alone, potentially worth up to three hundred billion dollars.
00:02:25: Three
00:02:25: hundred billion, yeah.
00:02:27: That valuation tells you everything.
00:02:30: about the race cloud providers are in to lock down supply and, you know, integrate hardware design right into their services.
00:02:36: It really is a full stack war now.
00:02:38: Makes it much tougher for newcomers trying to compete just on the model itself.
00:02:41: But,
00:02:41: okay, even if you figure out the chip supply, you immediately bump up against some pretty fundamental limits.
00:02:46: Walter Zuzavi highlighted this for major economies.
00:02:49: Like
00:02:49: the physical barriers.
00:02:50: Exactly.
00:02:51: In places like Germany, for instance, AI ambition is hitting a, well, a very hard wall.
00:02:56: Just basic data center power and capacity.
00:02:58: I saw those numbers. The planned IT load for twenty thirty is something like three point seven gigawatts.
00:03:04: Yeah, but the projected demand could be closer to like, five point nine gigawatts.
00:03:09: Wow.
00:03:09: That gap is two point two gigawatts.
00:03:11: That's a massive shortfall.
00:03:13: It definitely shifts capacity planning from just operations to a serious strategic risk at the board level.
00:03:20: Absolutely.
00:03:20: You can't really chase ambitious AI strategies, especially regulated ones, or build out sovereign clouds if you literally can't guarantee the power to run the hardware.
00:03:30: And that leads right into the next bottleneck, doesn't it?
00:03:33: connectivity.
00:03:34: It does.
00:03:35: Alan Weckel noted that these huge AI networks often stumble not because the GPUs aren't fast enough, but because of the underlying network fabric.
00:03:42: Ah, the fabric.
00:03:43: So for people maybe less deep in networking, we're talking the high-speed connections linking all those thousands of GPUs.
00:03:50: Precisely.
00:03:51: Optimized non-blocking Ethernet designs.
00:03:53: If that fabric isn't absolutely perfect, the second you try to distribute a really large training job, performance just tanks.
00:03:59: Right.
00:03:59: It grinds to a halt.
00:04:00: Yeah.
00:04:01: The fabric is like the central nervous system.
00:04:04: It has to handle enormous amounts of data, all moving in parallel, no collisions.
00:04:09: These non-blocking designs are basically non-negotiable now for the kind of scale you need in, say, neoclouds or sovereign clouds.
00:04:16: Exactly.
00:04:16: And Rick Moore mentioned solutions like Lumen's Wavelength Rapid Routes offering, you know, four hundred Gbps dedicated links.
00:04:23: They're specifically built to kill those bottlenecks, especially when your architecture is spread out geographically.
00:04:28: Okay, so... That focus on connectivity, performance at scale, it ties directly into what Armand Ruiz observed, doesn't it?
00:04:37: This shift in industry focus away from the super expensive training phase.
00:04:41: Towards
00:04:41: inference, yeah.
00:04:42: Inference.
00:04:43: Model deployment, execution.
00:04:45: That's what dominates the long-term AI workload.
00:04:47: Training is that big upfront cost, sure, but... inference is the ongoing operational cost.
00:04:52: So the drive for efficiency there, especially at the edge where the data often is, that's where the real ROI is going to be.
00:04:58: That's
00:04:58: the thinking.
00:04:59: It's all about efficient execution for lasting value.
00:05:02: I do wonder though, if the focus shifts so hard to inference efficiency, does that mean training budgets actually shrink eventually?
00:05:10: Or does the complexity just keep pushing the training costs up anyway?
00:05:13: That's the million dollar question, isn't it?
00:05:14: Or maybe the billion dollar question now.
00:05:16: But the immediate goal is definitely ROI, getting value out of these things.
00:05:20: And that sets us up perfectly for our next theme, the agentic enterprise.
00:05:24: Right.
00:05:24: Because that need for efficient inference is exactly what fuels the drive for these more autonomous systems, systems that can actually handle multi-step workflows.
00:05:34: We are definitely seeing that shift.
00:05:36: Moving beyond just static tools to systems that can, you know, reason, plan, act on their own, Tony Moroni laid out four ways these agents could unlock serious enterprise value.
00:05:47: Okay, like what?
00:05:48: Well, expanding automation, obviously.
00:05:50: Having systems available twenty-four seven,
00:05:52: building more organizational resilience, and, maybe most importantly, turning all the dormant knowledge companies have directly into action.
00:06:01: Verifiable
00:06:01: action.
00:06:02: And to do that, you need the right technical underpinning.
00:06:05: That's why Eduardo Ordex called agentic RAG plus memory the missing link.
00:06:10: OK, so rag, retrieval augmented generation, most people know that.
00:06:13: It's how you ground the model in your own documents, right?
00:06:15: Yeah.
00:06:15: Stops it from just making things up.
00:06:16: Exactly.
00:06:17: Stops the hallucinations or tries to.
00:06:19: But agentic RAG plus memory adds three really key things on top.
00:06:25: An orchestrator, specific memory functions, and these reasoning loops, like ReAct.
00:06:30: OK.
00:06:31: which lets the system actually plan dynamically.
00:06:34: Not just grab a chunk of text, but break down a complex task, search iteratively, synthesize info over multiple steps.
00:06:40: You know, solve real messy enterprise problems.
00:06:43: So instead of just asking what was Q three revenue and getting a number, you could ask it to analyze Q three revenue against Q four cost projections and suggest three ways to cut costs.
00:06:54: Something that needs planning.
00:06:56: That's the idea.
00:06:56: That's the practical leap forward.
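To make that concrete, here's a minimal Python sketch of a ReAct-style loop, plan, retrieve, observe, repeat, with toy stand-ins (`retrieve`, `plan_step`, the `DOCS` data) in place of a real vector store and LLM; none of these names or details come from the sources.

```python
# Sketch of a ReAct-style agentic RAG loop: plan, retrieve, observe, repeat.
# `retrieve` and `plan_step` are stand-in stubs; a real system would call a
# vector store and an LLM here.

DOCS = {
    "q3_revenue": "Q3 revenue was $12M.",
    "q4_costs": "Q4 cost projection is $9M.",
}

def retrieve(query: str) -> str:
    """Toy retriever: look up a document by keyword match."""
    for key, text in DOCS.items():
        if key in query:
            return text
    return "no result"

def plan_step(task: str, memory: list[str]) -> tuple[str, str]:
    """Toy planner standing in for an LLM ReAct step.
    Returns (action, argument): either a search or a final answer."""
    if not any("revenue" in m for m in memory):
        return ("search", "q3_revenue")
    if not any("cost" in m for m in memory):
        return ("search", "q4_costs")
    return ("answer", " ".join(memory))

def run_agent(task: str, max_steps: int = 5) -> str:
    memory: list[str] = []            # working memory across reasoning steps
    for _ in range(max_steps):
        action, arg = plan_step(task, memory)
        if action == "answer":
            return arg                # synthesis over everything observed
        memory.append(retrieve(arg))  # observation fed back into the loop
    return "gave up"

print(run_agent("analyze Q3 revenue against Q4 cost projections"))
```

The point of the loop is exactly the leap described above: the system doesn't grab one chunk of text, it iterates, carrying observations forward until it can synthesize an answer.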
00:06:58: And we're seeing this pop up in actual commerce.
00:07:00: Hugo Caterino mentioned how in agentic search, these assistants are trying to unify discovery, trust, and even settlement.
00:07:06: So search relevance suddenly has to include checking price, stock, whether the payment method works.
00:07:13: because if the agent suggests buying it, it better be immediately doable.
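A toy illustration of that widened notion of relevance, with product data and field names invented for the sketch rather than drawn from any real commerce API: a result only counts if the purchase is immediately settleable.

```python
# Toy agentic-search filter: a result is only "relevant" if the purchase is
# immediately doable -- in stock, priced within budget, payment supported.
# All fields and catalog data are invented for illustration.

from dataclasses import dataclass

@dataclass
class Offer:
    name: str
    price: float
    in_stock: bool
    payment_methods: set

def buyable(offer: Offer, budget: float, user_payment: str) -> bool:
    # Discovery, trust, and settlement unified in one relevance check.
    return (offer.in_stock
            and offer.price <= budget
            and user_payment in offer.payment_methods)

catalog = [
    Offer("basic kettle", 25.0, True, {"card", "paypal"}),
    Offer("fancy kettle", 90.0, True, {"card"}),
    Offer("sold-out kettle", 20.0, False, {"card"}),
]

# Only offers the agent could actually settle survive the search.
results = [o.name for o in catalog if buyable(o, budget=50.0, user_payment="card")]
print(results)  # → ['basic kettle']
```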
00:07:17: That really raises the stakes, doesn't it?
00:07:19: The liability and the expectations on the AI in a transaction?
00:07:22: It absolutely does, but it also creates opportunities.
00:07:25: Anna Lerner-Nesbitt was talking about the Walmart and open AI shopping experience.
00:07:30: Oh, yes,
00:07:30: all
00:07:30: that.
00:07:31: And she raised a really interesting point.
00:07:33: Agents could be a huge opportunity to bake in sustainable choices from the start.
00:07:38: How so?
00:07:39: Well, they could act like automated choice architects, right?
00:07:43: Prioritizing efficiency, maybe suggesting lower waste options, guiding users towards more sustainable choices across the value chain, potentially without needing constant human intervention.
00:07:54: That's a powerful concept.
00:07:55: Using automation to nudge behavior towards positive outcomes.
00:07:59: But we have to get real about deployment.
00:08:01: It's proving really tough.
00:08:02: Oh, absolutely.
00:08:04: Virtrungland.DA cited an HBR report pointing out that most companies are really struggling to scale AI beyond the pilot
00:08:10: stage.
00:08:10: Why is that?
00:08:11: Performance drops, unpredictable costs, and big security gaps are the main culprits.
00:08:16: Okay, so what's the fix?
00:08:17: The emerging consensus seems to be around something called generative operations, or GenOps.
00:08:22: Think MLO's rigor, but specifically tailored for these generative models.
00:08:27: So a structured way to manage deployment.
00:08:29: Exactly.
00:08:30: Automated monitoring, drift detection, cost analysis, governance checks baked in before, during, and after deployment.
00:08:36: Without that kind of operational discipline, these agents just won't scale responsibly or maybe even safely.
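As a rough illustration of what "baked in before, during, and after deployment" can look like, here's a toy GenOps-style gate in Python; the class, checks, and thresholds are invented for the sketch, not taken from any particular GenOps tooling.

```python
# Illustrative GenOps-style gate: governance, cost, and drift checks wrapped
# around every model call, rather than patched on afterwards.

from statistics import mean

class GenOpsGate:
    def __init__(self, cost_budget: float, drift_window: int = 20):
        self.cost_budget = cost_budget    # max spend before calls are blocked
        self.spent = 0.0
        self.latencies: list[float] = []
        self.drift_window = drift_window

    def pre_check(self, prompt: str) -> None:
        # Governance check runs *before* the model ever sees the prompt.
        if "ssn" in prompt.lower():
            raise ValueError("governance: sensitive data in prompt")
        if self.spent >= self.cost_budget:
            raise RuntimeError("cost: budget exhausted")

    def post_check(self, latency: float, cost: float) -> bool:
        # Track spend, and flag drift if latency jumps well above the
        # rolling mean once we have enough history to judge.
        self.spent += cost
        self.latencies.append(latency)
        window = self.latencies[-self.drift_window:]
        return len(window) > 5 and latency > 2 * mean(window[:-1])

gate = GenOpsGate(cost_budget=1.0)
gate.pre_check("summarize this quarterly report")
drifted = gate.post_check(latency=0.3, cost=0.01)  # one healthy call: no drift flagged
print(drifted)
```

The design choice is the same one the discussion lands on: monitoring, drift detection, and cost control are part of the call path itself, not a separate audit after the fact.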
00:08:42: Which brings us right to our last theme.
00:08:44: Culture, risk, and responsible adoption.
00:08:47: Yeah.
00:08:48: Because the best infrastructure and the smartest agents, well, they're kind of useless if the organization isn't ready.
00:08:54: That really is the heart of it, isn't it?
00:08:56: The biggest hurdle often isn't technical at all.
00:08:58: There is an IBM survey of execs.
00:09:01: Yeah.
00:09:01: Sixty-seven percent expect agents to act independently by twenty-twenty-seven.
00:09:05: Huge number.
00:09:07: But the main barrier they see, not the tech, it's
00:09:10: people,
00:09:11: mindset, organizational readiness.
00:09:13: It always comes back to the people.
00:09:15: Pretty, WaterD made that point really clearly.
00:09:18: Sustainable AI impact comes from prepared people, not just powerful platforms.
00:09:22: Right.
00:09:23: And the stats back it up.
00:09:24: Something like seventy-eight percent of global orgs are using AI now, but fewer than one percent are actually considered fully mature in how they use it.
00:09:32: And you see that maturity gap reflected nationally too.
00:09:35: Donor Mustafa highlighted the situation in Sweden pretty striking.
00:09:39: Seventy-seven percent of Swedes using AI at work have had zero formal education or training on it.
00:09:46: Seventy-seven percent.
00:09:47: Wow, yeah, that's...
00:09:49: That's not an employee problem, that's a leadership failure, surely.
00:09:51: Absolutely.
00:09:52: When adoption happens bottom-up like that,
00:09:54: and leadership doesn't provide guardrails, doesn't provide training, that's a massive risk.
00:09:59: Which ties into Navine Balani's reminder that AI really needs to be treated as an engineering discipline, not just, you know, a bunch of cool experiments.
00:10:07: Totally.
00:10:07: The differentiator isn't just having the latest LLM or a co-pilot.
00:10:11: It's the engineering mindset behind how you deploy, manage, and integrate these tools into critical workflows.
00:10:17: Rigor matters.
00:10:18: And that rigor needs to extend down to the end user too.
00:10:22: Daniel Paul has some really practical advice on getting more out of existing models, like GPT-5.
00:10:27: Yeah.
00:10:28: Using its different modes properly.
00:10:30: like instant mode for quick ideas, thinking mode for strategy, pro mode for deep, precise work, and auto mode if you're not sure. Moving beyond just tossing random prompts at it.
00:10:40: That
00:10:40: makes sense.
00:10:41: Getting intentional about how you interact with it, because that attention to detail is critical, especially when you factor in the ethical stuff and the risks involved.
00:10:48: Definitely.
00:10:48: And one of the most sobering things we saw came from Dr.
00:10:51: Hans Ruznick citing that Harvard study.
00:10:53: The
00:10:54: WEIRD bias one.
00:10:55: Exactly.
00:10:55: Showing how LLMs are heavily biased towards Western, Educated, Industrialized, Rich, and Democratic (WEIRD) perspectives.
00:11:02: That's a fundamental challenge.
00:11:03: If the training data is skewed towards one narrow cultural viewpoint, how fair or applicable are these models globally?
00:11:11: It raises huge questions about governance, trust, especially for decisions impacting diverse groups.
00:11:17: For sure.
00:11:17: And even when the output sounds right, Cassie Kozyrkov warns about the fluency heuristic.
00:11:23: Ah, yeah, the tendency to trust something just because it sounds smooth and confident.
00:11:26: Exactly.
00:11:27: It's a dangerous psychological trap.
00:11:29: The AI sounds so sure of itself that humans might skip the crucial step of actually verifying the output.
00:11:36: Which can lead to really costly mistakes, right?
00:11:38: Because the model will hallucinate or make subtle errors eventually.
00:11:41: You almost have to default to assuming it's wrong until you prove it right.
00:11:45: And the final piece of the risk puzzle.
00:11:47: Security.
00:11:48: with these autonomous agents.
00:11:50: Hussaini Savawala pointed out that AI coding agents, well, they're not just speeding up development.
00:11:56: They're speeding up vulnerability creation, too.
00:11:58: Unfortunately, yeah.
00:11:59: Replicating security flaws at enterprise scale incredibly quickly.
00:12:04: So what's the answer there?
00:12:06: It seems to be what they call data-first agentic security.
00:12:09: Basically, build security in from the start.
00:12:12: prevent sensitive data from even getting into the model's context if possible.
00:12:16: Don't try to just patch things later.
00:12:18: Makes sense.
00:12:19: Security as a foundation, not an afterthought.
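A minimal sketch of that data-first idea: scrub sensitive values before they ever reach the model's context instead of auditing outputs afterwards. The regex rules here are illustrative only, nowhere near a complete PII ruleset, and none of the names come from the sources.

```python
# Sketch of "data-first" filtering: sensitive spans are replaced before the
# text enters the prompt context. Patterns are illustrative, not exhaustive.

import re

REDACTION_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),        # US SSN shape
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),      # card-like digit runs
]

def scrub(text: str) -> str:
    """Replace sensitive spans so they never enter the model's context."""
    for pattern, label in REDACTION_RULES:
        text = pattern.sub(label, text)
    return text

def build_prompt(user_text: str) -> str:
    # Security as a foundation: the model only ever sees scrubbed input.
    return f"Answer based on: {scrub(user_text)}"

print(build_prompt("Customer 123-45-6789 wrote from jane@example.com"))
```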
00:12:22: So pulling it all together from the insights these past couple of weeks.
00:12:25: Yeah.
00:12:26: What's the big picture?
00:12:28: It really feels like the market's moving past the initial hype, getting down to the foundations.
00:12:32: Definitely maturing.
00:12:34: Yeah.
00:12:34: And that demands serious strategic investment in infrastructure, whether it's new chip deals or perfecting that network fabric.
00:12:42: But maybe even more crucial is the shift needed in company culture and engineering discipline to actually manage these powerful new agentic systems properly.
00:12:51: And Kira Dotson put it pretty bluntly.
00:12:53: Yeah.
00:12:54: The era of AI for the sake of AI is over.
00:12:57: Right.
00:12:57: So the question every organization needs to ask itself now is, are we still chasing vanity metrics, just looking busy with AI?
00:13:04: Or are we truly focused on delivering measurable enterprise grade value that's going to last beyond the current hype cycle and actually reshape how we operate?
00:13:13: That really is the key differentiator now.
00:13:15: Focus on real value.
00:13:16: Exactly.
00:13:17: Are you building something that endures or just riding the wave?
00:13:20: Well said.
00:13:21: If you enjoyed this deep dive, new episodes drop every two weeks.
00:13:25: Also check out our other editions on ICT and tech.
00:13:28: Digital products and services, cloud, sustainability and green ICT, defense tech and health tech.
00:13:34: Thanks for tuning in.
00:13:35: We really encourage you to subscribe so you don't miss our continued deep dives into the insights that matter most for your strategy.