Best of LinkedIn: Artificial Intelligence CW 43/44
Show notes
We curate the most relevant posts about Artificial Intelligence on LinkedIn and regularly share key takeaways.
This edition provides a comprehensive overview of the accelerating global landscape of Artificial Intelligence, focusing heavily on governance, innovation, and strategic deployment. A significant theme is the EU AI Act, which is repeatedly highlighted as setting a new, extraterritorial standard for compliance, risk classification, and ethical AI governance, impacting businesses worldwide and creating a "compliance crisis" and a "moat" for those who adapt early. Concurrently, several sources discuss major infrastructure investments globally, particularly in the Middle East (UAE and Saudi Arabia) and Europe (Germany), emphasizing the push for Sovereign AI capabilities, large-scale AI data centers, and advanced compute power through partnerships like Cisco/NVIDIA and Telekom/SAP. Finally, the sources explore the practical and philosophical challenges of AI, including mitigating algorithmic bias, improving AI creativity through methods like Verbalized Sampling, addressing security risks from AI-generated code and prompt engineering, and debating the ultimate impact of AI on human labor, purpose, and consciousness.
This podcast was created via Google NotebookLM.
Show transcript
00:00:00: Welcome back to the deep dive.
00:00:01: This episode is provided by Thomas Hallgeier and Frennis based on the most relevant LinkedIn posts about artificial intelligence in calendar weeks forty-three and forty-four.
00:00:10: Frennis supports enterprises with market and competitive intelligence, decoding emerging technologies, customer insights, regulatory shifts and competitor strategies.
00:00:19: So product teams and strategy leaders don't just react, but shape the future of AI.
00:00:24: So today we're diving into the top
00:00:27: AI trends we've seen bubbling up on LinkedIn over the last couple of weeks.
00:00:30: And you know, if you look at the conversation, it really feels like we've shifted.
00:00:33: It's less, can we build cool AI stuff?
00:00:36: And much more, okay, how do we actually govern this and scale it responsibly?
00:00:40: Yeah, it's a massive shift, especially for anyone working in ICT and tech.
00:00:44: We've kind of boiled it down to three main things you really need to keep an eye on.
00:00:47: First.
00:00:48: Governance, it's gone global.
00:00:49: It's mandatory now almost like a competitive edge.
00:00:52: Second, this big race for digital sovereignty, huge investments in local infrastructure.
00:00:56: And third, a real change in focus from just automating everything to figuring out how humans and AI can actually work together effectively.
00:01:04: Let's unpack that.
00:01:05: First one, governance, because it really is like the barrier to entry now for scaling properly.
00:01:10: Our sources are pretty clear.
00:01:12: AI compliance isn't something you tack on later.
00:01:14: It has to be, you know, built in from the start.
00:01:16: And the big driver for that globally really seems to be the EU AI Act.
00:01:21: Stephanie Gradwell pointed out how wide its reach is.
00:01:24: It's hitting UK businesses hard too.
00:01:27: And Wanda R noted it's definitely influencing US companies, especially if their AI touches the European market at all.
00:01:34: Oh,
00:01:34: absolutely.
00:01:35: The financial risk alone is, well, huge.
00:01:37: That's why everyone's suddenly paying attention.
00:01:39: We're talking fines up to seven percent of global turnover.
00:01:42: And for those high-risk systems, they need to be compliant by August twenty twenty-six.
00:01:46: That's not a start-thinking-about-it date.
00:01:48: That's the deadline.
00:01:49: Right.
00:01:50: So, okay, deadline looming.
00:01:52: What's the very first thing you do?
00:01:53: Dr.
00:01:53: Till Klein was saying the absolute first step has to be really rigorous system classification.
00:01:57: You've got to figure out exactly what the AI is for and what risk level it carries.
00:02:01: Get that wrong,
00:02:02: and, well, everything else is kind of pointless,
00:02:03: right?
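For readers of the transcript, that first classification step can be sketched as a simple triage in Python. The four tier names come from the AI Act itself, but the keyword mapping below is purely illustrative and hypothetical, not legal guidance; real classification needs case-by-case review.

```python
# Illustrative triage of AI use cases into the EU AI Act's four risk tiers.
# The tier names are from the Act; the keyword mapping is hypothetical
# and is NOT legal advice.

RISK_TIERS = ("unacceptable", "high", "limited", "minimal")  # most to least severe

# Hypothetical keyword -> tier mapping, for illustration only.
EXAMPLE_MAPPING = {
    "social scoring": "unacceptable",
    "biometric identification": "high",
    "hiring decisions": "high",
    "chatbot": "limited",      # transparency obligations apply
    "spam filter": "minimal",
}

def classify_use_case(description: str) -> str:
    """Return the most severe tier whose keyword appears in the description."""
    text = description.lower()
    matched = [tier for kw, tier in EXAMPLE_MAPPING.items() if kw in text]
    if not matched:
        return "minimal"  # default when nothing riskier matches
    # Lower index in RISK_TIERS means more severe.
    return min(matched, key=RISK_TIERS.index)

print(classify_use_case("AI chatbot that screens candidates for hiring decisions"))
# -> "high": the hiring keyword outranks the chatbot keyword
```

The point of the sketch is the ordering logic: when a system touches several categories, the strictest tier wins, which is why getting the initial classification wrong cascades into everything else.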
00:02:04: Exactly.
00:02:05: And it's complicated because as Jeffrey Sunan and Marie Doha Bessensen-Sano pointed out, it's not just the AI Act.
00:02:12: For this compliance to actually work, to build trust, it needs to mesh perfectly with everything else.
00:02:18: GDPR, obviously the Data Act, the Digital Services Act, it's a whole web.
00:02:22: It sounds incredibly complex.
00:02:24: Yeah.
00:02:24: Especially, you know, when you compare it to what's happening, or maybe not happening as much, in the US.
00:02:29: John Bruton had this really interesting take on the difference.
00:02:32: US funding is way ahead, like a hundred and sixty billion dollars versus Europe's twenty billion dollars.
00:02:36: But the EU is demanding transparency, explainability, traceability, all that auditable stuff.
00:02:42: Yeah, and that difference makes compliance more than just legal paperwork.
00:02:46: It's an engineering problem.
00:02:48: It's an architectural challenge.
00:02:50: John Bruton actually sees building these compliance systems not as a burden, but as a real competitive advantage.
00:02:56: Trust becomes like part of your infrastructure.
00:02:58: And there's a bottom-line reason for that too. Leon Mikolasak shared this IBM study finding, and it's pretty stunning.
00:03:03: Companies that spend more on AI ethics, specifically the top quartile, are seeing thirty percent higher operating profit from their AI initiatives.
00:03:11: Wow, that really connects the dots, doesn't it?
00:03:13: It shows being responsible isn't just about dodging that massive fine, it's about getting people to actually adopt it faster, getting customers on board, and being able to use AI for important stuff without worrying about a huge backlash.
00:03:26: Governance really drives
00:03:27: ROI.
00:03:28: Okay, so that focus on compliance systems naturally leads to the next big thing.
00:03:33: The actual infrastructure needed to run all this governed AI.
00:03:36: Dr.
00:03:37: Nishita Tiagi-Pachori mentioned how big-tech moves here are really shaking up the whole platform ecosystem.
00:03:42: Definitely.
00:03:43: We're seeing this big push away from just relying on, say, U.S.
00:03:47: cloud giants.
00:03:47: It's a race for digital sovereignty, owning your own compute power, and the money involved is just
00:03:53: staggering.
00:03:54: You see that crystal clear in Europe.
00:03:55: Dr.
00:03:55: Ferri Ebelassen announced this huge project.
00:03:58: Telekom, NVIDIA, and SAP teaming up.
00:04:01: They're building Europe's biggest AI factory in Munich.
00:04:03: It's a billion-euro investment.
00:04:05: They're putting in ten thousand NVIDIA chips.
00:04:07: It's going to have, what, point
00:04:08: five exaflops of power.
00:04:10: Yeah, half an exaflop, which, to put that in perspective, is an insane amount of processing power.
00:04:15: That puts it right up there with some of the world's most powerful supercomputers.
00:04:19: And it's specifically for industrial AI, that industrial AI cloud they're launching, aiming for early twenty twenty-six, is really targeting Europe's manufacturing base.
00:04:28: And meanwhile, the Middle East isn't just sitting back.
00:04:31: Wasim Jarkis highlighted Abu Dhabi putting three point five billion dollars into an AI native city by twenty twenty seven.
00:04:38: Their goal is one hundred percent automation of government processes.
00:04:43: That's incredibly ambitious.
00:04:45: And Sunil Abal really stressed that this isn't just about buying hardware, it's about building their own digital identity.
00:04:50: The Middle East is basically engineering its own AI foundations, the data, the language models, the compute.
00:04:56: Like the Jais model, right, has fifteen percent Arabic training data.
00:04:59: That makes a huge difference for local relevance where global models often fall short.
00:05:03: Right, localization is key.
00:05:04: And all this demand for powerful, secure, local compute is driving partnerships too.
00:05:10: Dave West and Rishi Karada were detailing how Cisco and NVIDIA are working closer together.
00:05:15: They're focusing on secure workloads, new networking gear like those Nexus ninety-one hundred switches, stuff needed for these specialized neocloud or sovereign cloud setups.
00:05:25: And it looks like businesses are trying not to repeat past mistakes.
00:05:28: Armand Ries mentioned that companies are actively diversifying their AI hardware now.
00:05:32: You know, they learn the hard way about vendor lock-in during the multi-cloud push.
00:05:37: So now with AI being so critical, they want control, flexibility, not just relying on one single provider.
00:05:42: That seems pretty smart.
00:05:43: OK, so we've talked architecture, hardware.
00:05:46: Let's switch to the front end.
00:05:47: How is AI changing how users actually interact with technology?
00:05:51: This brings us to our third cluster.
00:05:52: products, techniques, adoption.
00:05:54: Yeah, and the really big story here potentially is the death of the web page as we know it.
00:05:59: Borogur was talking about ChatGPT Atlas from OpenAI.
00:06:03: The idea is it merges search, automation, and conversation.
00:06:07: It basically replaces pointing and clicking.
00:06:09: You just tell it what you need, book me a flight, find hotels under this price, and it figures out the steps and does it.
00:06:15: That sounds incredibly smooth,
00:06:17: that integration.
00:06:18: But Yusuf Ben-Lamoud had an interesting counter argument about, well, defending your product.
00:06:23: He argues companies like Airbnb are actually resisting full integration because their unique interface, their UI is their strategic advantage, their moat.
00:06:31: If you just become an API call, you lose that brand connection, that emotional feel, you become a commodity.
00:06:37: It's a real tension, isn't it?
00:06:38: The UI moat versus the API moat.
00:06:40: Do users ultimately want that distinct brand experience or just pure seamless convenience?
00:06:45: It's a huge strategic question.
00:06:47: And that tension shows up in the tools developers are building too, like making sure AI doesn't just become boringly predictable.
00:06:54: Remy Taihang detailed this technique called verbalized sampling from Stanford and Northeastern researchers.
00:06:59: Right, and it's basically designed to fight our own biases.
00:07:02: As humans, we tend to like things that sound typical or familiar.
00:07:06: That kills creativity in AI output.
00:07:08: So verbalized sampling makes the model sort of talk through different possible answers
00:07:13: and their likelihoods, outlining probabilities. It apparently boosts the diversity, the creativity of the output by up to two X without messing up the facts.
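For the transcript, here's a minimal sketch of the idea, assuming you can send a prompt to any chat model and read back its text: ask for several candidate answers with verbalized probabilities, then sample from them instead of taking the single most typical reply. The exact prompt wording in the Stanford/Northeastern paper differs; the format and the `fake_output` below are illustrative.

```python
import random
import re

def verbalized_sampling_prompt(task: str, k: int = 5) -> str:
    """Build a prompt asking the model to verbalize a distribution of
    candidate answers with probabilities, not just one 'typical' answer."""
    return (
        f"{task}\n"
        f"Generate {k} different responses. Write each on its own line in the form\n"
        f"'<response> -- probability: <p>', where the probabilities reflect how\n"
        f"likely each response is and sum to 1."
    )

def parse_and_sample(model_output: str, rng: random.Random) -> str:
    """Parse '<response> -- probability: <p>' lines and sample one response
    according to the verbalized probabilities."""
    pairs = re.findall(r"^(.*?)\s*--\s*probability:\s*([0-9.]+)\s*$",
                       model_output, flags=re.MULTILINE)
    responses = [text for text, _ in pairs]
    weights = [float(p) for _, p in pairs]
    return rng.choices(responses, weights=weights, k=1)[0]

# Hypothetical model reply to 'Tell me a joke about coffee':
fake_output = (
    "Why did the coffee file a police report? It got mugged. -- probability: 0.6\n"
    "Espresso yourself. -- probability: 0.3\n"
    "Bean there, done that. -- probability: 0.1"
)
print(parse_and_sample(fake_output, random.Random(0)))
```

Sampling from the verbalized distribution is what restores diversity: the tail answers the model would normally suppress still get picked some of the time.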
00:07:20: That's clever.
00:07:21: And bringing that kind of output back to business value, Thomas Ross shared this playbook for AEO.
00:07:27: answer engine optimization.
00:07:28: It's like the new SEO, but focused entirely on getting your stuff into AI search results to generate qualified leads.
00:07:35: Yeah, you basically create these neat structured answer cards that AI models can easily grab and show users, and success isn't measured by clicks anymore, but by a share of answer, SOA.
00:07:45: How often does your answer get picked as the best one?
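For readers, that share-of-answer metric is just a ratio over a monitored query log. The log format below is a hypothetical example of how an AEO tool might record which source an AI engine surfaced; the field names are illustrative.

```python
def share_of_answer(results: list[dict], brand: str) -> float:
    """Share of Answer (SOA): fraction of tracked AI queries where this
    brand's content was surfaced as the answer.

    `results` is a hypothetical monitoring log, one entry per query, e.g.
    {"query": "...", "answer_source": "acme.com"}.
    """
    if not results:
        return 0.0
    wins = sum(1 for r in results if r["answer_source"] == brand)
    return wins / len(results)

log = [
    {"query": "best crm for startups",    "answer_source": "acme.com"},
    {"query": "crm pricing comparison",   "answer_source": "rival.io"},
    {"query": "crm with ai features",     "answer_source": "acme.com"},
    {"query": "top sales tools 2025",     "answer_source": "other.net"},
]
print(f"SOA: {share_of_answer(log, 'acme.com'):.0%}")  # prints "SOA: 50%"
```

The contrast with SEO is in the denominator: you count answers surfaced across AI engines, not clicks on a results page.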
00:07:47: Okay, so finally, let's tackle the fourth cluster.
00:07:50: Enterprise risk and this huge cultural challenge of actually getting humans and AI to work together effectively.
00:07:57: That Kyndryl report Kai Grinwitz mentioned, it found sixty-two percent of companies are still just playing around, stuck in the experimental phase.
00:08:05: And Gerry Barrow argues pretty convincingly that for Europe, the problem isn't the tech.
00:08:10: It's readiness.
00:08:12: It's execution.
00:08:13: It's the
00:08:13: people problem, essentially.
00:08:14: Exactly.
00:08:15: Nandan Mulakara drew a great parallel to the early days of RPA.
00:08:20: robotic process automation.
00:08:21: AI agents face the same hurdles.
00:08:23: Success comes not from trying to replace everyone, but designing for collaboration, changing the mindset.
00:08:29: If your only metric is people fired, you miss most of the value.
00:08:32: You need positive metrics, like hours saved.
00:08:34: That builds trust.
00:08:35: And trust with AI is tricky psychologically.
00:08:38: Terry Mooneer warned about this tendency to overhumanize AI.
00:08:41: You hear a smooth voice, you expect human-like judgment, but AI just finds patterns.
00:08:45: If we trust it too much, thinking it thinks like us, we misuse it.
00:08:49: We miss its unique pattern spotting strength.
00:08:51: That misplaced trust is a big risk.
00:08:53: And the risks are very real on the security side too.
00:08:57: Kashifa highlighted secure prompt engineering as a critical skill now.
00:09:00: And Bob Carver found something quite alarming.
00:09:03: One in five organizations have already had a serious cybersecurity incident because of AI-generated code.
00:09:08: One in five.
00:09:09: That's huge.
00:09:10: It really is.
00:09:10: And we also need to look beyond just the business risks.
00:09:13: Patricia Bertini shared a really important perspective from Indigenous communities.
00:09:17: They see AI potentially acting like a colonial-style arrival, a system just absorbing knowledge and identity without real consent.
00:09:25: It underscores why building culturally grounded AI, like Mongolia's Agoon AI, is so vital.
00:09:31: Yeah.
00:09:31: And that erosion of trust is happening elsewhere, too.
00:09:33: That BBC EBU study James Fletcher mentioned found forty-five percent of AI-generated news answers had significant flaws.
00:09:41: When AI gets critical stuff wrong, trust in information itself just plummets.
00:09:46: So pulling it all together, what we saw these past two weeks is really an industry trying to shift gears, moving past just experimenting.
00:09:53: Governance is now table stakes globally.
00:09:56: Massive bets are being placed on sovereign infrastructure to break cloud dependency.
00:10:00: And the focus has clearly moved to figuring out this human AI collaboration piece.
00:10:04: Which leads us to a final thought, something for you to chew on.
00:10:07: Looking long term, as AI gets better and better at tasks we thought were uniquely human, even creative ones, like making art.
00:10:14: Does human work lose its meaning, or does it just shift profoundly?
00:10:17: As Lidio Pereira suggests,
00:10:18: the real challenge here isn't just technical, it's moral, it's philosophical.
00:10:22: It's about redefining what being human actually means in an age of intelligent machines.
00:10:26: A really deep question to end on, and definitely one everyone in this field needs to think about.
00:10:31: If you enjoyed this deep dive, new episodes drop every two weeks.
00:10:34: Also check out our other editions on ICT and tech, digital products and services, cloud sustainability and green ICT, defense tech and health tech.
00:10:43: Thanks so much for tuning in, and please do subscribe and follow for more deep dives like this one.