Best of LinkedIn: Artificial Intelligence CW 17/ 18
Show notes
We curate the most relevant posts about Artificial Intelligence on LinkedIn and regularly share key takeaways. We at Frenus support ICT & Tech providers with AI ecosystem strategy by delivering independent vendor assessments, build-vs-buy analysis, and ecosystem intelligence that prevents costly missteps and strengthens competitive positioning. You can find more info here: https://www.frenus.com/usecases/ai-ecosystem-strategy-vendor-selection-partnership-due-diligence-build-vs-buy-analysis
This edition provides a comprehensive overview of the AI governance and agentic infrastructure landscape as organizations prepare for the EU AI Act’s full enforcement in August 2026. The collection highlights a critical gap where 56% of companies lack formal governance, emphasizing that compliance must shift from manual checklists to "compliance as code" woven directly into technical architecture. Experts detail the transition from simple chatbots to complex agentic systems, noting that the majority of value lies in the operational harness rather than the underlying model itself. Strategic insights warn that productivity gains often redistribute pressure rather than reducing it, requiring leaders to move beyond pilots and establish a Center of Intelligence to fix broken data foundations. Ultimately, the text argues that successful AI adoption is a leadership and culture challenge that demands clear accountability, ethical maturity, and a focus on business impact over technological hype.
This podcast was created via Google NotebookLM.
Show transcript
00:00:00: This episode is provided by Thomas Allgaier and Frenus, based on the most relevant LinkedIn posts about artificial intelligence from calendar weeks seventeen and eighteen.
00:00:09: Frenus supports ICT and tech providers with AI ecosystem strategy by delivering independent vendor assessments, build-versus-buy analysis, and ecosystem intelligence that prevents expensive mistakes and positions providers competitively.
00:00:22: You can find more info in the description.
00:00:24: Yeah, I'm so glad you're joining us for this deep dive.
00:00:26: We've got a lot to cover today.
00:00:27: We really do.
00:00:28: Yeah, and uh, I want to start by throwing a stat at you, because it completely rewired how I view this whole landscape.
00:00:35: Okay, let's hear it.
00:00:36: So, what if I told you that in the most advanced, cutting-edge AI agents operating today,
00:00:42: the actual AI model, the brain that everyone is always hyping up, only makes up about one point six percent of the system's code?
00:00:48: Wait, one point six percent? That's, I mean...
00:00:50: That's completely jarring, right? Especially because enterprises are pouring billions of dollars into obsessing over these foundational models.
00:00:58: It's like they're staring at the wrong part of the machine entirely.
00:01:01: Exactly, and that one point six percent figure really sets the perfect stage for our mission today.
00:01:07: So, for you listening, we're taking a deep dive into the top AI trends that surfaced across LinkedIn over calendar weeks
00:01:13: seventeen and eighteen,
00:01:15: specifically curated for digital transformation professionals in the ICT and tech industry.
00:01:19: Right, so no fluff.
00:01:21: We are moving way past the hype of what a chatbot can write.
00:01:26: We're getting into the mechanics of how AI is actually executed, governed, and orchestrated at real enterprise
00:01:32: scale.
00:01:40: A really hard pivot from basic experimentation into the brutal reality of architecture.
00:01:46: Data
00:01:46: readiness too?
00:01:47: Yeah,
00:01:47: and the very real cognitive toll this kind of speed is taking on human workers.
00:01:51: So let's just jump straight into the friction, but let's start with AI governance and risk.
00:01:56: Oh yeah... this is the stuff everyone loves to ignore.
00:01:58: Right. Historically, governance was treated as this nuisance,
00:02:01: you know, a policy document drafted at the very end of a project
00:02:04: just to keep the legal department off your back.
00:02:08: But the consensus now, and we saw this across multiple posts, is that governance is shifting from an afterthought to a baseline architectural requirement.
00:02:17: I mean, the runway for treating it as an afterthought is just gone.
00:02:21: Shrikant Balusani actually put out this urgent warning regarding the EU AI
00:02:26: Act.
00:02:26: Oh, I saw that one.
00:02:27: Yeah. For systems classified as high risk,
00:02:30: the full requirements take effect in August
00:02:32: twenty twenty-six,
00:02:33: which sounds far off, but it's really not.
00:02:35: No, not in enterprise cycles.
00:02:37: That gives organizations roughly three months to get their infrastructure ready for the compliance phase.
00:02:43: Just
00:02:43: three months. Wow.
00:02:44: And Balusani clarifies that this isn't just about surviving an annual audit anymore.
00:02:50: The legislation demands continuous monitoring.
00:02:52: It demands immutable audit trails.
00:02:54: You have to prove what the system did,
00:02:56: constantly.
00:02:56: Exactly. Technically enforced human-in-the-loop processes. And the penalties are massive:
00:03:01: we're talking up to thirty-five million euros or seven percent of global turnover.
00:03:05: Seven percent?
00:03:06: That is company-ending kind of money.
00:03:08: It really is.
00:03:09: But I want to pause on the geographic aspect of this, because I constantly hear US-based tech providers just sort of brushing this off.
00:03:17: Oh, totally.
00:03:17: They think they're safe.
00:03:18: Right, they assume because they're in North America...
00:03:24: But Matthew DiMotto completely dismantled that assumption.
00:03:28: He pointed directly to Article 2 of the Act.
00:03:30: Ah,
00:03:30: Article 2!
00:03:31: That's the trapdoor for global tech?
00:03:33: It really is.
00:03:34: DiMotto explains there is absolutely no exemption for U.S.
00:03:37: companies if the output of an AI system reaches an end user in the European Union.
00:03:43: So you're running a predictive model on a server in, say, Ohio,
00:03:47: right, but a bank teller in Berlin uses its output to approve a loan.
00:03:52: Your entire system is in scope.
00:03:54: The technical debt required to comply with
00:03:56: that is just staggering.
00:03:57: I mean, you're essentially asking developers to bolt brakes onto a car while it's doing one hundred miles an hour down the highway.
00:04:03: Right.
00:04:04: So how do organizations practically do this?
00:04:05: How do you achieve continuous monitoring without completely killing innovation?
00:04:10: Well,
00:04:10: this is where Philip Braun and Venkat Chaturie introduced a concept that fundamentally changes the operating model.
00:04:17: They call it compliance as code.
00:04:19: Compliance as code.
00:04:20: I like the sound of that.
00:04:21: Yeah. Chaturie argues that compliance is no longer a policy problem that sits in a PDF somewhere.
00:04:27: It's a workflow problem.
00:04:28: It has to live inside the actual code base.
00:04:32: So what does that look like,
00:04:33: practically?
00:04:33: Okay,
00:04:33: let's visualize it.
00:04:35: If a high-risk AI system makes a decision, say it flags a financial transaction, you can't manually audit that three months later if the decision pathway wasn't recorded in real time.
00:04:46: Right, the data is already gone or changed.
00:04:48: Exactly.
00:04:49: So
00:04:49: compliance as code means developers are writing hard triggers right into the software.
00:04:53: If the AI's confidence score drops below a certain threshold, the code automatically forks that decision into a human review queue.
00:05:00: Oh, that's smart!
00:05:01: And it logs the entire state of the machine in an unalterable database,
00:05:05: right then and there.
00:05:06: If you don't build that auditability into the architecture from day one, proving human oversight is just mathematically impossible.
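A minimal sketch of what such a hard trigger could look like in practice. Everything here is illustrative rather than taken from the episode: the threshold value is hypothetical, and a hash-chained in-memory list stands in for an immutable audit database.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical policy value; a real threshold would come from a risk assessment.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Decision:
    transaction_id: str
    verdict: str
    confidence: float

audit_log: list[dict] = []               # stands in for an append-only audit store
human_review_queue: list[Decision] = []  # decisions forked for human sign-off

def record(entry: dict) -> None:
    """Append-only log: each entry hashes its predecessor, so tampering is detectable."""
    prev = audit_log[-1]["hash"] if audit_log else "genesis"
    payload = json.dumps(entry, sort_keys=True)
    entry["prev"] = prev
    entry["hash"] = hashlib.sha256((prev + payload).encode()).hexdigest()
    audit_log.append(entry)

def route(decision: Decision) -> str:
    """The hard trigger lives in the software itself, not in a policy PDF."""
    record({"ts": time.time(), **asdict(decision)})  # log state right then and there
    if decision.confidence < CONFIDENCE_THRESHOLD:
        human_review_queue.append(decision)          # fork into the human review queue
        return "human_review"
    return "auto_approved"
```

A low-confidence decision never reaches production on its own; the fork and the log entry happen in the same code path, which is the whole point of moving compliance into the workflow.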
00:05:13: Which naturally leads to the question: who actually owns this mechanism? Because legal understands policy, but they definitely don't write code.
00:05:22: Right, they're not in the repos.
00:05:23: And IT writes the code, but they're entirely focused on feature delivery and sprints.
00:05:28: So Alden Meller brings up this fascinating structural observation.
00:05:32: He says quality assurance,
00:05:33: QA, is actually the most perfectly positioned function to own AI governance.
00:05:37: You
00:05:38: know, that makes complete sense when you think about it.
00:05:40: QA already handles impact assessments,
00:05:43: bias testing, risk policies for traditional software. They literally sit right at the gate between development and production.
00:05:49: Exactly. Yet nobody is handing them the reins or the budget to govern AI models.
00:05:56: Legal points to IT.
00:05:57: IT moves on to the next sprint.
00:05:59: QA is just left testing for basic software bugs while the AI is behaving completely unpredictably,
00:06:05: which really exposes a much deeper systemic failure,
00:06:08: because you can't encode immutable audit trails or automated compliance triggers
00:06:12: if the underlying infrastructure is just a chaotic mess. You can't govern what you cannot see.
00:06:17: Yeah, and that brings us perfectly to our second theme: Enterprise AI Execution and the Data Foundation.
00:06:24: And there's a stat here from Lyra Shramm that should honestly keep every tech executive awake at night.
00:06:30: I think I know which one you're talking about.
00:06:32: Probably. After all the hype, all the investment, sixty percent of organizations deploying AI still report zero enterprise-wide EBIT impact.
00:06:40: Zero improvement to their earnings before interest and taxes?
00:06:44: Absolutely zero.
00:06:46: Billions of dollars in compute, licensing, consulting, and it's resulting in zero financial gain on the balance sheet, right?
00:06:52: And Faisal Hoque attributes this directly to a vacuum in leadership.
00:06:56: He says executives are basically treating AI implementation like a standard IT vendor
00:07:01: upgrade.
00:07:01: Just buy a license and hand it off.
00:07:02: Yeah, they buy it, hand it to the tech team,
00:07:04: and wait for the magic to happen.
00:07:06: Hoque actually contrasted this with Walmart's recent moves. Walmart pulled out of a massive deal with OpenAI so they could pivot and build their own internal agentic systems.
00:07:15: That is such a strong signal of active, in-the-weeds leadership.
00:07:19: They realized AI isn't just software you rent, it's fundamental business transformation. You have to own it!
00:07:26: Exactly.
00:07:27: And the reason renting off-the-shelf software isn't moving the financial needle?
00:07:31: It comes down to data.
00:07:32: Doug Shannon made a great point that AI doesn't break enterprise systems, it exposes them.
00:07:37: Oh... that
00:07:37: was so good, because if your company relies on scattered spreadsheets, siloed departments, undocumented legacy databases,
00:07:46: an AI model is just going to hallucinate
00:07:48: based on that fragmented reality. The
00:07:50: AI is basically a magnifying glass for your organizational dysfunction.
00:07:54: Precisely. Shannon suggests organizations have to build a center of intelligence before you even prompt an LLM.
00:08:00: You have to structure, govern, and correlate your internal data.
00:08:03: Yeah.
00:08:05: And Sandra Vuller echoed this specifically for small and medium enterprises.
00:08:09: She pointed out that for SMEs, getting their cloud ERP, their enterprise resource planning systems, migrated and unified is just non-negotiable homework.
00:08:18: Let me frame it for you listening... Cloud ERP centralizes a company's financials and supply chain operations into one living database.
00:08:28: Without that, buying an advanced AI model to run on siloed, dirty data is like buying a Formula One car and trying to drive it on a gravel road.
00:08:37: Yeah, it doesn't matter how fast the engine is.
00:08:38: If the wheels can't grip anything, the engine will literally just tear the car apart.
00:08:43: Exactly. And
00:08:44: Borger had some really sharp criticism regarding this exact dynamic.
00:08:48: He warned that leaders need to stop performing AI fluency.
00:08:51: Performing fluency?
00:08:53: What does that actually look like in the wild?
00:08:55: Well, it looks like a manager bookmarking fifteen different AI tools every week, skimming headlines about the latest models, and just dropping buzzwords in strategy meetings while completely avoiding the grueling work of data cleanup.
00:09:06: Right?
00:09:06: The real
00:09:07: work.
00:09:07: Yeah.
00:09:08: Borger advises abandoning the scattergun approach.
00:09:11: Stop playing with everything.
00:09:12: Pick one specific high-volume workflow, go incredibly deep, clean the specific data feeding that workflow, apply the model, and then actually measure the output.
00:09:21: Because once you finally do, once the data is clean and the architecture is secure, we can start talking about AI as an autonomous worker that completes tasks.
00:09:36: Which leads right into our third theme: Agentic AI and Developer Workflows.
00:09:40: This is the big shift from AI as a tool to AI as an actor.
00:09:44: Yes. Maggie Hott shared a very tangible example of this.
00:09:48: She integrated OpenAI Codex into her workflow to tackle one hundred and seventy unread emails. One seventy!
00:09:54: Yeah.
00:09:55: And the agent didn't just write generic summaries.
00:09:57: It analyzed the context of the threads, pulled relevant data from the company's internal documents, checked her calendar to propose realistic meeting times, and drafted one hundred and seventy highly specific replies.
00:10:08: That's incredible!
00:10:08: And she got to inbox zero while she was literally walking on her treadmill.
00:10:12: See, to understand how a system actually achieves that level of autonomy, we have to look under the hood at the architecture.
00:10:17: Priyanka S mapped out the anatomy of these AI agents, identifying fifteen distinct layers.
00:10:22: Fifteen?
00:10:23: Yeah, ranging from raw compute and database layers all the way up through memory management, tool routing, and orchestration. And Gabriel Millian presented a similar framework outlining seven layers of modern AI ecosystems.
00:10:38: Both of those frameworks expose the same secret, which takes us back to that hook I mentioned at the top of the deep dive.
00:10:45: It
00:11:00: can process information, it can reason.
00:11:03: But it has no hands to type, no eyes to read a database, and no short-term memory to remember what it just did five minutes ago.
00:11:18: This
00:11:19: is where that one point six percent stat comes from.
00:11:21: And the other ninety-eight point four percent,
00:11:28: which is over half a million lines of custom code, was what they called the harness.
00:11:32: The harness. Explain how that functions in this context?
00:11:35: So the harness, it's the orchestration layer that allows the model to access a web browser, query a database, store variables in memory, and impose guardrails so it doesn't accidentally delete a production server.
00:11:51: Right, kind of important. Very
00:11:52: important.
00:11:53: Steve Nari highlighted that building this harness is actually the competitive moat for enterprises, because anyone can run an API call into smart models, but building that harness is incredibly difficult.
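At its core, a harness is a loop that routes the model's proposed actions through vetted tools, memory, and guardrails. Here is a rough sketch of that pattern; the tool names, blocked patterns, and action format are all invented for illustration and not taken from any framework mentioned in the episode.

```python
# Minimal harness sketch: the model proposes actions, and the harness executes
# them through an allow-listed tool registry with guardrails and a short-term memory.
from typing import Callable

def search_web(query: str) -> str:   # hypothetical tool
    return f"results for {query!r}"

def query_db(sql: str) -> str:       # hypothetical tool
    return f"rows for {sql!r}"

TOOLS: dict[str, Callable[[str], str]] = {
    "search_web": search_web,
    "query_db": query_db,
}

# Guardrail: refuse obviously destructive actions before they reach any tool.
BLOCKED_PATTERNS = ("drop table", "rm -rf")

def harness_step(action: dict, memory: list[str]) -> str:
    """Execute one model-proposed action; the model never touches tools directly."""
    name, arg = action["tool"], action["arg"]
    if name not in TOOLS:
        return f"error: unknown tool {name!r}"
    if any(p in arg.lower() for p in BLOCKED_PATTERNS):
        return "blocked: guardrail rejected destructive action"
    result = TOOLS[name](arg)
    memory.append(result)  # short-term memory the next step can read
    return result
```

The model's "hands and eyes" are exactly these mediated tool calls, which is why the harness, not the model, ends up being most of the code.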
00:12:09: And Brie Kusher-Pandy expanded on how Anthropic structures this for developer workflows.
00:12:15: He explained you shouldn't view Claude Code as a single digital assistant.
00:12:19: You have to view it as an entire engineering team operating in a loop.
00:12:22: Yeah. It starts with the core instruction file, the CLAUDE.md file, which acts as the project's central nervous system.
00:12:29: Then you stack skills, event-driven hooks, and subagents on top of that.
00:12:33: Exactly. Let's break that down.
00:12:35: A developer assigns a task.
00:12:36: The main agent reads the CLAUDE.md file
00:12:39: to understand the rules.
00:12:41: It delegates writing the code to a subagent. Once it's written, an event-driven hook automatically triggers another subagent to run tests.
00:12:49: If the tests fail, the loop sends it back to the first subagent to fix it.
00:12:54: It just iterates continuously without a human ever touching the keyboard.
00:12:57: That's wild.
00:12:59: But the challenge then becomes: how does a human manager oversee dozens of these autonomous loops without just losing their mind?
00:13:06: Right.
00:13:07: Shubham Saboo pointed to a new open-source orchestrator called Symphony, which offers a brilliant solution.
00:13:13: Symphony uses Linear tickets as its control plane.
00:13:16: And for those outside software development, Linear is a popular project management tool where developers track bugs and feature requests via tickets.
00:13:23: So Symphony treats them as the command center.
00:13:26: Instead of a human trying to monitor five different AI terminal windows, the orchestrator reads the Linear ticket, dispatches the work to the appropriate AI agent, manages the context window, and automatically restarts the agent if it crashes.
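In the abstract, that dispatch-and-restart pattern looks something like the following. This is a generic sketch, not Symphony's actual code: the ticket fields, the flaky stub agent, and the restart limit are all invented for illustration.

```python
# Generic sketch of a ticket-driven orchestrator: read a ticket, dispatch it to
# an agent, and automatically restart the agent if it crashes mid-task.
def flaky_agent(ticket: dict, attempts: dict) -> str:
    """Stub agent that crashes on its first attempt at any ticket."""
    attempts[ticket["id"]] = attempts.get(ticket["id"], 0) + 1
    if attempts[ticket["id"]] == 1:
        raise RuntimeError("agent crashed")
    return f"done: {ticket['title']}"

def orchestrate(tickets: list[dict], max_restarts: int = 3) -> list[str]:
    attempts: dict[str, int] = {}
    results: list[str] = []
    for ticket in tickets:                  # the ticket queue is the control plane
        for _ in range(max_restarts):
            try:
                results.append(flaky_agent(ticket, attempts))
                break                       # ticket resolved, move to the next one
            except RuntimeError:
                continue                    # crash detected: restart the agent
        else:
            results.append(f"escalated: {ticket['title']}")  # human takes over
    return results
```

The human never watches terminal windows; they watch the ticket board, and only tickets that exhaust their restarts come back to them.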
00:13:39: It basically
00:13:40: functions as a digital middle manager for your digital workforce.
00:13:44: The scale of that automation is just breathtaking.
00:13:47: But it immediately raises a massive secondary issue:
00:13:51: if we have thousands of digital middle managers running infinite loops of subagents, the computational cost must be astronomical.
00:13:57: Oh,
00:13:57: absolutely.
00:13:58: And the pressure on the human workers who have to deploy all this must be breaking them, which shifts us to our final cluster: Infrastructure, Verticalization, and the Human Element,
00:14:09: because the physics of enterprise adoption are entirely dictated by the cost of compute.
00:14:14: If the agents bankrupt you, you can't scale them.
00:14:17: Right, but Armand Ruiz brought some incredibly disruptive news on the cost front.
00:14:22: The release of DeepSeek V4 has completely altered the unit economics of AI inference.
00:14:27: Yeah, DeepSeek is matching the performance of top closed-source models but doing it at roughly one twentieth the price.
00:14:33: One twentieth?
00:14:34: That changes the math for every CTO on the planet.
00:14:36: It really does.
00:14:37: If an enterprise budgeted for four months of autonomous agent operations, a ninety-five percent reduction in inference costs means that same budget now lasts over six years!
00:14:48: It takes agentic workflows from luxury experiments and turns them into a baseline utility.
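The back-of-the-envelope math behind that claim is easy to verify: at one twentieth the price, the same spend buys twenty times the runtime. A quick check (the four-month budget is the episode's hypothetical, not real pricing data):

```python
# Verify the budget arithmetic: a 95% inference-cost cut means 20x the runtime.
budget_months = 4          # original runway at the old inference price
cost_reduction = 0.95      # "one twentieth the price" = 95% cheaper

new_runway_months = budget_months / (1 - cost_reduction)  # about 80 months
new_runway_years = new_runway_months / 12                 # about 6.7 years
```

Eighty months is roughly six and two-thirds years, which matches the "over six years" figure quoted in the episode.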
00:14:54: And governments are definitely recognizing this infrastructural arms race.
00:14:58: Nicolas Babin noted Europe isn't merely focused on regulating AI anymore.
00:15:02: They're heavily funding the physical hardware.
00:15:05: Yeah, they're investing two point six billion euros into AI factories and exascale supercomputers like the Jupiter project in Germany.
00:15:13: They are chasing compute sovereignty, because relying entirely on North American server farms for the cognitive infrastructure of your entire economy is an existential risk.
00:15:22: Europe is building the capacity to run these massive agentic models on their own soil.
00:15:26: And as that compute gets cheaper and faster, the enterprise software built on top of it is heavily verticalizing.
00:15:32: It's no longer just generic chatbots.
00:15:34: Simon Taylor pointed this out with Anthropic's recent aggressive move into the financial sector.
00:15:38: They launched ten finance-specific agents right out of
00:15:52: and month-end financial close operations.
00:15:54: They also partnered in a massive joint venture with Blackstone to embed these tools directly into private equity workflows.
00:16:02: And Denise Holland Dresser highlighted similar vertical integration on an even broader scale.
00:16:07: GPT-five point five is now living natively inside Microsoft Excel and Google Sheets.
00:16:13: It doesn't sit in a separate browser tab anymore.
00:16:15: No, it sits right inside the spreadsheet.
00:16:16: It writes macros, audits financial formulas, pivots data autonomously,
00:16:21: which is amazing, but this is where we really have to confront the reality of the human worker.
00:16:27: With agents in our spreadsheets, our code bases, and our emails, are human workers actually being freed up?
00:16:32: Or are we just getting crushed by the speed of it all?
00:16:44: It's profound cognitive fatigue.
00:16:47: When an employee is actively doing a task, like writing a report or coding a feature, they are in a flow state.
00:16:54: But when AI takes over the execution, the human is relegated to the role of an editor or supervisor, constantly monitoring the output of an incredibly fast machine, verifying decisions made by agents, and context-switching between different tools.
00:17:10: It just exhausts our cognitive capacity.
00:17:12: BCG found that this supervisory role actually increases decision fatigue and leads to a spike in workplace errors,
00:17:19: because reviewing code written by a machine requires a totally different type of mental energy than writing it yourself.
00:17:25: Exactly. And Marc Beierschoder issued a stark warning about this.
00:17:29: He stated that AI productivity gains do not reduce pressure on an organization.
00:17:34: They redistribute it.
00:17:35: The bottleneck simply moves down the pipeline.
00:17:37: Yeah.
00:17:37: If your new agentic developer team writes software five times faster, your human QA tester still has to review and deploy that code five times faster.
00:17:46: The speed of the AI creates unprecedented demand on the human systems surrounding
00:17:49: it.
00:17:49: So
00:17:50: how do professionals actually survive this?
00:17:52: Pascal Bornet and Cassie Kozyrkov outline a profile for the worker who will thrive in this environment.
00:17:58: Yeah.
00:17:59: Bornet argues pure specialists are highly exposed, because AI can learn narrow, rule-based domains instantly... But shallow generalists are also exposed, because AI can summarize broad topics better than any human.
00:18:12: So who wins?
00:18:13: The survivors, according to Bornet, are the translators.
00:18:17: These professionals possess the deep technical knowledge of a specialist but the systemic vision of a generalist.
00:18:22: They understand what the business problem is, and they apply deep human judgment to bridge that gap.
00:18:33: She said that in the agentic era, a professional's competitive edge is no longer measured by their physical output or how many hours they grind.
00:18:40: The new metric of success... is deliberate attention.
00:18:43: Deliberate attention?
00:18:44: I love that!
00:18:45: It's the ability to know exactly where to direct your focus when machines are doing the heavy lifting.
00:18:50: In this era, a leader's edge isn't doing more work, it's steering the ship.
00:18:55: Steering the ship becomes infinitely more valuable than rowing the oars.
00:18:59: We have unpacked a tremendous amount of insight today. Bringing it all back to your own AI journey, I want to leave you with a final provocative thought to
00:19:07: mull over.
00:19:08: Yeah, from Mike Kentz,
00:19:09: right?
00:19:09: Mike Kentz had this idea that when we look at deploying these powerful systems, we shouldn't evaluate them merely through the lens of a compliance checklist or an efficiency metric.
00:19:19: It's about maturity.
00:19:20: Exactly. Evaluating our use of AI through the lens of organizational maturity.
00:19:25: The question is not just, did the agent finish the task faster?
00:19:29: The deeper question we must ask ourselves is: did deploying this tool force us to deliver the most developed, highly functioning version of ourselves alongside it?
00:19:38: That
00:19:39: framing demands that technology serve our elevation, rather than us merely servicing its automation.
00:19:57: Thank you for spending your time with us on this deep dive.
00:20:00: Subscribe to the feed, protect your cognitive bandwidth, and we will see you in the next one!