Best of LinkedIn: NVIDIA GTC 2026

Show notes

We curate the most relevant posts about Digital Transformation & Tech on LinkedIn and regularly share key takeaways.

In this edition, the NVIDIA GTC 2026 conference marked a pivotal transition from AI experimentation to a new industrial revolution of intelligence production. Central to this shift is the Vera Rubin platform, a next-generation architecture designed to handle the massive processing demands of agentic AI and large-scale inference. NVIDIA also introduced NemoClaw and OpenClaw, positioning these as the essential "operating systems" for autonomous digital workers that can plan and execute complex tasks. Keynote speaker Jensen Huang redefined the economy of computing, describing data centers as "AI factories" and identifying tokens as the primary unit of value. Beyond hardware, the event showcased physical AI through advanced robotics and teased future infrastructure for space-based computing. Ultimately, the sources suggest that success in this era requires integrating energy efficiency, structured data, and human character into a cohesive global ecosystem.

This podcast was created via Google NotebookLM.

Show transcript

00:00:00: This episode is provided by Thomas Allgeier and Frenus, based on the most relevant LinkedIn posts about NVIDIA GTC

00:00:06: twenty twenty-six.

00:00:07: Frenus supports enterprises with market and competitive intelligence: decoding emerging technologies, customer insights, regulatory shifts, and competitor strategies,

00:00:17: so product teams and strategy leaders don't just react but shape the future of AI.

00:00:21: And man, shaping the future of AI is exactly what we need to talk about today.

00:00:26: Because the sheer scale of what's happening right now is kind of hard to comprehend.

00:00:30: Yeah!

00:00:30: It really is, and that's our mission for you

00:00:33: in this deep dive: we are stepping into a high-level strategy room, basically unpacking the absolute top trends and insights from NVIDIA GTC. And we're curating this specifically from the feeds of ICT and tech industry professionals over on LinkedIn.

00:00:49: So no fluff, just real signal from the noise.

00:00:51: Exactly!

00:00:52: If you are listening to this, you really need to understand why the fundamental unit of value in tech is shifting right beneath our feet.

00:01:00: We literally can't discuss future AI software or robotics without first looking at the hardware layer.

00:01:06: Because it's a tectonic shift.

00:01:08: I mean, just imagine standing in front of the demand pipeline so massive that it fundamentally breaks how we measure industrial output.

00:01:15: Yeah, one trillion dollars, right?

00:01:18: That is the projected order pipeline for NVIDIA's new Blackwell and Vera Rubin platforms through twenty twenty-seven, which

00:01:24: is just a staggering number to try and wrap your head around.

00:01:27: It is. David Daniszewski and Charles K. were actually highlighting this on their feeds this week.

00:01:32: It's just one trillion dollars in pure, raw infrastructure demand.

00:01:36: Wow,

00:01:37: okay.

00:01:37: So why the sudden, massive explosion in infrastructure?

00:01:42: Well, to understand that we have to look at what Karen Yu and McHugh Suley are calling the inference inflection.

00:01:48: The inference inflection.

00:01:48: Yeah.

00:01:49: So for years the entire industry was just obsessed with the training phase, right? Like pouring oceans of data into these massive GPU clusters just to teach the AI how to think.

00:01:58: Right!

00:01:59: The learning phase

00:02:00: Exactly.

00:02:01: But that phase, well, it's maturing.

00:02:03: We've officially moved from the era of training models to the era of executing them at scale.

00:02:08: Which completely changes the physical footprint of the whole tech industry. You know, when we historically thought about industrial revolutions?

00:02:16: We always pictured smokestacks, right? Assembly lines stamping metal into car doors.

00:02:21: Yeah, very physical, exactly.

00:02:23: You had a clear physical input and a physical output.

00:02:27: And then came the digital age, and we built these traditional data centers.

00:02:32: But looking at what Laura Sophie Bother, Adrian McDonald, and Santosh Pandey pointed out from the GTC floor, those traditional data centers are kind of officially dead.

00:02:41: They're completely dead.

00:02:42: I mean, a traditional data center was essentially just a massive parking garage.

00:02:46: Your data just sat there, waiting to be retrieved.

00:02:49: Right?

00:02:49: It was passive.

00:02:50: totally passive.

00:02:51: But the new model, these AI factories that Jensen Huang describes, they don't store data.

00:02:56: They actually manufacture intelligence.

00:02:57: Oh

00:02:58: like.

00:02:59: So if the old data centers were parking garages, these new AI factories are continuous assembly lines.

00:03:04: Yes. The smokestacks are now liquid cooling towers... the raw material is electricity, and the actual widgets rolling off the belt are tokens.

00:03:12: Exactly!

00:03:13: And the whole metric of economic value has shifted.

00:03:15: We aren't talking about gigabytes anymore.

00:03:17: The metrics that matter now are tokens per watt and cost per token.
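Those two factory metrics are simple to compute. Here is a minimal Python sketch of the economics; all the throughput, power-draw, and electricity-price numbers are invented assumptions for illustration, not NVIDIA figures:

```python
# Toy "AI factory" economics: tokens per watt and electricity cost per token.
# All numbers below are illustrative assumptions, not NVIDIA figures.

def tokens_per_watt(tokens_per_second: float, power_draw_watts: float) -> float:
    """Throughput efficiency: tokens produced per second for each watt drawn."""
    return tokens_per_second / power_draw_watts

def cost_per_million_tokens(power_draw_watts: float,
                            tokens_per_second: float,
                            electricity_usd_per_kwh: float) -> float:
    """Electricity cost of manufacturing one million tokens."""
    seconds_needed = 1_000_000 / tokens_per_second
    kwh_used = power_draw_watts * seconds_needed / 3_600_000  # watt-seconds -> kWh
    return kwh_used * electricity_usd_per_kwh

# A hypothetical rack: 50,000 tokens/s at a 10 kW draw, $0.08/kWh power.
print(tokens_per_watt(50_000, 10_000))                          # 5.0
print(round(cost_per_million_tokens(10_000, 50_000, 0.08), 4))  # 0.0044
```

Under these made-up assumptions, a million tokens costs a fraction of a cent in electricity, which is why the conversation keeps circling back to power.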

00:03:21: But manufacturing those tokens so incredibly fast creates a physical bottleneck, right?

00:03:26: A massive

00:03:26: one, because when a large language model is generating an answer, spitting out words token by token, it absolutely starves for memory bandwidth.

00:03:36: Okay let me see if I can break this down mechanically for a second.

00:03:39: If you are spitting out tiny pieces of information sequentially, standard memory, which usually grabs large, heavy chunks of data all at once, kind of slowly, would create a massive traffic jam.

00:03:51: You need a system that can continuously fire off tiny data bits at lightning speed.

00:03:56: That is the exact mechanical problem that NVIDIA solved, and honestly it's why their twenty-billion-dollar acquisition of Groq was basically the technical unlock of the entire conference.

00:04:07: Oh wow!

00:04:07: Yeah Anastasia Nisova and Padma Pavan Venomatla really detailed this on LinkedIn.

00:04:12: They explained how NVIDIA integrated Groq's LPU technology to basically disaggregate the entire inference process,

00:04:19: meaning they split it up to bypass those traditional bandwidth ceilings.

00:04:23: Precisely, yeah.

00:04:25: So NVIDIA handles the heavy compute, which is called the prefill phase.

00:04:28: Okay, that's where it digests your massive, complex prompt and does the deep mathematical lifting. But once it understands the request, it hands the task over to Groq's

00:04:39: LPU, which handles the high-speed decode.

00:04:42: Exactly, because it's an entirely different architecture that just fires out those tokens without hitting a wall.

00:04:47: It's basically like having an industrial crane lift a massive boulder of raw material onto the factory floor and then instantly dropping it onto a high-speed, frictionless conveyor belt, which shoots the finished widgets out the door.

00:05:00: That is a great way to picture it.
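The crane-and-conveyor picture maps onto a simple two-stage pipeline. A minimal Python sketch of disaggregated inference follows; the class names are hypothetical stand-ins for illustration, not any real NVIDIA or Groq API:

```python
# Disaggregated inference, schematically: a compute-heavy prefill stage
# digests the prompt once, then a separate decode stage streams tokens.
# Hypothetical stand-in classes, not a real NVIDIA/Groq interface.
from typing import Iterator

class PrefillEngine:
    """The 'crane': heavy compute that digests the whole prompt at once."""
    def run(self, prompt: str) -> dict:
        # A real system would build a KV cache here; we fake the state.
        return {"kv_cache": prompt.split(), "position": 0}

class DecodeEngine:
    """The 'conveyor belt': fires out tokens one at a time at high speed."""
    def stream(self, state: dict, max_tokens: int) -> Iterator[str]:
        for i in range(max_tokens):
            yield f"token_{i}"  # a real decoder samples from the model

def generate(prompt: str, max_tokens: int = 5) -> list[str]:
    state = PrefillEngine().run(prompt)  # deep mathematical lifting, done once
    return list(DecodeEngine().stream(state, max_tokens))  # fast streaming

print(generate("explain AI factories", max_tokens=3))
```

The design point is the handoff: the expensive prompt digestion happens once, and the bandwidth-bound token stream runs on hardware specialized for it.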

00:05:01: And that manufacturing of tokens connects directly to what they are actually doing. Because all this massive inference power is fueling our transition from generative AI chatbots to agentic AI.

00:05:14: Right, digital workers!

00:05:15: Exactly. We're seeing OpenClaw having its Linux moment.

00:05:19: Jensen Huang actually stated that every company needs an OpenClaw strategy now, because agents are becoming the new operating system for the enterprise.

00:05:28: Yeah, Ronald Van Loon, Michael Strickland, and Stephanie Godos were all discussing that shift.

00:05:32: It's moving from SaaS to GaaS, right?

00:05:34: Yes

00:05:35: Irithyth Kumar SS and Julian Medonca highlighted this perfectly.

00:05:39: It's the shift from software as a service to generative, or agentic, AI as a service,

00:05:47: where instead of you clicking the buttons and pulling the levers, you just define the goal and the AI agent executes all the steps

00:05:52: entirely in the background.

00:05:54: Okay, but I have to push back on this enthusiasm for fully autonomous agents for a second.

00:05:58: Oh sure, because Vivek Tandipani brought up a really good point about this.

00:06:01: If these agents are operating in multi-step execution loops, a tiny error on, like, step two can compound massively before a human ever detects it by step twenty.

00:06:13: Aren't we building a massive, terrifying control problem?

00:06:16: Oh!

00:06:16: It's a huge concern.

00:06:17: Like, how do you govern something that executes faster than human oversight?
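The compounding worry is easy to put in numbers. A minimal sketch, assuming each step of an agent loop succeeds independently with the same probability (the 99% figure is an illustrative assumption):

```python
# If each step succeeds with probability p, the chance an n-step agent
# loop stays fully on track is p**n -- it decays exponentially with depth.
# The 99% per-step reliability is an illustrative assumption.

def loop_success_probability(p_step: float, steps: int) -> float:
    """Probability that every step in the loop succeeds."""
    return p_step ** steps

for steps in (2, 10, 20):
    print(steps, round(loop_success_probability(0.99, steps), 3))
# 2 -> 0.98, 10 -> 0.904, 20 -> 0.818
```

Even at 99% per-step reliability, roughly one in five twenty-step runs goes off track somewhere, which is the oversight gap the speakers are worried about.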

00:06:21: Well, that is the exact tension the enterprise market is wrestling with right now.

00:06:25: Because you're right, agents have dangerous freedoms.

00:06:28: They can run code, they can access payroll files and send emails,

00:06:31: which is a compliance nightmare

00:06:33: A total nightmare.

00:06:34: And that's why the enterprise security layer, NemoClaw, was such a critical announcement.

00:06:38: Premdurendes, Axel Ditman, and Sanjay Basu were all over this on LinkedIn.

00:06:43: So what does NemoClaw actually do to fix it?

00:06:46: It provides essential out-of-process sandboxing, policy routing, and privacy guardrails.

00:06:52: Prem Noreen actually noted that you can't just throw an agent in a traditional Docker container and call it safe.

00:06:58: Right, because the container protects the computer but doesn't govern what the agent decides to do inside of the network.

00:07:04: Exactly!

00:07:05: NemoClaw acts as an external enforcement engine, so enterprises can deploy these agents safely without them going rogue.
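To make the idea of an external enforcement engine concrete, here is a toy policy gate that every proposed tool call must pass before it runs. The tool names and rules are invented for illustration; this is not the actual NemoClaw API:

```python
# Toy external policy gate: the agent proposes a tool call, and a separate
# enforcement layer decides whether it may run. Invented names and rules,
# purely illustrative of the out-of-process guardrail idea.

ALLOWED_TOOLS = {"search_docs", "draft_email"}   # allowlist policy routing
BLOCKED_TERMS = ("payroll", "secrets")           # privacy guardrail

def policy_gate(tool: str, argument: str) -> bool:
    """Return True only if the proposed call passes every policy check."""
    if tool not in ALLOWED_TOOLS:
        return False  # e.g. arbitrary code execution is simply not routable
    return not any(term in argument.lower() for term in BLOCKED_TERMS)

print(policy_gate("search_docs", "GTC keynote recap"))  # True
print(policy_gate("run_code", "print('hi')"))           # False: tool not allowed
print(policy_gate("search_docs", "payroll_2026.csv"))   # False: guardrail hit
```

The key design choice is that the gate lives outside the agent's own loop, so a compromised or confused agent can't simply decide to skip it.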

00:07:12: Even with the security layer solved, though, there's still a massive UX reality check here.

00:07:17: Valias Cajal noted something interesting about this.

00:07:19: What

00:07:19: did he say?

00:07:20: He pointed out that while capabilities might get agents deployed in an enterprise, it's personality and trust that keep them used.

00:07:27: Oh!

00:07:27: That's incredibly true

00:07:28: because most agent demos right now are basically just command lines dressed up as chat interfaces.

00:07:35: If the digital worker doesn't have a bedside manner, humans just reject it.

00:07:40: yeah.

00:07:40: And that trust becomes even more critical when we start talking about these autonomous systems stepping out of the web browser and interacting with the physical world.

00:07:48: Okay, yes. Because building on the idea of agents executing digital tasks, what happens when they actually enter our physical space?

00:07:56: Well, Physical AI has officially arrived.

00:07:59: Pascal Bornet and Nidhi Warankar were highlighting Disney's Olaf robot... Did you see this?

00:08:04: I did!

00:08:05: The little walking robot

00:08:06: right?!

00:08:06: Yeah

00:08:07: but the crazy thing is it wasn't manually animated.

00:08:10: It was trained in a physics simulation first

00:08:13: Using NVIDIA's Isaac Sim and the Newton physics engine.

00:08:17: Exactly

00:08:18: It had to literally learn how to balance and walk in a digital twin before they ever brought it to life,

00:08:24: which is mind-blowing.

00:08:25: But Tom Emmerich had a really grounded take on this.

00:08:28: He called it robot theater.

00:08:29: Robot

00:08:29: Theater?

00:08:30: Yeah, he was discussing how these cute, entertainment-focused robots are a necessary stepping stone for marketing and consumer acceptance.

00:08:38: Oh I see

00:08:38: Like, before we see wide-scale industrial deployment of humanoid robots in warehouses,

00:08:44: We need the public to get comfortable with them.

00:08:46: That makes total sense, yeah.

00:08:48: But to actually support that wide-scale deployment at the edge, you need critical infrastructure.

00:08:54: And that is where telecom is radically evolving.

00:08:56: Yeah Jordan Barr and CeCe Chong were discussing how cell towers are becoming robotic AI radios.

00:09:02: Right

00:09:02: creating an AI grid.

00:09:04: Telcos are totally transforming from just selling bandwidth (dollars per gigabyte) to distributed AI compute grids that sell inference.

00:09:14: So if we draw a biological analogy here for the listener... Oh, let's hear it!

00:09:18: If the AI factory we talked about earlier is the brain and the agentic software is the mind, then the telecom AI-RAN networks are basically becoming the nervous system.

00:09:28: Oh, that's perfect!

00:09:29: Instantly transmitting reflexes down to the muscles, which are the physical AI robots at the edge?

00:09:34: Yes

00:09:35: because as Guy Correlage pointed out, AI isn't going to break telecom networks through bandwidth.

00:09:42: Tokens are actually tiny, data-wise...

00:09:44: Right

00:09:45: But what it will stress is latency, the uplink, and this constant, always-on behavior.

00:09:50: You need that nervous system to be lightning fast.

00:09:53: But synthesizing all of this, you can design the best AI grid and the most advanced robots in the world...

00:09:58: They still require massive amounts of power, right?

00:10:00: Yeah. And that is the absolute bottleneck of this intelligence explosion.

00:10:05: Yeah, Maria Cortez noted that while AI is scaling at silicon speed, power is only scaling at grid speed,

00:10:12: and grid speed is agonizingly slow.

00:10:14: Extremely slow which is why some players are just bypassing the traditional grid entirely.

00:10:18: Daniel Shapiro shared how Nscale was building an eight-gigawatt islanded AI microgrid in West Virginia.

00:10:24: Eight gigawatts! An islanded microgrid, so they aren't even relying on the public utility.

00:10:29: Nope, entirely self-sufficient power, just to feed the compute.

00:10:33: And to manage that scale, Schneider Electric is actually developing digital twins of these gigawatt-scale factories.

00:10:40: Wow!

00:10:41: Deepak Sharma and Manish Gokal explained how they simulate the cooling and power dynamics in software before they even pour the concrete.

00:10:50: So we are simulating the power plants that will run the simulated factories that train the simulated robots.

00:10:56: Exactly

00:10:56: It's turtles all the way down.
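A digital twin of this kind is, at its core, a simulation loop: feed it a projected compute load, subtract cooling capacity, and watch the thermal envelope before any concrete is poured. Here is a deliberately crude Python sketch with invented coefficients and loads; nothing here reflects Schneider Electric's actual models:

```python
# Crude thermal digital twin: step through hourly compute load (MW),
# compare against fixed cooling capacity, and track facility temperature.
# Coefficients and loads are invented for illustration.

def simulate_thermals(load_mw: list[float], cooling_mw: float,
                      ambient_c: float = 25.0) -> list[float]:
    """Return the facility temperature after each hourly load step."""
    temps, temp = [], ambient_c
    for load in load_mw:
        # Net heat (load minus cooling) nudges the temperature up or down,
        # but it never drops below ambient.
        temp = max(ambient_c, temp + 0.8 * (load - cooling_mw))
        temps.append(round(temp, 1))
    return temps

# An 8-hour ramp-up against 40 MW of cooling capacity:
print(simulate_thermals([20, 30, 40, 50, 55, 50, 40, 30], cooling_mw=40.0))
# [25.0, 25.0, 25.0, 33.0, 45.0, 53.0, 53.0, 45.0]
```

The twin's value is that the 55 MW peak hour shows up as an overheating facility in software, long before it would show up in hardware.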

00:10:59: But you know, amidst all this infrastructure talk, there is a massive human element here.

00:11:04: Oh, for sure!

00:11:04: Stephanie Godos pointed out a huge shift happening in Silicon Valley.

00:11:08: Employees are starting to negotiate token budgets instead of just salaries.

00:11:12: Wait, really?

00:11:12: Token budgets? Yes.

00:11:14: Because if you have a massive compute budget, you can amplify your own output ten-x and become a one-person department.

00:11:20: So it completely changes

00:11:21: what makes an employee valuable.

00:11:23: Zhang actually articulated this beautifully.

00:11:25: What

00:11:25: did he say?

00:11:26: He said that because AI amplifies technical execution so well, the true hard skills of the AI era are now totally intangible.

00:11:33: Oh like what?

00:11:34: Like curiosity, having a moral compass, and the courage to

00:11:37: act.

00:11:38: Wow. That's powerful.

00:11:40: So I really want to pose this directly to you The listener think about your own career for a second.

00:11:46: If AI democratizes technical execution and levels the playing field of hard knowledge, does that mean your most valuable professional asset

00:11:54: moving forward is simply your judgment?

00:11:56: Your character.

00:11:57: And your humanity?

00:11:59: It absolutely does, and we need that humanity in the room, which brings up a really sobering reality check that Anita Pandy delivered from the GTC floor.

00:12:08: Right, the diversity warning.

00:12:09: Exactly. Amidst all these technological triumphs, she noted that fewer women are present in the critical rooms shaping the future,

00:12:17: which signals a massive systemic failure.

00:12:19: Yes,

00:12:20: a failure to retain the brilliant minds that we desperately need to guide this technology.

00:12:25: We need diverse, visionary leadership if we're going to navigate it safely.

00:12:28: That

00:12:29: is such a critical point to land on!

00:12:30: We've covered so much ground today, from AI factories manufacturing tokens, to agentic digital workers, to physical robots and eight-gigawatt microgrids.

00:12:39: It's a completely new world.

00:12:41: It really is.

00:12:42: And as we wrap up, I want to leave you with a final provocative thought to mull over on your own... Let's

00:12:47: hear it!

00:12:47: ...as we shift to an era where intelligence is continuously manufactured in these AI factories, what happens when the cost of generating a brilliant answer becomes cheaper than the electricity required to power the server?

00:12:59: Oh wow. In a world of infinite answers, the only true scarcity left will be the ability to ask the right questions.

00:13:06: That is a wild thought to end on.

00:13:08: If you enjoyed this episode, new episodes drop every two weeks.

00:13:12: Also check out our other editions on ICT and Tech, Digital Products & Services, Artificial Intelligence, Cloud, Sustainability and Green ICT, DefenseTech, and HealthTech.

00:13:22: Keep

00:13:23: asking those right questions.

00:13:24: Thank you so much for joining us, and don't forget to subscribe.
