Best of LinkedIn: CES 2026
Show notes
We curate the most relevant posts about Digital Transformation & Tech on LinkedIn and regularly share key takeaways.
This edition provides a comprehensive overview of CES 2026, highlighting a pivotal shift as artificial intelligence moves from digital software into physical hardware and industrial infrastructure. NVIDIA leads this transition with the launch of its Vera Rubin platform and Alpamayo, an open-source model designed to give autonomous vehicles human-like reasoning capabilities. Major players like Samsung, Lenovo, and Intel also unveiled AI-native devices, including tri-fold smartphones and next-generation processors, while Boston Dynamics introduced a production-ready Atlas robot for industrial use. The automotive sector features prominently, with Sony Honda Mobility and Mercedes-Benz showcasing vehicles that function as sophisticated edge-compute platforms. Beyond individual gadgets, the event emphasizes innovation convergence, where AI, robotics, and connectivity integrate into healthcare, smart homes, and global supply chains. Overall, the reports suggest that 2026 marks the era of Agentic AI, focusing on systems that proactively act and solve problems in the real world.
This podcast was created via Google NotebookLM.
Show transcript
00:00:00: This episode is provided by Thomas Allgeier and Frennis, based on the most relevant LinkedIn posts about CES.
00:00:06: Frennis supports ICT enterprises with market and competitive intelligence, decoding emerging technologies, customer insights, regulatory shifts, and competitor strategies.
00:00:16: So product teams and strategy leaders don't just react, but shape the future.
00:00:20: Welcome to the deep dive.
00:00:22: You know, every year CES shows us a glimpse of the future, but this year's CES...
00:00:28: Well, it felt different.
00:00:29: It really did.
00:00:30: This wasn't just a collection of cool new gadgets.
00:00:32: It felt like the moment the industry fundamentally pivoted.
00:00:35: In what way?
00:00:36: Well, for years, we've talked about AI as this abstract software layer, right?
00:00:39: Big models, big data, mostly happening up in the cloud.
00:00:42: Right.
00:00:43: This year, AI found its body.
00:00:45: We are officially witnessing the shift from AI as pure hype to AI as a practical physical system.
00:00:51: A system that can reason, plan, and then actually act in the real world.
00:00:54: Exactly.
00:00:55: It's what Jensen Huang called the ChatGPT moment for physical AI.
00:00:59: And our sources confirm this wasn't just confined to one industry, it was a seismic shift.
00:01:03: That's really the core of this deep dive.
00:01:05: We saw this convergence across three critical vectors.
00:01:09: Okay,
00:01:09: what are they?
00:01:10: First,
00:01:10: the new compute platforms that actually make agentic AI possible.
00:01:15: Second, the massive, and I mean rapid industrialization of robotics.
00:01:20: And third,
00:01:21: the total convergence of automotive technology around this integrated chip to car intelligence stack.
00:01:27: So today we're going to unpack the most important details for you from each of those fronts.
00:01:31: Okay, let's jump right into the architecture, then, the engine that powers all of this. Theme
00:01:36: one: AI compute and semiconductors.
00:01:38: Right.
00:01:39: The old wisdom was just bigger models win.
00:01:41: But now, as these models move into the physical world, success is?
00:01:46: Well, it's completely dependent on infrastructure efficiency and scale.
00:01:49: That's absolutely critical.
00:01:50: I mean, if you follow the analysis of experts like Anuj Bharathi, he points out that continuous AI growth just leads to exploding costs.
00:01:58: Unless you have predictable, efficient, industrial scale infrastructure.
00:02:02: Exactly.
00:02:03: And no one is leaning into that harder than Nvidia.
00:02:06: Their announcement of the Vera Rubin platform was, I think, easily the biggest compute news.
00:02:11: Oh, for sure.
00:02:12: And it's not just a single chip succeeding Blackwell.
00:02:14: It's an entire integrated architecture.
00:02:16: It's
00:02:16: six chips designed to function as one cohesive AI supercomputer.
00:02:21: A huge vertically integrated platform cycle.
00:02:24: And it's designed to solve exactly that cost problem.
00:02:27: Analysts,
00:02:28: Pillimari Shrikant among them, highlighted the key metrics here.
00:02:31: And
00:02:32: what are they?
00:02:32: NVIDIA is promising ten times lower AI costs and five times the inference performance over the last generation.
00:02:39: Ten times lower costs.
00:02:40: That's the stat that should make every CTO
00:02:43: just sit up and pay attention.
00:02:44: Yeah.
00:02:45: So how does this six chip unity actually deliver that kind of exponential jump?
00:02:50: It's all about modular acceleration and memory throughput.
00:02:54: The platform has the new ARM-based Vera CPU and the next-gen Rubin GPU.
00:02:59: And that GPU introduces HBM4 memory.
00:03:01: Right.
00:03:02: HBM4 memory, which has a staggering twenty-two terabytes per second of bandwidth.
00:03:06: Okay, so break that down.
00:03:07: Twenty-two terabytes per second.
00:03:08: Think of it like a superhighway.
00:03:10: You're not just speeding up the cars.
00:03:11: You're dramatically expanding the number of lanes so data can flow between those six chips.
00:03:16: instantly.
00:03:16: So no bottlenecks.
00:03:17: Zero latency communication.
00:03:19: That's what lets these industrial AI systems process and infer massive data sets in real time.
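That bandwidth claim is easy to sanity-check with back-of-envelope arithmetic: inference is often memory-bound, so the time to stream a model's weights once from HBM sets a hard floor on latency. A quick sketch (the model size and precision below are our own illustrative assumptions, not figures from the episode):

```python
# Back-of-envelope: what twenty-two terabytes per second of HBM bandwidth
# implies for memory-bound inference. Model size is a hypothetical example.

HBM_BANDWIDTH_TBPS = 22.0  # terabytes per second (the claimed HBM4 figure)

def min_time_to_stream_weights(params_billion: float, bytes_per_param: int = 2) -> float:
    """Lower bound (seconds) to read every weight once from HBM,
    i.e. the memory-bandwidth floor for a single inference pass."""
    total_bytes = params_billion * 1e9 * bytes_per_param
    return total_bytes / (HBM_BANDWIDTH_TBPS * 1e12)

# A hypothetical 70-billion-parameter model stored in FP16 (2 bytes/param):
t = min_time_to_stream_weights(70, bytes_per_param=2)
print(f"{t * 1000:.2f} ms per full weight pass")  # ~6.36 ms
```

The point of the sketch: at this bandwidth, even a very large model's full weight set can be streamed in single-digit milliseconds, which is what makes real-time industrial inference plausible.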
00:03:25: And that vertical integration creates a really powerful moat for NVIDIA.
00:03:30: But the battle isn't just in the data center anymore, is it?
00:03:32: Not at all.
00:03:33: It's heating up fiercely at the edge, especially in the PC space.
00:03:37: And
00:03:37: that's where Intel made a major statement.
00:03:39: They did.
00:03:40: They debuted their core Ultra Series III code named Panther Lake.
00:03:44: This is huge for them because it's the first big release using their brand-new Intel 18A manufacturing process.
00:03:50: We hear the term 18A process thrown around a lot.
00:03:53: For those of us who aren't semiconductor engineers, what does that actually mean for performance?
00:03:58: It's the ability to shrink the transistors.
00:04:00: You're packing more power into a smaller cooler package.
00:04:03: More
00:04:03: efficiently.
00:04:04: Much more.
00:04:05: Mark Calderhead and others were quick to point out that this process is combined with a fifty-TOPS NPU, fifty
00:04:11: trillion operations per second, dedicated just
00:04:13: to AI tasks.
00:04:15: This is Intel's play for immediate leadership in the AI PC wars: better battery life, faster local AI, and big GPU gains.
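A TOPS number only becomes meaningful once you divide it by a workload. A minimal sketch of that arithmetic (the per-inference operation count is a hypothetical example, not an Intel spec):

```python
# Rough ceiling on local AI throughput from a TOPS rating.
# Real workloads hit a fraction of peak, so treat this as an upper bound.

NPU_TOPS = 50  # fifty trillion operations per second

def max_inferences_per_second(gops_per_inference: float) -> float:
    """Theoretical inference ceiling assuming 100% NPU utilization."""
    return (NPU_TOPS * 1e12) / (gops_per_inference * 1e9)

# A hypothetical on-device vision model needing ~10 GOPs per frame:
print(max_inferences_per_second(10))  # 5000.0 frames/s ceiling
```

Even at a small fraction of that ceiling, local tasks like summarization or image understanding stop needing a round trip to the cloud, which is the whole premise of the AI PC pitch.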
00:04:22: So Intel is pushing performance right out of the gate.
00:04:25: But Qualcomm is executing a slightly different strategy, right?
00:04:28: Yeah, and Swaptil Chandra Mohantowari noted this.
00:04:31: With their Snapdragon X series chips, they are trying to redefine the mobile-first computing experience.
00:04:36: How so?
00:04:37: By moving sophisticated AI workloads.
00:04:40: You know, things like summarizing long documents, creating complex images from the cloud directly onto the device.
00:04:46: Which
00:04:46: gives you instant performance and crucially better privacy.
00:04:49: Your data never leaves your laptop.
00:04:51: Exactly.
00:04:52: It's a shift to what you might call always-on intelligence right on your device.
00:04:56: And once you have all that sophisticated intelligence decentralized on the chip.
00:05:00: The next logical step is to give that intelligence a body to control.
00:05:04: Which brings us neatly to theme two.
00:05:06: The physical AI revolution.
00:05:08: Right.
00:05:09: Where systems move from passive models to, well, acting agents.
00:05:13: And here we move beyond just computation into the realm of vision-language action models, VLAs.
00:05:18: Right.
00:05:19: Jensen Huang's ChatGPT moment for physical AI.
00:05:22: This isn't just about robots looking around.
00:05:24: No, it's about comprehension.
00:05:27: Sivitasia Chigichurla and C.A.
00:05:29: Shailesh Wadawanya explain that VLA is the missing link.
00:05:33: It's not just seeing an object, vision.
00:05:36: And being able to describe it, language.
00:05:38: It's linking those perceptions to complex motor commands, the action, which is informed by real-world physics.
00:05:45: It is true physical reasoning.
00:05:47: And the clearest example of this maturing moving from theory to industrial readiness was Boston Dynamics Atlas.
00:05:54: I think everyone's seen those viral videos of it doing backflips and parkour.
00:05:57: Yeah,
00:05:58: but Rupert Brenny pointed out that era is officially over.
00:06:01: So it's not a research stunt anymore.
00:06:03: No, it's an industrial solution.
00:06:05: The new Atlas is fully electric and the specs are all designed for high ROI environments.
00:06:10: It can lift fifty kilograms.
00:06:11: And it's engineered specifically for deployment where that capital cost has to be recouped quickly.
00:06:16: They're focused on cognitive uptime.
00:06:18: The most impressive detail for me was the automatic battery exchange feature.
00:06:22: Meaning it can operate nearly twenty-four seven in a factory.
00:06:25: That tells you they are solving real industrial problems, not just showing off dexterity.
00:06:30: And, just like in the compute space, NVIDIA wants to own the infrastructure for this entire revolution.
00:06:35: Of course.
00:06:36: Akshay A described their strategy as building the Android of generalist robotics.
00:06:42: The Android, okay, so an open ecosystem for anyone building a physical AI system.
00:06:46: Exactly.
00:06:47: But if they're building the Android, which is generally open, where's the lock in for them?
00:06:51: Are they just giving it all away?
00:06:53: The complexity is the lock-in.
00:06:54: They aren't selling the robots.
00:06:55: They're selling the training ground and the brains.
00:06:57: Ah, so the ecosystem.
00:06:58: That's it.
00:06:59: It includes Cosmos, which is a world foundation model for physics-accurate simulation, and the GR00T N1.6 open
00:07:07: model.
00:07:08: So they provide the perfect, safe, sandbox cosmos to train your robot with synthetic data before it ever touches a real factory floor.
00:07:17: And we're seeing huge adoption already.
00:07:19: Uber Eats, Caterpillar, even Boston Dynamics itself is using these tools.
00:07:23: That's the key to scale.
00:07:24: But pushing AI this deep into machinery means pushing intelligence deep into the microchip.
00:07:30: And Akash Dolis highlighted NXP Semiconductors' eIQ agentic AI framework here.
00:07:36: What's unique about that framework?
00:07:38: It moves multi-agent orchestration, where different software agents coordinate on a task directly onto the silicon.
00:07:45: Onto smaller MCUs and MPUs,
00:07:48: the benefit being
00:07:49: zero latency and privacy by design.
00:07:52: In industrial monitoring or a health assistant, you can't afford a few milliseconds of lag waiting for the cloud.
00:07:58: Decisions have to be immediate and local.
00:08:00: From the factory floor, let's transition to the road, because theme three is all about automotive and mobility converging around these chip-to-car stacks.
00:08:09: And the transition here is profound.
00:08:11: We're moving from vehicles defined by software to vehicles defined by intelligence.
00:08:15: Vivek Jindal captured this really concisely, I thought.
00:08:17: He said vehicles are transitioning from software-defined SDVs
00:08:21: to AI-defined vehicles.
00:08:23: AVs.
00:08:24: The differentiation isn't the engine anymore.
00:08:26: It's the sophistication and safety of the onboard AI.
00:08:30: And the biggest announcement here was about solving the toughest problem in autonomy.
00:08:34: The long tail scenarios.
00:08:35: Right.
00:08:35: Reasoning-based autonomy.
00:08:37: Traditional systems are great at pattern matching.
00:08:40: If they've seen a scenario a thousand times, they handle it perfectly.
00:08:43: But the long tail is that rare, dangerous, or just plain weird event.
00:08:48: The piano falling off a truck.
00:08:50: A deer stepping out at dusk.
00:08:52: an unusual construction zone.
00:08:54: And that's where pattern matching fails.
00:08:55: It does.
00:08:56: So, NVIDIA unveiled Alpamayo, their open-source VLA model family for vehicles, and Garav Dugal and K. P. Menoj emphasized how this lets the car exhibit reasoning.
00:09:08: Chain of thought decision-making.
00:09:09: The car can essentially explain why it's braking or swerving, which makes the system safer and crucially more auditable.
00:09:15: That interpretability is key for building trust with regulators and consumers.
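That "explain why it's braking" idea can be made concrete with a toy sketch: the planner returns not just an action but the perception facts and reasoning steps behind it, so every decision is auditable after the fact. This is an illustration of the concept only; the scene fields, actions, and structure are invented, not Alpamayo's actual interface.

```python
from dataclasses import dataclass, field

# Toy sketch of auditable, chain-of-thought-style driving decisions.
# Everything here (scene keys, action names) is a hypothetical example.

@dataclass
class Decision:
    action: str
    reasoning: list[str] = field(default_factory=list)  # the auditable trace

def plan(scene: dict) -> Decision:
    d = Decision(action="maintain_speed")
    if scene.get("object_on_road"):
        d.reasoning.append(f"Detected {scene['object_on_road']} in lane.")
        if scene.get("adjacent_lane_clear"):
            d.reasoning.append("Adjacent lane clear; swerving is safer than hard braking.")
            d.action = "swerve_left"
        else:
            d.reasoning.append("No clear lane; braking is the only safe option.")
            d.action = "emergency_brake"
    return d

decision = plan({"object_on_road": "fallen cargo", "adjacent_lane_clear": False})
print(decision.action)     # emergency_brake
print(decision.reasoning)  # the step-by-step justification a regulator could review
```

The design point is that the reasoning list, not the action alone, is what gets logged, which is exactly what makes the system auditable rather than a black box.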
00:09:19: And this isn't just a future thing.
00:09:21: No, it's deploying immediately.
00:09:23: Mercedes-Benz CEO Ola Källenius announced the CLA model is launching in Q1 with MB Drive Assist Pro,
00:09:30: which is Level 2 point-to-point assistance powered by this exact technology.
00:09:34: And that infrastructure is clearly winning over other major players.
00:09:37: Lucid Motors, as highlighted by a Linux device driver developer, is rolling out Level 2++ on its Gravity SUV via over-the-air updates.
00:09:46: Using the NVIDIA platform.
00:09:47: An acceleration of capability pushed directly to the consumer.
00:09:51: And while NVIDIA is dominant, this convergence trend is everywhere.
00:09:55: We saw ecosystem players like ZF and Qualcomm collaborating on ADS systems.
00:10:00: Yeah,
00:10:00: pairing ZF's ProAI with the Snapdragon Ride platform.
00:10:04: And then you have Sony Honda Mobility's Afeela 1 EV.
00:10:07: Hershita Bell's analysis confirmed its positioning as software-first.
00:10:11: Late twenty-twenty-six delivery, at around eighty-nine thousand nine hundred dollars.
00:10:15: Its whole selling point is the Afeela personal agent, blending entertainment and autonomy.
00:10:20: This race to unify the tech stack is even driving corporate strategy.
00:10:24: Lucas Beach detailed how Hyundai is reacting.
00:10:27: It's a fascinating move.
00:10:28: They're unifying R&D under Manfred Hare and prioritizing physical AI and robotics in their factories.
00:10:34: So it's a hedge against the EV market stagnating.
00:10:36: I think so.
00:10:37: If the consumer EV market slows down, you need flexibility.
00:10:40: By investing in physical AI in the manufacturing line, you beat the competition on cost and efficiency, regardless of short-term sales.
00:10:47: So they're using AI not just to drive the car, but to build it better.
00:10:51: Exactly.
00:10:52: And that strategic urgency is palpable.
00:10:55: Which I think is a good time to shift gears one last time and look at theme four:
00:10:58: Agentic AI and consumer tech,
00:11:01: health and play.
00:11:03: Where the AI is becoming ambient and in some cases truly invisible.
00:11:07: Yes.
00:11:08: Observers pointed out that the goal now is for AI to just fade into the background.
00:11:13: You shouldn't have to consciously turn on.
00:11:15: the AI.
00:11:16: It should just be operating, anticipating your needs.
00:11:19: And in health tech, that shift from reaction to anticipation is a profound change.
00:11:24: Absolutely.
00:11:25: Mikwin Schanwag detailed Abbott's Libre Assist.
00:11:28: This is an AI layer for their continuous glucose monitoring.
00:11:31: Typically, a CGM helps diabetics react after a glucose spike.
00:11:34: Right,
00:11:35: but this AI uses historical data to predict the spike before a meal.
00:11:39: It
00:11:40: shifts the whole care paradigm from management to prevention.
00:11:43: It's not just diagnosing.
00:11:44: It's projecting future states.
00:11:46: We're also seeing new sensor hardware feeding these AIs.
00:11:49: JNU mentioned SkinSight, which they called Electronic Skin.
00:11:53: It uses wearable sensor patches and AI to track real-time aging signals.
00:11:57: It's moving beauty and health tech toward actual longevity science.
00:12:00: Beyond health, we're seeing new hardware form factors designed just to leverage these agents.
00:12:06: Prabhu Ram highlighted Samsung's Galaxy Z tri-fold.
00:12:09: The dual hinge design encourages a tablet-first, multi-tasking experience with their DEX desktop environment.
00:12:16: It's a recognition that if AI agents are managing complex workflows, we need the screen real estate to interact with them.
00:12:22: But there's also a growing anti-screen sentiment.
00:12:25: Drificula Thea discussed the rumors of the Jony Ive and OpenAI Gumdrop,
00:12:29: an audio-first AI pin
00:12:31: designed to cure screen overload.
00:12:33: It blends handwriting and voice to bypass the screen for quick tasks.
00:12:37: The future isn't always bigger screens.
00:12:39: Speaking of minimizing complexity, my personal favorite consumer announcement, the one that perfectly illustrated this shift, was the LEGO Smart Bricks.
00:12:46: It was brilliant.
00:12:47: They're embedding a tiny, four point one millimeter ASIC chip in the bricks.
00:12:52: And using what they call BrickNet.
00:12:54: Maya Meller emphasized this is a clever repurposing of quiet supply chain tech.
00:12:59: It sounds fancy, but BrickNet is basically using RFID principles.
00:13:03: What logistics companies use to track pallets and warehouses.
00:13:06: And applying it to a toy.
00:13:07: Precisely.
00:13:08: It lets the system instantly recognize physical actions.
00:13:12: You tilt a block.
00:13:13: You hear a Star Wars sound, you connect two blocks, a digital effect happens.
00:13:17: As Mayor Bellani pointed out, it creates this seamless, real-time feedback loop between physical play and the digital experience.
00:13:24: Without any complex pairing.
00:13:26: It's invisible infrastructure, creating visible fun.
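The pairing-free loop described here is essentially a lookup: a tag read (brick identity plus a detected gesture) maps straight to a digital effect. A minimal sketch of that idea, with invented brick IDs, gestures, and effect names (nothing below comes from LEGO's actual system):

```python
# Sketch of the physical-to-digital feedback loop described for the Smart
# Bricks: RFID-style tag events map directly to effects, no pairing step.
# All identifiers here are hypothetical examples.

EFFECTS = {
    ("starwars_brick", "tilt"): "play_lightsaber_sound",
    ("starwars_brick", "connect"): "trigger_hologram_animation",
}

def on_tag_event(brick_id: str, gesture: str) -> str:
    """Resolve a physical event to a digital effect; unknown combinations
    simply do nothing, like an untagged brick."""
    return EFFECTS.get((brick_id, gesture), "no_effect")

print(on_tag_event("starwars_brick", "tilt"))  # play_lightsaber_sound
```

The design choice worth noticing: because the mapping is keyed on passively read tag IDs, there is no discovery, pairing, or configuration step, which is exactly why the infrastructure stays invisible to the child playing.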
00:13:29: So if we pull back and summarize the macro trend, CES 2026 confirms that AI is officially infrastructure now.
00:13:37: It's no longer a feature you toggle, it's baked into the engine, the chassis, the factory, the medical device.
00:13:42: And that convergence, especially as agentic AI moves into the physical world, introduces a massive operational tension.
00:13:49: Yes, and one commentator articulated this perfectly.
00:13:53: It's the friction between governance and speed.
00:13:55: When an AI in a car or a robot in a warehouse executes a complex, life-altering decision.
00:14:01: Liability stops being a philosophical debate.
00:14:03: It becomes an operational and legal nightmare.
00:14:06: You have to govern the agent, but you can't slow down innovation.
00:14:09: The
00:14:09: legal safety nets, the auditing, the insurance models, they're all lagging far behind the technology.
00:14:14: And
00:14:14: that legal and safety framework is the next critical hurdle the industry has to clear.
00:14:19: So if NVIDIA is positioning itself as the Android of generalist robotics, creating the platform for action, which company, or maybe which consortium, is best placed to develop the equivalent of the safety and accountability layer for this new physical AI operating system?
00:14:35: That's the question we'll leave you with.
00:14:37: If you enjoyed this deep dive, new episodes drop every two weeks.
00:14:40: Also check out our other editions on cloud, defense tech, digital products and services, artificial intelligence, ICT and tech insights, sustainability and green ICT, and health
00:14:50: tech.
00:14:50: Thank you for joining us as we unpack the physical pivot signaled by CES 2026.
00:14:56: Be sure to subscribe so you don't miss our next deep dive.