Nvidia made a series of bold announcements at Computex 2025 that underline its ambition to dominate both the AI infrastructure and robotics sectors, even as market turbulence and geopolitical headwinds challenge the chipmaker’s momentum.
The company unveiled new server systems, robotics platforms, and global partnerships designed to position Nvidia at the core of AI’s next frontier. Central to this strategy is a major push into “physical AI,” CEO Jensen Huang’s term for humanoid robotics powered by advanced simulation and machine learning.
From Silicon to Steel: Nvidia Bets Big on Robots
At the event, Nvidia introduced the latest version of its robotics AI foundation model, Isaac GR00T N1.5, and GR00T-Dreams, a new synthetic data engine that simulates motion and behavioral environments to help robots learn in virtual settings before real-world deployment. The model is designed to enable robots to adapt and interact with complex physical environments, with early adopters including Boston Dynamics, Agility Robotics, Foxconn, and XPENG Robotics.
“Physical AI is the world’s next trillion-dollar industry,” said Huang. “We’re building the infrastructure to power intelligent machines — first in factories, and eventually in homes.”
The robotics platform is part of Nvidia’s broader Isaac ecosystem, which offers simulation, training, and deployment tools for the next generation of humanoid and mobile robots.
Modular AI Infrastructure: NVLink Fusion and RTX Pro Servers
Nvidia also revealed NVLink Fusion, a semicustom infrastructure platform that allows customers to build bespoke server systems by combining Nvidia GPUs with third-party CPUs and AI chips. The flexible architecture is aimed at hyperscalers, national cloud projects, and enterprises looking to customize their AI compute infrastructure.
At the same time, Nvidia introduced the RTX Pro Blackwell servers, featuring its new Blackwell Server Edition GPUs. These machines are optimized to handle “virtually every enterprise workload,” helping design and simulation software transition from CPU-centric to GPU-accelerated computing to improve efficiency and scalability.
Global Reach: Cloud Expansion and Strategic Deals
Expanding its footprint, Nvidia debuted DGX Cloud Lepton, a lightweight, cloud-based AI platform developed in collaboration with partners including CoreWeave, Foxconn, and SoftBank. The service allows customers to build, train, and deploy their own AI models across a global GPU cloud network.
Nvidia also completed a substantial deal with Humain, a Saudi Arabian AI startup backed by the kingdom. Over the next five years, Nvidia will supply hundreds of thousands of AI chips to the company, marking a significant investment in the Middle East’s emerging AI ecosystem.
In Taiwan, the company will build a new AI supercomputer with Foxconn and the Taiwanese government, leveraging 10,000 Blackwell GPUs, and open Nvidia Constellation, a regional AI innovation center in Taipei.
Navigating a Choppy Market
The announcements come amid a period of uncertainty for Nvidia. Concerns over U.S. export controls restricting AI chip shipments to China, the potential for new semiconductor tariffs, and reports of Microsoft canceling data center leases have all weighed on investor sentiment.
The company’s stock is currently flat for the year and down more than 4% over the past six months, though it remains up 43% year-over-year — a reflection of investor faith in Nvidia’s long-term AI strategy despite short-term volatility.
Still, with a rapidly expanding product portfolio and deep integration across robotics, infrastructure, and cloud AI, Nvidia is signaling that it intends to remain at the center of the AI revolution — both virtual and physical.