NVIDIA is bringing its latest RTX PRO 6000 Blackwell Server Edition GPU to the mainstream enterprise server market through new 2U rack-mounted systems, a move aimed at accelerating the shift from traditional CPU-based infrastructure to high-performance, GPU-driven computing.
The hardware will be available from major global system partners including Cisco, Dell Technologies, HPE, Lenovo, and Supermicro, and will target a wide range of enterprise workloads spanning AI, graphics, simulation, and data analytics.
The RTX PRO 6000 Blackwell architecture is designed to deliver a significant leap in performance and efficiency. According to NVIDIA, these systems can deliver up to 45 times the performance and 18 times the energy efficiency of CPU-only 2U servers, promising a lower total cost of ownership. This positions them as a compelling option for organizations looking to modernize their data centers without expanding physical space or power requirements.
NVIDIA founder and CEO Jensen Huang described the transition as part of a broader computing evolution. “AI is reinventing computing for the first time in 60 years – what started in the cloud is now transforming the architecture of on-premises data centers,” said Huang. “With the world’s leading server providers, we’re making NVIDIA Blackwell RTX PRO Servers the standard platform for enterprise and industrial AI.”
The new 2U mainstream servers expand NVIDIA’s RTX PRO Server lineup announced earlier this year at COMPUTEX, which already includes configurations supporting two, four, or eight RTX PRO 6000 GPUs. These systems form the backbone for the NVIDIA AI Data Platform, a reference design for building AI-ready storage systems. Dell, for example, is integrating this design into its Dell AI Data Platform and PowerEdge R7725 servers, pairing two RTX PRO 6000 GPUs with NVIDIA AI Enterprise software and networking solutions.
Blackwell-based RTX PRO Servers are built for a wide spectrum of use cases. They incorporate NVIDIA’s fifth-generation Tensor Cores and second-generation Transformer Engine, with FP4 precision support that delivers up to six times faster inference performance compared to the previous-generation NVIDIA L40S GPU. The fourth-generation RTX graphics technology enables photorealistic rendering at up to four times the speed of the L40S, while NVIDIA Multi-Instance GPU technology allows for secure, multi-user deployments with up to four isolated instances per GPU.
Beyond traditional enterprise AI, the systems are optimized for “physical AI” workloads such as robotics, industrial simulation, and digital twins. Leveraging NVIDIA Omniverse libraries and Cosmos world foundation models, these servers can accelerate simulation and synthetic data generation workflows by up to four times over L40S-based systems. They also integrate with NVIDIA Metropolis blueprints for video search, summarization, and other vision-language applications to enhance safety, security, and productivity in industrial environments.
Blackwell’s Expansive Ecosystem
For AI agents and reasoning models, NVIDIA notes that RTX PRO Servers deliver price-performance gains. Models like the newly announced Llama Nemotron Super can achieve up to three times better price-performance with NVFP4 precision on a single RTX PRO 6000 compared to FP8 on NVIDIA’s H100 GPUs, enabling more accurate reasoning at lower costs. All RTX PRO Servers are certified for NVIDIA AI Enterprise, the company’s software suite for accelerated and secure AI development and deployment.
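The price-performance claim above hinges on 4-bit floating-point precision. As a rough illustration of what a 4-bit float can represent, here is a minimal round-to-nearest quantizer in Python over the standard FP4 (E2M1) value grid. This is an illustrative sketch only: the grid and the single per-tensor scale factor are simplifying assumptions, not NVIDIA's NVFP4 implementation, which uses hardware-managed per-block scaling.

```python
# Illustrative FP4 (E2M1: 1 sign, 2 exponent, 1 mantissa bit) quantization.
# NOT NVIDIA's NVFP4 scheme -- a simplified sketch with one global scale.

# The 15 representable E2M1 values: 0 plus +/- {0.5, 1, 1.5, 2, 3, 4, 6}
FP4_GRID = sorted({s * m for s in (-1, 1)
                   for m in (0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0)})

def quantize_fp4(x, scale):
    """Scale x into FP4 range, snap to the nearest grid value, rescale."""
    scaled = x / scale
    nearest = min(FP4_GRID, key=lambda g: abs(g - scaled))
    return nearest * scale

# Hypothetical weights: map the largest magnitude onto the grid's +/-6 endpoint.
weights = [0.11, -0.42, 0.87, -1.30]
scale = max(abs(w) for w in weights) / 6.0
quantized = [quantize_fp4(w, scale) for w in weights]
```

Because each value occupies 4 bits instead of FP8's 8, weights take half the memory and memory bandwidth, which is where much of the cited inference speedup and cost advantage comes from.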
The ecosystem surrounding Blackwell is extensive, drawing on NVIDIA CUDA-X libraries, over 6 million developers, and nearly 6,000 GPU-accelerated applications. This foundation allows enterprises to scale workloads across thousands of GPUs while optimizing for energy efficiency and total operational cost.
Global OEM availability will be broad. Alongside the major providers, companies such as Advantech, ASUS, GIGABYTE, MSI, QCT, Wistron, and Wiwynn will bring RTX PRO Servers to market. While 4U systems with eight GPUs are already shipping, the 2U mainstream models are expected later this year, targeting enterprises seeking compact yet high-performance solutions for AI and accelerated computing.
With this release, NVIDIA and its partners are positioning the RTX PRO 6000 Blackwell platform as a standard for next-generation enterprise infrastructure, bridging the performance gap between cloud AI capabilities and on-premises deployment. By combining compute density, efficiency, and a robust software ecosystem, the servers are intended to meet the demands of increasingly AI-driven enterprise operations – from autonomous agents to complex industrial simulations – while providing a path for organizations to evolve their data centers for the decade ahead.