SuperX has announced the release of the SuperX XN9160-B200 AI Server, its newest flagship product. Powered by NVIDIA’s Blackwell-architecture B200 GPU, the next-generation server is designed to meet the growing demand for scalable, high-performance compute in AI training, machine learning (ML), and high-performance computing (HPC) workloads.
The XN9160-B200 AI Server is built to accelerate large-scale distributed AI training and inference workloads. It is optimized for GPU-intensive tasks such as foundation-model training and inference using reinforcement learning (RL) and distillation techniques, multimodal model training and inference, and HPC applications like climate modeling, drug discovery, seismic analysis, and insurance risk modeling. It delivers supercomputer-class performance, packing enterprise-grade capability into a compact footprint.
The XN9160-B200 marks a major milestone in SuperX’s AI infrastructure strategy, delivering powerful GPU instances and computational capability to accelerate AI research worldwide.
XN9160-B200 AI System
The new XN9160-B200 packs extraordinary AI computing capability into a 10U chassis, combining eight NVIDIA Blackwell B200 GPUs, fifth-generation NVLink technology, 1,440 GB of high-bandwidth HBM3E memory, and 6th Gen Intel Xeon CPUs.
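For a sense of how that GPU complement appears to software, here is a minimal sketch that enumerates the accelerators, assuming a CUDA-enabled PyTorch install on the server (the printed names and sizes depend on the actual driver and firmware, not vendor-verified output):

```python
import torch

# Enumerate the accelerators visible to PyTorch; on a fully populated
# XN9160-B200 this should report eight Blackwell B200 devices.
assert torch.cuda.is_available(), "CUDA-capable GPUs required"

total_hbm = 0.0
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    mem_gb = props.total_memory / 1e9
    total_hbm += mem_gb
    print(f"GPU {i}: {props.name}, {mem_gb:.0f} GB")

# With eight B200s, the pooled HBM3E capacity approaches 1,440 GB.
print(f"Total GPU memory: {total_hbm:.0f} GB")
```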
At its core, the XN9160-B200 pairs eight NVIDIA Blackwell B200 GPUs with fifth-generation NVLink, delivering ultra-high inter-GPU bandwidth of up to 1.8 TB/s. This speeds up large-scale AI model training by as much as 3x and dramatically shortens the R&D cycle for tasks such as pre-training and fine-tuning trillion-parameter models. For inference, its 1,440 GB of high-performance HBM3E memory running at FP8 precision delivers a throughput of 58 tokens per second per card on the GPT-MoE 1.8T model, a 15x performance boost over the previous-generation H100 platform’s 3.5 tokens per second.
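The NVLink fabric matters most for multi-GPU collectives such as the gradient all-reduce in data-parallel training. The following is a minimal, hypothetical sketch of eight-way data-parallel training with PyTorch DistributedDataParallel over the NCCL backend, which routes collectives across NVLink where the hardware provides it; the model and data here are placeholders, not SuperX-supplied software:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK/LOCAL_RANK/WORLD_SIZE in the environment;
    # NCCL uses NVLink for intra-node all-reduce where available.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; a real job would build a transformer here.
    model = torch.nn.Linear(4096, 4096).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 4096, device=local_rank)
        loss = model(x).square().mean()  # dummy loss
        opt.zero_grad()
        loss.backward()                  # gradients all-reduced over NVLink
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=8 train.py
```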
The system is anchored by 6th Gen Intel® Xeon® CPUs, DDR5 memory running at 5,600–8,000 MT/s, and all-flash NVMe storage. Together, these components accelerate data pre-processing, ensure smooth operation under heavy virtualization loads, and improve the efficiency of complex parallel computing, allowing AI model training and inference jobs to run steadily and efficiently.
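As an illustration of how that storage and memory bandwidth gets used in practice, here is a minimal, hypothetical input-pipeline sketch: PyTorch DataLoader workers stream samples (here, synthetic stand-ins for data read from NVMe) into page-locked host buffers so host-to-GPU copies can overlap with compute:

```python
import torch
from torch.utils.data import DataLoader, Dataset

class RandomTensorDataset(Dataset):
    """Stand-in for a dataset that would stream from NVMe storage."""
    def __len__(self):
        return 100_000
    def __getitem__(self, idx):
        return torch.randn(3, 224, 224), idx % 1000

loader = DataLoader(
    RandomTensorDataset(),
    batch_size=256,
    num_workers=8,      # parallel workers keep NVMe reads pipelined
    pin_memory=True,    # page-locked DDR5 buffers for fast DMA to HBM
    prefetch_factor=4,  # batches staged ahead of the GPU
)

for images, labels in loader:
    # non_blocking copies overlap with GPU compute on a CUDA stream
    images = images.cuda(non_blocking=True)
    break
```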
Powering AI Without Interruption
The XN9160-B200 uses a multi-path power redundancy design for high operational reliability. Its 1+1 redundant 12V system power supplies and 4+4 redundant 54V GPU power supplies largely eliminate single points of failure, keeping the system running steadily through unforeseen events and delivering uninterrupted power to critical AI workloads.
A built-in AST2600 intelligent management controller on the SuperX XN9160-B200 enables straightforward remote monitoring and control. To guarantee dependable delivery, each server undergoes more than 48 hours of full-load stress testing, cold- and hot-boot validation, and high/low-temperature aging screening, among other manufacturing quality-control procedures. SuperX, headquartered in Singapore, also backs the server with a full-lifecycle service guarantee, a three-year warranty, and expert technical support to help businesses ride the AI wave.
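AST2600-based management controllers commonly expose a standard Redfish REST API alongside IPMI. A minimal, hypothetical health check might look like the following; the BMC address, credentials, and exact resource path are placeholders, since these depend on the firmware SuperX ships:

```python
import requests

BMC = "https://bmc.example.internal"  # placeholder BMC address
AUTH = ("admin", "password")          # placeholder credentials

# Query the standard Redfish system resource for power state and health.
# verify=False only because factory BMCs often ship self-signed certs.
resp = requests.get(
    f"{BMC}/redfish/v1/Systems/1",    # path varies by firmware vendor
    auth=AUTH,
    verify=False,
    timeout=10,
)
resp.raise_for_status()
system = resp.json()

print("Power state:", system.get("PowerState"))
print("Health:", system.get("Status", {}).get("Health"))
```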