Crusoe Energy, a pioneer in vertically integrated AI infrastructure, has announced a strategic partnership with AMD to bring new AI-focused cloud solutions to market. The collaboration will integrate AMD’s high-performance Instinct MI355X GPUs into Crusoe’s purpose-built AI cloud platform, creating a scalable and robust offering aimed at organizations developing advanced AI models.
This move responds to the surging demand for reliable, high-throughput infrastructure capable of handling the increasing complexity of AI workloads, including model training, fine-tuning, and inference.
By deploying the MI355X GPUs across its global network of data centers, Crusoe seeks to provide enterprises and startups with greater computational power, energy efficiency, and flexibility. The companies will also coordinate joint sales and marketing initiatives to bring this offering to a broader range of commercial clients.
Diverse Demands of Modern AI
Crusoe CEO and co-founder Chase Lochmiller emphasized how the partnership aligns with the company’s mission: “Crusoe is thrilled to collaborate with AMD in our mission to develop the most cutting-edge AI infrastructure globally. By integrating AMD’s MI355X GPUs into our AI cloud platform, we’re empowering organizations worldwide to accelerate their AI initiatives with exceptional reliability and performance.”
From AMD’s perspective, the collaboration is part of its broader push to support transformative AI workloads. “AMD is the trusted AI solutions provider of choice, enabling customers to tackle the most ambitious AI initiatives,” said Negin Oliver, corporate vice president of business development for AMD’s Data Center GPU division. “Our Instinct MI350 series GPUs, paired with AMD’s ROCm software, provide the performance, adaptability, and security required to meet the diverse demands of modern AI.”
Crusoe’s cloud service built on the AMD Instinct MI355X is expected to launch in the fall, giving organizations that need powerful, AI-optimized compute an additional infrastructure option.