HPE Unveils Next-Gen Cray Platform to Power Scalable AI & HPC Workloads

Hewlett Packard Enterprise (HPE) is pushing the boundaries of high-performance computing once again with its newly enhanced HPE Cray platform, now equipped with next-generation DAOS (Distributed Asynchronous Object Storage) configurations. Designed for the accelerating demands of AI and HPC workloads, the updated platform promises unprecedented scalability, faster data movement, and flexible storage options built for enterprises training massive AI models. 

As global organizations move deeper into generative AI, simulation-driven research, and large-scale analytics, traditional storage is becoming a bottleneck. HPE’s new Cray architecture directly addresses this challenge by delivering NVMe-first performance, high memory bandwidth, and a modular design that scales smoothly from small clusters to exascale-ready systems. 

Flexible DAOS Configurations for Every AI Use Case 

One of the biggest highlights of the announcement is HPE’s expanded support for configurable DAOS storage, now offering multiple NVMe and DRAM combinations tailored for different performance tiers. This means organizations can build storage systems optimized for: 

  • High-throughput AI training 
  • Low-latency inference pipelines 
  • Large scientific simulations requiring petabyte-scale datasets 
  • Mixed workloads where compute and storage need synchronized scaling 
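To make the tiering idea concrete, here is a minimal sketch of how a workload class might map to an NVMe/DRAM profile. The tier names, capacities, and ratios below are hypothetical placeholders for illustration, not HPE's actual DAOS configurations or SKUs.

```python
# Illustrative only: profile names and numbers are hypothetical,
# not actual HPE Cray / DAOS configurations.

def suggest_tier(workload: str) -> dict:
    """Map a workload class to an illustrative NVMe/DRAM storage profile."""
    profiles = {
        # High-throughput training favors large NVMe pools for streaming reads.
        "training":   {"nvme_tb": 512,  "dram_gb_per_node": 512,  "goal": "throughput"},
        # Inference favors DRAM-heavy nodes to keep latency low.
        "inference":  {"nvme_tb": 64,   "dram_gb_per_node": 1024, "goal": "latency"},
        # Simulations need petabyte-scale capacity more than raw speed.
        "simulation": {"nvme_tb": 4096, "dram_gb_per_node": 256,  "goal": "capacity"},
        # Mixed workloads balance the two tiers.
        "mixed":      {"nvme_tb": 256,  "dram_gb_per_node": 512,  "goal": "balanced"},
    }
    return profiles[workload]

print(suggest_tier("inference")["goal"])  # latency
```

The point of the sketch is the shape of the decision, not the numbers: each performance tier trades NVMe capacity against per-node DRAM depending on whether throughput, latency, or capacity dominates.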

DAOS, originally developed by Intel and widely adopted by the HPC community, is known for delivering extreme performance and parallel access speeds. By integrating DAOS deeply into the Cray platform, HPE enables researchers and enterprises to bypass the limitations of legacy file systems, achieving significantly faster I/O, resilient data integrity, and near-linear scaling. 
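The property behind DAOS's parallel access speeds can be sketched with a toy model: objects are hash-distributed across independent storage targets, so concurrent writers never contend on a single lock or metadata server the way they can on a legacy POSIX file system. The class below is a deliberately simplified stand-in, not the DAOS API; the routing scheme and shard structure are illustrative assumptions.

```python
# Toy model of object-store-style parallel access (NOT the DAOS API).
# Each shard stands in for an independent storage target; because objects
# are routed to targets by hash, writers on different targets do not
# contend on any shared lock -- the property behind near-linear scaling.

from concurrent.futures import ThreadPoolExecutor


class ShardedObjectStore:
    def __init__(self, num_targets: int):
        # One dict per target stands in for an independent NVMe target.
        self.shards = [{} for _ in range(num_targets)]

    def _shard_for(self, key: str) -> dict:
        # Hash-route each object to a target (routing scheme is illustrative).
        return self.shards[hash(key) % len(self.shards)]

    def put(self, key: str, value: bytes) -> None:
        self._shard_for(key)[key] = value

    def get(self, key: str) -> bytes:
        return self._shard_for(key)[key]


store = ShardedObjectStore(num_targets=8)
keys = [f"tensor-{i}" for i in range(1000)]

# Writes land on different shards and proceed without a global lock.
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(lambda k: store.put(k, b"data"), keys))

assert all(store.get(k) == b"data" for k in keys)
```

A real DAOS deployment adds erasure coding, persistent memory metadata, and RDMA transport on top of this basic idea, but the contrast with a single-namespace, lock-mediated file system is the core of the "bypass legacy file systems" claim.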

Powering the Era of AI-Driven Discovery 

What makes this update especially important is the rising need for high-performance data storage in modern AI architectures. Training large language models, running climate simulations, or automating engineering workflows all require systems that can ingest, read, and process data at enormous speeds. 

The new HPE Cray platform integrates: 

  • High-density NVMe pools for rapid parallel read/write 
  • Memory-rich nodes using high-speed DRAM 
  • Optional persistent memory layers for resilience and real-time recovery 
  • Software-defined automation that balances workloads intelligently 

Together, these enhancements help eliminate bottlenecks, ensuring that GPUs and other specialized AI accelerators stay continuously fed with data. 
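The "keep accelerators fed" goal comes down to overlapping storage I/O with compute: while the accelerator works on one batch, the storage layer is already loading the next. A minimal single-threaded-prefetch sketch of that pattern, with made-up delays standing in for I/O and GPU time:

```python
# Sketch of I/O / compute overlap (double buffering). Delays and batch
# contents are placeholders, not measurements of any real system.

import time
from concurrent.futures import ThreadPoolExecutor


def load_batch(i: int) -> list:
    time.sleep(0.01)           # stand-in for storage I/O latency
    return [i] * 4             # stand-in for a training batch


def compute(batch: list) -> int:
    time.sleep(0.01)           # stand-in for accelerator work
    return sum(batch)


results = []
with ThreadPoolExecutor(max_workers=1) as io:
    future = io.submit(load_batch, 0)            # prefetch the first batch
    for i in range(1, 5):
        batch = future.result()                  # blocks only if I/O is slower
        future = io.submit(load_batch, i)        # overlap next load with compute
        results.append(compute(batch))
    results.append(compute(future.result()))     # drain the final batch

print(results)  # [0, 4, 8, 12, 16]
```

When storage is slower than compute, `future.result()` becomes the stall point and the accelerator idles; a faster storage tier shrinks exactly that wait, which is the bottleneck the NVMe pools and memory-rich nodes above are meant to remove.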

Built for Research Labs, Enterprises & AI Factories 

HPE has clearly positioned the Cray ecosystem as the backbone for next-gen AI factories — large-scale data and model pipelines that companies are building to train custom generative AI. 

The platform is designed to meet the needs of: 

  • National labs running mission-critical scientific simulations 
  • Enterprises building proprietary LLMs 
  • Universities conducting large-scale distributed training 
  • Aerospace, energy, and automotive firms relying on digital twins and simulation-based engineering 

With high-density compute blades, energy-optimized cooling technologies, and intelligent orchestration, HPE ensures the platform remains efficient even at massive scale. 

Future-Ready, Modular & Exascale Capable 

HPE’s evolution of the Cray line makes one thing clear: the future of AI and HPC requires scalable, modular, and ultra-fast storage architecture. By embracing DAOS and next-generation NVMe standards, HPE creates a storage layer capable of keeping up with the most demanding AI workloads for years to come. 

For enterprises preparing to scale their AI infrastructure, the new HPE Cray platform offers a future-proof foundation — delivering the flexibility of cloud architectures with the performance of on-prem supercomputing. 
