
AI Storage Network

Storage That Keeps Pace with AI:
Ultra-Low Latency, Lossless RoCE, Up to 10% Higher IOPS
Ultra-Fast, Congestion-Free, Ready for AI


Highest-performance AI Ethernet fabric
Better than InfiniBand

The Ultra-Low-Latency Lossless RoCE Network accelerates storage protocols such as NVMe-oF and Ceph, as well as distributed file systems such as Lustre and HDFS, by reducing read/write latency and boosting IOPS. It delivers a 5–10% performance improvement over InfiniBand.

  • Ultra-Low-Latency Network
  • Lossless RoCE Network
  • 5–10% Higher IOPS than InfiniBand

The Value of AI Storage Network

AI Storage Network Topology

  • Improve IOPS 
    Read IOPS increase by 3–6%, and write IOPS increase by 6–10%.
  • Reduce I/O Latency 
    Read latency is reduced by 3–6%, and write latency by up to 10%.
  • Lossless Transport 
    RoCEv2 with PFC, ECN, and DCBX ensures zero packet loss, delivering high throughput for storage traffic.
  • Congestion-Free 
    INT-driven adaptive routing measures real-time link utilization and dynamically steers flows around congested paths to minimize tail latency for storage workloads.
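The path-selection idea behind utilization-based adaptive routing can be sketched in a few lines: among the candidate uplinks for a flow, choose the one with the lowest telemetry-reported utilization. This is a minimal illustration only; the path names and utilization figures are invented, and the switch's actual INT pipeline is far richer.

```python
# Illustrative sketch of utilization-based adaptive routing: pick the
# candidate path with the lowest telemetry-reported link utilization.
# Path names and utilization values below are hypothetical.

def pick_path(paths: dict[str, float]) -> str:
    """Return the candidate path with the lowest reported utilization."""
    return min(paths, key=paths.get)

# Hypothetical per-uplink utilization from in-band telemetry (0.0-1.0).
telemetry = {"spine-1": 0.82, "spine-2": 0.35, "spine-3": 0.67}

print(pick_path(telemetry))  # selects the least-loaded uplink
```

In practice the decision is made per flow (or per flowlet) in the data plane, so reordering-sensitive storage traffic stays on a consistent path between rebalancing events.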

Traffic

RoCE + DiffServ enables the convergence of diverse storage traffic on a single physical network, including model and dataset loading, RAG vector database access, logging, and backup I/O. It supports protocols such as NVMe-oF, Ceph, BeeGFS, Lustre, S3, NFS, and HDFS, ensuring lossless delivery for critical flows while maintaining high throughput across the system.
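Per-class DSCP marking is what lets these flows share one fabric while critical traffic gets lossless treatment. The sketch below shows the idea; the class names and DSCP code points are assumptions for illustration, not the vendor's actual QoS configuration.

```python
# Illustrative DiffServ marking for converged storage traffic. The DSCP
# values and class assignments are assumptions for this sketch; only the
# idea (mark per class so critical flows get priority/lossless handling)
# comes from the text.

DSCP = {
    "nvme-of": 46,  # latency-sensitive block I/O, highest priority
    "lustre":  34,  # parallel file system traffic
    "ceph":    26,  # object/dataset traffic
    "backup":  10,  # bulk logging and backup
}

def mark(flow: str) -> int:
    """Return the DSCP code point for a flow class (0 = best effort)."""
    return DSCP.get(flow, 0)

print(mark("nvme-of"))  # 46
```

On the switch side, each DSCP value would map to a queue with PFC and ECN enabled only on the classes that must be lossless.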

Protocol | RDMA Support | Latency Sensitivity | Throughput Demand | Typical Use Case
NVMe-oF  | Yes      | High   | High   | Fast block storage for AI/ML training/inference
Ceph     | Optional | Medium | High   | Distributed object storage for large datasets
BeeGFS   | Optional | Medium | High   | HPC parallel file system for performance
Lustre   | Yes      | High   | High   | HPC and AI parallel I/O workloads
S3       | No       | Low    | Medium | Cloud-native object storage (e.g., backups)
NFS      | No       | Low    | Medium | General-purpose file access
HDFS     | No       | Low    | High   | Big data processing (e.g., Hadoop)

Test Results

Using the ultra-low-latency CX-532P-N RoCE switch, two compute nodes and two NVMe-oF storage nodes were interconnected via ConnectX-5 adapters. Storage performance was evaluated with vdbench 5.04.06 and fio 2.1.10 (4KB/8KB random workloads), and results were compared against a Mellanox SB7700 InfiniBand switch.
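A representative fio invocation for this kind of 4KB random workload can be sketched as follows. The device path, queue depth, job count, and runtime are assumptions; the source states only that fio was run with 4KB/8KB random workloads.

```python
# Build an fio command line of the kind used in such benchmarks.
# All parameter values here are illustrative, not the published setup.

def fio_cmd(device: str, bs: str, rw: str,
            iodepth: int = 32, jobs: int = 4) -> str:
    """Return an fio command string for a random-I/O test on `device`."""
    return (
        f"fio --name={rw}-{bs} --filename={device} --rw={rw} --bs={bs} "
        f"--ioengine=libaio --direct=1 --iodepth={iodepth} "
        f"--numjobs={jobs} --runtime=60 --time_based --group_reporting"
    )

print(fio_cmd("/dev/nvme1n1", "4k", "randread"))
```

Varying `bs` between 4k and 8k and `rw` between randread and randwrite reproduces the workload matrix described above.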

AI Storage Network read/write latency

In single compute–storage node latency tests, the RoCE switch reduced read latency by up to 6.3% and write latency by up to 10.1% compared with the InfiniBand switch.

AI Storage Network read/write IOPS

In multi-flow concurrent IOPS tests between two compute nodes and two storage nodes, read IOPS improved by up to 6.1% and write IOPS improved by up to 10.5%.
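The percentages above follow from simple relative-difference arithmetic. The absolute IOPS and latency figures in this sketch are hypothetical, chosen only to show how a 10.5% gain and a 10.1% reduction would be computed; higher is better for IOPS, lower is better for latency.

```python
# Percentage formulas behind the reported gains.
# IOPS (higher is better):    gain = (roce - ib) / ib * 100
# Latency (lower is better):  reduction = (ib - roce) / ib * 100

def pct_gain(roce: float, ib: float) -> float:
    return (roce - ib) / ib * 100

def pct_reduction(roce: float, ib: float) -> float:
    return (ib - roce) / ib * 100

# Hypothetical 4KB random-write IOPS: 442,000 on RoCE vs 400,000 on IB.
print(round(pct_gain(442_000, 400_000), 1))   # 10.5
# Hypothetical write latency: 89.9 us on RoCE vs 100 us on IB.
print(round(pct_reduction(89.9, 100.0), 1))   # 10.1
```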