
Low Latency Data Center Switch

Asterfusion offers 2 Tbps to 51.2 Tbps data center leaf and spine switches based on Teralynx and Prestera (Falcon) chips. They come preloaded with an enterprise SONiC distribution, giving users a simple, plug-and-play, turnkey deployment.

Why Asterfusion Enterprise SONiC Distribution-Based Data Center Switches?


Asterfusion CX-N switches offer a commercial-grade, SONiC-based NOS with enhanced features and usability over the community version.
  
The Asterfusion CX-N series achieves ultra-low latency of around 500ns in most application scenarios, delivering exceptional performance.
  
As an emerging white box leader, Asterfusion delivers high-performance products at highly competitive prices—helping you cut both CapEx and OpEx.
  
Asterfusion provides a full range of SONiC switches from 10G to 800G with enhanced features beyond the community version, ensuring smooth operation without compatibility concerns.

Asterfusion Enterprise SONiC NOS Based Low Latency Data Center Switches from 2Tbps to 51.2Tbps


Asterfusion provides robust and reliable SONiC-based solutions, meticulously tested and hardened to guarantee seamless deployment across our full portfolio of open networking data center switches. Our commercial offering — AsterNOS — enhances the SONiC experience with richer features, intuitive usability, and enterprise-grade stability, going far beyond the capabilities of the community version. Whether you’re building AI/ML training clusters, scaling HPC environments, or architecting RoCE/NVMe-based lossless data center fabrics, Asterfusion delivers the high-performance switching foundation you need — with ultra-low latency, high throughput, and intelligent congestion control built in.
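
To ground the lossless RoCE fabric claim, here is a minimal sketch of the kind of QoS settings such a fabric relies on: PFC on the lossless priorities plus an ECN/WRED profile. It builds a config_db-style JSON fragment in Python; the table names, fields, and thresholds (PORT_QOS_MAP, WRED_PROFILE, QUEUE, the AZURE_LOSSLESS profile) follow community SONiC conventions and are assumptions here, not AsterNOS documentation.

import json

# Hypothetical sketch: a config_db-style fragment that enables PFC on the
# lossless priorities and attaches an ECN/WRED profile to the matching queues,
# as is typically done for RoCEv2 fabrics on SONiC-based switches.
# Table and field names follow community SONiC conventions and are assumptions;
# AsterNOS syntax and recommended thresholds may differ.

LOSSLESS_PRIORITIES = "3,4"          # traffic classes protected by PFC
PORTS = ["Ethernet0", "Ethernet4"]   # example front-panel ports

config_fragment = {
    "PORT_QOS_MAP": {                # enable PFC per port
        port: {"pfc_enable": LOSSLESS_PRIORITIES} for port in PORTS
    },
    "WRED_PROFILE": {                # ECN marking thresholds (bytes)
        "AZURE_LOSSLESS": {
            "ecn": "ecn_all",
            "green_min_threshold": "250000",
            "green_max_threshold": "1048576",
            "green_drop_probability": "5",
        }
    },
    "QUEUE": {                       # bind the profile to queues 3 and 4
        f"{port}|{q}": {"wred_profile": "AZURE_LOSSLESS"}
        for port in PORTS for q in ("3", "4")
    },
}

print(json.dumps(config_fragment, indent=2))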

Data Center Infrastructure Architecture


Asterfusion Leaf and Spine Topology for Data Center or CORD

Asterfusion Parallel Inference Network
  • Manages and optimizes the communication of parallel computation workloads across GPU nodes during AI/ML inference tasks, including KV caches, activations, and tensor slices (a rough fabric-sizing sketch follows this list)
  • Connects only to inference GPU nodes, operating in complete isolation from other networks
  • Fully supports RoCE, ensuring lossless transmission of xCCL (e.g., NVIDIA's NCCL) and MPI traffic
  • Product portfolio: CX864E-N or CX732Q-N
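
For readers sizing such a fabric, the snippet below is a minimal, generic leaf-spine sizing sketch (not an Asterfusion tool): given per-switch port counts and a target oversubscription ratio, it estimates how many leaf and spine switches a pod needs. The port counts in the example are illustrative placeholders rather than CX-series datasheet values.

import math

def size_leaf_spine(gpu_nics, leaf_ports, spine_ports, oversubscription=1.0):
    """Rough leaf-spine sizing for one pod (illustrative only).

    gpu_nics         -- number of GPU/server NICs to attach
    leaf_ports       -- usable ports per leaf switch
    spine_ports      -- usable ports per spine switch
    oversubscription -- downlink:uplink bandwidth ratio (1.0 = non-blocking)
    """
    # Split each leaf's ports between server-facing downlinks and spine uplinks.
    uplinks_per_leaf = math.ceil(leaf_ports / (1 + oversubscription))
    downlinks_per_leaf = leaf_ports - uplinks_per_leaf

    leaves = math.ceil(gpu_nics / downlinks_per_leaf)
    # Minimum spines so that aggregate spine ports can terminate every leaf uplink;
    # each leaf then spreads its uplinks evenly across the spines.
    spines = math.ceil(leaves * uplinks_per_leaf / spine_ports)
    return {
        "leaves": leaves,
        "spines": spines,
        "downlinks_per_leaf": downlinks_per_leaf,
        "uplinks_per_leaf": uplinks_per_leaf,
    }

# Example: 512 GPU NICs on hypothetical 64-port leaves and spines, non-blocking.
print(size_leaf_spine(gpu_nics=512, leaf_ports=64, spine_ports=64))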

Asterfusion AI Storage Network
  • Provides storage and retrieval of machine learning models, input/output data, intermediate results, and logs, enabling data access and management for GPU-based inference workloads
  • Connects GPU nodes and storage nodes
  • Fully supports RoCE, ensuring lossless transmission of GPU-Direct RDMA, Ceph, and related traffic
  • Product portfolio: CX664D-N

  • Orchestrates the distribution of user requests to inference nodes and delivers results, supporting multimodal input and output
  • Connects to API Gateway, Load Balancer, and GPU nodes
  • Fully supports BGP and VXLAN EVPN to realize multi-tenant isolation, leveraging ECMP and BGP multihoming for high reliability (a toy ECMP illustration follows this list)
  • Product portfolio: CX564P-N, CX532P-N, CX308P-48Y-N-V2
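
The ECMP load sharing mentioned above can be illustrated with a toy flow-hashing example: each flow's five-tuple is hashed to pick one spine uplink, so a single flow stays on one path while many flows spread across all paths. This is a generic Python illustration, not the hash algorithm implemented in the switch ASIC.

import hashlib
from collections import Counter

def ecmp_next_hop(five_tuple, next_hops):
    """Pick a next hop for a flow by hashing its 5-tuple (toy illustration).

    Real ASICs use hardware hash functions with configurable fields and seeds;
    this only demonstrates ECMP's flow-to-path pinning behavior.
    """
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return next_hops[int.from_bytes(digest[:4], "big") % len(next_hops)]

spines = ["spine1", "spine2", "spine3", "spine4"]

# 1000 synthetic flows from one leaf: vary the source port, keep the rest fixed
# (UDP 4791 is the RoCEv2 destination port).
flows = [("10.0.1.10", "10.0.2.20", sport, 4791, "UDP") for sport in range(1000, 2000)]

print(Counter(ecmp_next_hop(f, spines) for f in flows))   # roughly even spread
print(ecmp_next_hop(flows[0], spines) == ecmp_next_hop(flows[0], spines))  # same flow -> same path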

  • Manages CPUs, GPUs, storage, and network devices for scheduling, orchestration, and configuration
  • Connects CPUs, GPUs, storage nodes, and network devices
  • Product portfolio: CX306P-48Y-M, CX206Y-48GT-M-H, or other products of the same category

  • Collects telemetry data from CPUs, GPUs, storage, and network devices for monitoring
  • Connects CPUs, GPUs, storage nodes, and network devices
  • Product portfolio: CX532P-N, CX308P-48Y-N-V2

  • Handles distributed storage system functions such as sharding, replication, backup, and consistency
  • Connects only to storage nodes, fully isolated from other networks
  • Product portfolio: CX664D-N