CX564P-N
64 x 100G QSFP28 Low-Latency Data Center Leaf/Spine Switch, Enterprise SONiC NOS Pre-installed
- Powered by Asterfusion's Enterprise SONiC Distribution
- Industry-leading ultra-low latency: 500 ns with Teralynx 7
- Support for RoCEv2, EVPN Multihoming, BGP-EVPN, INT, and more
- Purpose-built for AI, ML, HPC, and data center applications
100GbE Data Center Switch
Enterprise SONiC 64-Port 100G QSFP28 Low-Latency Data Center Leaf/Spine Switch
The Asterfusion CX564P-N is a powerhouse 64 x 100G QSFP28 data center switch, engineered for leaf and spine architectures. Driven by the cutting-edge Marvell Teralynx 7 ASIC, it delivers line-rate L2/L3 switching of up to 6.4 Tbps, coupled with ultra-low latency and exceptional packet processing power.
With latency as low as 500 nanoseconds, the CX564P-N is tailor-made for latency-sensitive workloads such as AI, HPC, and machine learning, delivering rock-solid, high-throughput performance even under intense small-packet traffic bursts.
KEY POINTS
- 500 ns Latency
- 7600 Mpps Packet Forwarding Rate
- 70 MB Packet Buffer
Optimized Software for Advanced Workloads – AsterNOS
Pre-installed with AsterNOS, Asterfusion's Enterprise SONiC distribution, the CX564P-N supports advanced features such as RoCEv2 and EVPN multihoming, pairing lightning-fast forwarding rates with a deep 70 MB packet buffer. It doesn't just perform; it outperforms, leaving competing products in the dust.
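To give a feel for how such an overlay could be described, the sketch below renders a minimal VXLAN tunnel with one VNI-to-VLAN mapping as community-SONiC-style CONFIG_DB JSON. The table names and every address, VLAN, and VNI value here are illustrative assumptions rather than a verified AsterNOS configuration, and the companion BGP-EVPN routing setup (FRR) is omitted; consult the AsterNOS configuration guide for the exact schema.

```python
import json

# Minimal sketch: one VTEP with a single L2 VNI mapped to a VLAN,
# expressed as community-SONiC-style CONFIG_DB tables.
# Table/field names and all values are illustrative assumptions,
# not a verified AsterNOS configuration.
vxlan_overlay = {
    "VXLAN_TUNNEL": {
        # Hypothetical VTEP named "vtep1", sourced from a loopback address.
        "vtep1": {"src_ip": "10.0.0.11"}
    },
    "VXLAN_EVPN_NVO": {
        # Bind the EVPN NVO instance to that VTEP.
        "nvo1": {"source_vtep": "vtep1"}
    },
    "VXLAN_TUNNEL_MAP": {
        # Map VNI 10100 to the server-facing VLAN 100.
        "vtep1|map_10100_Vlan100": {"vni": "10100", "vlan": "Vlan100"}
    },
}

if __name__ == "__main__":
    # Print a CONFIG_DB-style JSON fragment that can be reviewed
    # before it is ever loaded onto a switch.
    print(json.dumps(vxlan_overlay, indent=2))
```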
Virtualization
- VXLAN
- BGP EVPN
- EVPN Multihoming
Zero Packet Loss
- RoCEv2 for AI/ML, HPC, Storage
- PFC/PFC Watchdog/ECN
- QCN/DCQCN/DCTCP
High Reliability
- ECMP/Elastic Hash
- MC-LAG
- BFD for BGP & OSPF
Automated O&M
- Python & Ansible, ZTP (see the sketch after this list)
- In-band Network Telemetry
- SPAN/ERSPAN Monitoring
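As a taste of the script-driven O&M listed above, here is a minimal Python sketch that pulls basic health state from a switch over SSH using standard SONiC show commands. The management address, credentials, and the choice of paramiko are assumptions for illustration; in practice the same checks are usually wrapped in Ansible playbooks or ZTP-provisioned tooling.

```python
import paramiko

# Hypothetical management address and credentials; replace with your own.
SWITCH_HOST = "192.0.2.10"
USERNAME = "admin"
PASSWORD = "YourPaSsWoRd"

# Standard SONiC show commands used as a quick read-only health check.
COMMANDS = [
    "show version",
    "show interfaces status",
    "show ip bgp summary",
]

def collect_state(host: str, username: str, password: str) -> dict:
    """Run read-only show commands over SSH and return their output."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, password=password)
    results = {}
    try:
        for cmd in COMMANDS:
            _stdin, stdout, _stderr = client.exec_command(cmd)
            results[cmd] = stdout.read().decode()
    finally:
        client.close()
    return results

if __name__ == "__main__":
    for cmd, output in collect_state(SWITCH_HOST, USERNAME, PASSWORD).items():
        print(f"===== {cmd} =====")
        print(output)
```

Extending the COMMANDS list, or looping over an inventory of leaf and spine switches, is all it takes to turn this into a fabric-wide audit.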
Specifications
Quality Certifications
Features
Marvell Teralynx 7 Inside: Unleash Industry-Leading Performance & Ultra-Low Latency

- 6.4 Tbps Switching Capacity
- 7600 Mpps Packet Forwarding Rate
- 500 ns End-to-end Latency
- 70 MB Buffer Size
Full RoCEv2 Support for Ultra-Low Latency and Lossless Networking
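A lossless RoCEv2 fabric relies on priority flow control for the RDMA traffic class and ECN marking on its queues, both of which the CX564P-N supports in hardware and exposes through AsterNOS. As an illustration only, the sketch below expresses that intent as community-SONiC-style CONFIG_DB tables for a single port; the table and field names follow the open-source SONiC QoS schema, the thresholds and port name are assumptions, and the exact AsterNOS syntax may differ.

```python
import json

# Minimal sketch: make traffic classes 3 and 4 lossless on one port and
# enable ECN marking on the matching queues. Community-SONiC-style table
# names; thresholds and the port name are illustrative assumptions.
lossless_qos = {
    "PORT_QOS_MAP": {
        # Enable PFC for priorities 3 and 4 on Ethernet0.
        "Ethernet0": {"pfc_enable": "3,4"}
    },
    "WRED_PROFILE": {
        # ECN-marking profile applied to the lossless queues.
        "ROCE_LOSSLESS": {
            "wred_green_enable": "true",
            "ecn": "ecn_all",
            "green_min_threshold": "250000",
            "green_max_threshold": "1000000",
            "green_drop_probability": "5",
        }
    },
    "QUEUE": {
        # Attach the ECN profile to queues 3-4 of Ethernet0.
        "Ethernet0|3-4": {"wred_profile": "ROCE_LOSSLESS"}
    },
}

if __name__ == "__main__":
    print(json.dumps(lossless_qos, indent=2))
```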

Asterfusion Network Monitoring & Visualization

O&M in Minutes, Not Days

Comparison of Data Center Switches
Panel


What's in the Box

FAQs
Q: Which RDMA NICs have been validated with the CX-N series?
A: All major RDMA-capable NICs, including Mellanox ConnectX-4/5/6, Broadcom, and Intel, along with their standard drivers, have been validated on our CX-N series switches.
Q: Do you have NVMe-oF deployment references?
A: We have successfully deployed NVMe-oF solutions for several customers. Please contact bd@cloudswit.ch for a case study.
Q: Can the switch prioritize camera SRT or GigE traffic alongside RDMA?
A: Yes. By marking or prioritizing the camera SRT or GigE traffic at switch ingress or egress, the switch can treat it as high priority and apply PFC/ECN-based congestion management, similar to RDMA traffic. However, SRT implements its own end-to-end congestion control at the application layer and does not use or require DCQCN, which is specific to RDMA flows.
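For illustration, here is a minimal sketch of the classification step described above, assuming the cameras (or an upstream device) mark SRT/GigE packets with a known DSCP value. The DSCP value, map names, and port are hypothetical, and the tables follow the open-source SONiC QoS schema rather than a verified AsterNOS configuration.

```python
import json

# Minimal sketch: classify DSCP 34 (hypothetical camera marking) into
# traffic class 4 and schedule that class on queue 4, using
# community-SONiC-style QoS map tables. All names/values are illustrative.
camera_qos = {
    "DSCP_TO_TC_MAP": {
        "CAMERA_MAP": {
            "34": "4",   # camera SRT/GigE traffic -> traffic class 4
            "0": "0",    # everything else stays best-effort
        }
    },
    "TC_TO_QUEUE_MAP": {
        "CAMERA_MAP": {
            "4": "4",
            "0": "0",
        }
    },
    "PORT_QOS_MAP": {
        # Apply both maps on the camera-facing port.
        "Ethernet4": {
            "dscp_to_tc_map": "CAMERA_MAP",
            "tc_to_queue_map": "CAMERA_MAP",
        }
    },
}

if __name__ == "__main__":
    print(json.dumps(camera_qos, indent=2))
```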