CX664D-N
64-Port 200G QSFP56 Low Latency Leaf Spine Switch for AI/ML/Data Center, Enterprise SONiC
- Powered by Asterfusion’s Enterprise SONiC Distribution
- Industry-leading ultra-low latency: 500ns with Teralynx 7
- Support: RoCEv2, EVPN Multihoming, BGP EVPN, INT, and more
- Purpose-built for AI, ML, HPC, and Data Center applications
200GbE Data Center Switch
Enterprise SONiC 64-Port 200G QSFP56 Low-Latency Data Center Leaf/Spine Switch
The Asterfusion CX664D-N is a high-density 64-port 200G QSFP56 data center switch engineered for leaf/spine architectures. Powered by the Marvell Teralynx 7 ASIC, it delivers line-rate L2/L3 switching at up to 12.8 Tbps, combined with ultra-low latency and exceptional packet-processing capability.
With latency as low as 500 nanoseconds, the CX664D-N is purpose-built for the most demanding, latency-sensitive workloads such as AI, HPC, and machine learning, delivering consistent high-throughput performance even during intense bursts of small-packet traffic.
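As a quick sanity check, the headline switching capacity follows directly from the port configuration, assuming line rate on all 64 ports:

64 ports × 200 Gb/s = 12,800 Gb/s = 12.8 Tb/s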
Key Facts
500ns
Ultra-Low Latency
7600 Mpps
Packet Forwarding Rate
70 MB
Packet Buffer
Optimized Software for Advanced Workloads – AsterNOS
Automated O&M
Python and Ansible, ZTP (see the automation sketch after this list)
In-band Network Telemetry
SPAN / ERSPAN Monitoring
High Reliability
ECMP / Elastic Hash
MC-LAG
BFD for BGP and OSPF
Zero Packet Loss
RoCEv2
PFC / PFC Watchdog / ECN
QCN / DCQCN / DCTCP
Virtualization
VXLAN
BGP EVPN
EVPN Multihoming
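As an illustration of the Automated O&M features above, the minimal sketch below shows one way to drive the switch from Python. It assumes the AsterNOS/SONiC command line is reachable over SSH and uses the third-party Paramiko library; the management IP, credentials, and command are placeholders, and the same task could equally be wrapped in an Ansible playbook.

import paramiko  # third-party SSH library: pip install paramiko

SWITCH_IP = "192.0.2.10"                   # placeholder management address
USERNAME, PASSWORD = "admin", "CHANGE_ME"  # placeholder credentials

def run_show_command(command: str = "show interfaces status") -> str:
    """Run a SONiC-style show command on the switch and return its output."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(SWITCH_IP, username=USERNAME, password=PASSWORD, timeout=10)
    try:
        _stdin, stdout, _stderr = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    print(run_show_command())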
Specifications
Quality Certifications
Features
Marvell Teralynx 7 Inside: Unleash Industry-Leading Performance & Ultra-Low Latency

12.8 Tbps
Switching Capacity
500ns
End-to-end Latency
70 MB
Buffer Size
Full RoCEv2 Support for Ultra-Low Latency and Lossless Networking

Asterfusion Network Monitoring & Visualization

O&M in Minutes, Not Days

Comparison of Data Center Switches
Panel


What’s in the Box

FAQs
All major RDMA-capable NICs, including Mellanox ConnectX-4/5/6, Broadcom, and Intel, as well as their standard drivers, have been validated on our CX-N series switches.
We have successfully deployed NVMe-oF solutions for several customers. Please contact bd@cloudswit.ch for a case study.
Yes. If the camera SRT or GigE traffic is marked or prioritized at switch ingress or egress, the switch can treat it as high priority and apply PFC/ECN-based congestion management, similar to RDMA traffic. However, SRT implements its own end-to-end congestion control at the application layer and does not use or require DCQCN, which is specific to RDMA flows.
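As an illustration of the "marking" step mentioned above, the short Python sketch below tags outgoing UDP packets (for example, an SRT camera feed) with DSCP 46 so that a switch configured for DSCP-based classification can place them in a high-priority, PFC/ECN-managed queue. The destination address, port, and DSCP value are placeholders, and the switch-side QoS mapping must be configured separately.

import socket

DSCP_EF = 46               # Expedited Forwarding code point (placeholder choice)
TOS_VALUE = DSCP_EF << 2   # DSCP occupies the upper 6 bits of the TOS byte

DEST = ("192.0.2.50", 9000)  # placeholder camera/receiver endpoint

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Mark outgoing packets so the switch can classify them by DSCP.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
sock.sendto(b"example payload", DEST)
sock.close()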