
CX564P-N

64 x 100G QSFP28 Data Center Low Latency Leaf/Spine Switch Enterprise SONiC NOS Installed

  • Powered by Asterfusion’s Enterprise SONiC Distribution
  • Industry-leading ultra-low latency: 500ns with Teralynx 7
  • Support: RoCEv2, EVPN Multihoming, BGP-EVPN, INT, and more
  • Purpose-built for AI, ML, HPC, and Data Center applications

100GbE Data Center Switch

Enterprise SONiC 64-Port 100G QSFP28 Data Center Leaf/Spine Switch Low Latency 

The Asterfusion CX564-N is a powerhouse 64x100G QSFP28 data center switch, engineered for leaf and spine architectures. Driven by the cutting-edge Marvell Teralynx 7 ASIC, it delivers blazing-fast line-rate L2/L3 switching performance up to 6.4 Tbps, coupled with ultra-low latency and unmatched packet processing power.
With latency slashed to an incredible 500 nanoseconds, the CX564-N is tailor-made for latency-sensitive workloads like AI, HPC, and machine learning—ensuring rock-solid, high-throughput performance even under intense small-packet traffic bursts.


Key Facts

  • 500ns Ultra Low Latency
  • 7600 Mpps Packet Forwarding Rate
  • 70 MB Packet Buffer


Optimized Software for Advanced Workloads – AsterNOS

Pre-installed with Enterprise SONiC, it supports advanced features such as RoCEv2 and EVPN multihoming, combining lightning-fast forwarding rates with massively oversized packet buffers. The CX564-N doesn’t just perform — it outperforms, leaving competing products in the dust.

Virtualization

VXLAN
BGP EVPN
EVPN Multihoming

Zero Packet Loss

RoCEv2 for AI/ML, HPC, Storage
PFC/PFC Watchdog/ECN
QCN/DCQCN/DCTCP

High Reliability

ECMP/Elastic Hash
MC-LAG
BFD for BGP & OSPF

Automated O&M

Python & Ansible, ZTP
In-band Network Telemetry
SPAN/ERSPAN Monitoring


Specifications

Ports: 64x 100G QSFP28, 2x 10G SFP+
Switch Chip: Marvell Teralynx 7 IVM77500
CPU: Broadwell-DE D-1508
Number of VLANs: 4,094
Switching Capacity: 6.4 Tbps
MAC Address Entries: 44,000
Packet Forwarding Rate: 7,600 Mpps
Jumbo Frame: 9,216 bytes
SDRAM: DDR4 16 GB
SSD: 128 GB
ARP Entries: 36,000
IPv4/IPv6 Routes: IPv4: 96,000 host + 72,000 prefix; IPv6: 47,000 host + 36,000 prefix
Packet Buffer: 70 MB
Latency: ~500 ns
Hot-swappable AC Power Supplies: 2 (1+1 redundancy)
Max. Power Consumption: <820 W
Hot-swappable Fans: 4 (3+1 redundancy)
MTBF: >100,000 hours
Dimensions (HxWxD): 88.8 x 440 x 560 mm
Rack Units: 2 RU
Weight: 16 kg
Thermal Airflow: Front-to-rear

Quality Certifications

ISO 9001 (SGS)
ISO 14001 (SGS)
ISO 45001 (SGS)
ISO/IEC 27001 (SGS)
FCC
CE

Features

Marvell Teralynx 7 Inside: Unleash Industry-Leading Performance & Ultra-Low Latency

With Marvell Teralynx 7 at its core, Asterfusion 100G switches achieve blazing-fast sub-500ns latency—perfect for AI/ML, HPC, and NVMe-oF workloads where speed drives results.
  • 6.4 Tbps Switching Capacity
  • 7600 Mpps Packet Forwarding Rate
  • 500ns End-to-end Latency
  • 70 MB Buffer Size


Full RoCEv2 Support for Ultra-Low Latency and Lossless Networking

Asterfusion 100G switches deliver full RoCEv2 support with ultra-low latency and near-zero CPU overhead. Combined with PFC and ECN for advanced congestion control, they enable a truly lossless network.
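As a rough sketch of how lossless-queue settings are expressed on a SONiC-based NOS, a config_db.json fragment enabling PFC on priorities 3–4 and attaching an ECN/WRED profile might look like the following. Field names follow the community SONiC schema; the port name, profile name, and thresholds are placeholders, and AsterNOS specifics may differ:

```json
{
  "PORT_QOS_MAP": {
    "Ethernet0": { "pfc_enable": "3,4" }
  },
  "WRED_PROFILE": {
    "AZURE_LOSSLESS": {
      "ecn": "ecn_all",
      "green_min_threshold": "1048576",
      "green_max_threshold": "2097152",
      "green_drop_probability": "5"
    }
  },
  "QUEUE": {
    "Ethernet0|3-4": { "wred_profile": "AZURE_LOSSLESS" }
  }
}
```

PFC pauses the lossless priorities before the buffer overflows, while the WRED/ECN profile marks packets between the two thresholds so RoCEv2 senders slow down before any pause is needed.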

Asterfusion Network Monitoring & Visualization

AsterNOS supports Node Exporter to send CPU, traffic, packet loss, latency, and RoCE congestion metrics to Prometheus.
Paired with Grafana, it enables real-time, visual insight into network performance.
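Assuming each switch exposes Node Exporter on its default port (9100), a minimal Prometheus scrape configuration might look like this; the job name and hostnames are placeholders:

```yaml
scrape_configs:
  - job_name: "asternos-switches"     # hypothetical job name
    scrape_interval: 15s
    static_configs:
      - targets:
          - "leaf1.example.net:9100"  # Node Exporter's default port
          - "spine1.example.net:9100" # hostnames are placeholders
```

Grafana can then be pointed at the Prometheus server as a data source to chart the collected metrics.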

O&M in Minutes, Not Days

Automate with Easy RoCE, ZTP, Python and Ansible, SPAN/ERSPAN monitoring & more—cutting config time, errors, and costs.
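As one illustration of the Ansible-driven workflow, a minimal playbook that backs up the running configuration from a group of SONiC switches over SSH could look like this. The inventory group and output path are placeholders; `show runningconfiguration all` is the standard SONiC CLI command for dumping the active configuration:

```yaml
# Sketch: back up the running config of each SONiC switch to the control node.
- name: Back up SONiC switch configuration
  hosts: sonic_switches           # placeholder inventory group
  gather_facts: false
  tasks:
    - name: Capture the running configuration
      ansible.builtin.command: show runningconfiguration all
      register: running_config
      changed_when: false

    - name: Save the configuration on the control node
      ansible.builtin.copy:
        content: "{{ running_config.stdout }}"
        dest: "backups/{{ inventory_hostname }}.json"
      delegate_to: localhost
```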

Comparison of Data Center Switches

Model: CX864E-N | CX732Q-N | CX664D-N | CX564P-N | CX532P-N
Port Speeds: 64x800G, 2x10G | 32x400G, 2x10G | 64x200G, 2x10G | 64x100G, 2x10G | 32x100G, 2x10G
Hot-swappable PSUs and FANs: 1+1 PSUs, 3+1 FANs | 1+1 PSUs, 5+1 FANs | 1+1 PSUs, 3+1 FANs | 1+1 PSUs, 3+1 FANs | 1+1 PSUs, 4+1 FANs
MC-LAG: ✓ | ✓ | ✓ | ✓ | ✓
BGP/MP-BGP: ✓ | ✓ | ✓ | ✓ | ✓
EVPN: ✓ | ✓ | ✓ | ✓ | ✓
BGP EVPN-Multihoming: ✓ | ✓ | ✓ | ✓ | ✓
RoCEv2: ✓ | ✓ | ✓ | ✓ | ✓
PFC: ✓ | ✓ | ✓ | ✓ | ✓
ECN: ✓ | ✓ | ✓ | ✓ | ✓
In-band Telemetry: ✓ | ✓ | ✓ | ✓ | ✓
Packet Spray: ✓ | × | × | × | ×
Flowlet: ✓ | × | × | × | ×
Auto Load Balancing: ✓ | × | × | × | ×
INT-based Routing: ✓ | × | × | × | ×
WCMP: ✓ | × | × | × | ×

Panel

What’s in the Box

FAQs

What are the forwarding performance and total bandwidth of the 64×100G switch?
Under no packet loss conditions, our switch’s performance test metrics are: Packet Forwarding Rate: 7600 Mpps, Bandwidth: 6.4 Tbps.
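Those two figures are consistent: dividing 6.4 Tbps by 7,600 Mpps gives the wire bits available per packet, and subtracting the standard 20-byte Ethernet wire overhead (8-byte preamble + 12-byte inter-frame gap, an assumption not stated in the datasheet) yields the smallest frame the switch can forward at full line rate, roughly 85 bytes:

```python
# Back-of-envelope check of the quoted figures: 6.4 Tbps capacity vs. the
# 7600 Mpps forwarding rate. The 20-byte per-frame wire overhead (8-byte
# preamble + 12-byte inter-frame gap) is a standard Ethernet assumption.
CAPACITY_BPS = 6.4e12        # 64 ports x 100 Gbps
RATE_PPS = 7.6e9             # 7600 Mpps
WIRE_OVERHEAD_BYTES = 20     # preamble + inter-frame gap

bits_per_packet_slot = CAPACITY_BPS / RATE_PPS            # ~842 wire bits per packet
min_frame_bytes = bits_per_packet_slot / 8 - WIRE_OVERHEAD_BYTES

print(f"Line rate holds for frames of ~{min_frame_bytes:.0f} bytes and up")
```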
Which RDMA-capable NICs (e.g., Mellanox ConnectX-4/5/6, Broadcom, Intel) and drivers have been validated on your CX-N series switches?

All major RDMA-capable NICs, including Mellanox ConnectX-4/5/6, Broadcom, and Intel, as well as their standard drivers, have been validated on our CX-N series switches.

If you have any recommendations for setting up an NVMe-oF, any documentation/case studies?

We have successfully deployed NVMe-oF solutions for several customers. Please contact bd@cloudswit.ch for case studies.

We want to use your switches for AI inference. Our working AI inference setup looks somewhat like: Camera (SRT) -> Compute Node NIC -> GPU -> [Compute Node NIC -> NVMe-oF + Compute Node NIC -> Other Nodes for processing]. If you have any similar past setups, are there solutions to include a camera SRT feed or GigE feed in RoCEv2 congestion management?

Yes, by marking or prioritizing the camera SRT or GigE traffic at the switch egress or ingress, the switches can treat it as high-priority and apply PFC/ECN-based congestion management, similar to RDMA traffic. However, SRT itself implements its end-to-end congestion control at the application layer and does not utilize or require DCQCN, which is specific to RDMA flows.

What are the key advantages of Asterfusion’s 64×100G data center switch based on the Marvell Teralynx 7 chip?
Compared with Broadcom Tomahawk2 (packet buffer 42MB, forwarding rate 4200 Mpps), the TL7-based Asterfusion 64x100G has a larger packet buffer (70MB) and a higher packet forwarding rate (7600 Mpps).
Does the solution support BGP-EVPN with VXLAN overlay? Can it be used to build a large Layer-2 fabric in a Spine-Leaf topology?
Our AsterNOS provides BGP-EVPN and VXLAN Overlay support, supporting 1,000 VXLAN tunnels and 4,000 VXLAN gateways, and meets the needs of modern data centers for flexible and efficient large-scale Layer 2 networks.
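For orientation, on a SONiC-based NOS the underlying FRR routing stack typically expresses BGP-EVPN along the lines of the fragment below. The ASNs and neighbor address are placeholders, and the AsterNOS CLI may wrap this differently:

```
router bgp 65001
 ! placeholder leaf ASN and spine peer address
 neighbor 10.0.0.1 remote-as 65000
 !
 address-family l2vpn evpn
  neighbor 10.0.0.1 activate
  ! advertise every locally defined VXLAN VNI as EVPN routes
  advertise-all-vni
 exit-address-family
```

With `advertise-all-vni`, each leaf advertises its local VNIs as EVPN type-2/type-3 routes, letting VXLAN tunnels form automatically across the spine-leaf fabric.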