
CX532P-N

32 x 100G QSFP28 Data Center Low-Latency Leaf/Spine Switch with RoCEv2 & EVPN Multihoming, Enterprise SONiC NOS, Marvell Teralynx 7

  • Powered by Asterfusion’s Enterprise SONiC Distribution
  • Industry-leading ultra-low latency: ~500 ns with Teralynx 7
  • Supports RoCEv2, EVPN Multihoming, BGP EVPN, INT, and more
  • Purpose-built for AI, ML, HPC, and data center applications
  • Superior latency, packet forwarding rate, and packet buffers compared to competitors

100GbE Data Center Switch

Low-Latency Enterprise SONiC 32-Port 100G QSFP28 Data Center Leaf/Spine Switch

The Asterfusion CX532P-N is a 32x100G QSFP28 data center switch, engineered for Top-of-Rack (ToR) and spine architectures. Driven by the cutting-edge Marvell Teralynx 7 ASIC, it delivers blazing-fast line-rate L2/L3 switching performance up to 3.2 Tbps, coupled with ultra-low latency and unmatched packet processing power. With latency slashed to an incredible 500 nanoseconds, the CX532P-N is tailor-made for latency-sensitive workloads like AI, HPC, and machine learning—ensuring rock-solid, high-throughput performance even under intense small-packet traffic bursts.

Key Facts

500 ns

Ultra-Low Latency

6,300 Mpps

Packet Forwarding Rate

70 MB

Packet Buffer

Optimized Software for Advanced Workloads – AsterNOS

Pre-installed with Asterfusion’s Enterprise SONiC distribution (AsterNOS), the CX532P-N supports advanced features such as RoCEv2 and EVPN multihoming, combining lightning-fast forwarding rates with a deep 70 MB packet buffer. The CX532P-N doesn’t just perform: it outperforms, leaving competing products in the dust.

Virtualization

VXLAN
BGP EVPN
EVPN Multihoming

Zero Packet Loss

RoCEv2 for AI/ML/HPC
PFC/PFC Watchdog/ECN
QCN/DCQCN/DCTCP

High Reliability

ECMP/Elastic Hash
MC-LAG
BFD for BGP & OSPF

Ops Automation

Python & Ansible, ZTP
In-band Network Telemetry
SPAN/ERSPAN Monitoring

Specifications

Ports: 32x 100G QSFP28 + 2x 10G SFP+
Switch Chip: Marvell Teralynx 7 IVM77500
CPU: Intel Broadwell-DE D-1508
Number of VLANs: 4,094
Switching Capacity: 3.2 Tbps
MAC Address Entries: 44,000
Packet Forwarding Rate: 6,300 Mpps
Jumbo Frame: 9,216 bytes
SDRAM: 16 GB DDR4
ARP Entries: 36,000
SSD Memory: 128 GB
IPv4/IPv6 Routes: IPv4: 96,000 host + 72,000 prefix; IPv6: 47,000 host + 36,000 prefix
Packet Buffer: 70 MB
Latency: ~500 ns
Hot-swappable AC Power Supplies: 2 (1+1 redundancy)
Max. Power Consumption: <710 W
Hot-swappable Fans: 5 (4+1 redundancy)
MTBF: >100,000 hours
Dimensions (H x W x D): 44 x 440 x 515 mm
Rack Units: 1 RU
Weight: 12 kg
Thermal Airflow: Front-to-rear

Quality Certifications

ISO 9001 (SGS)
ISO 14001 (SGS)
ISO 45001 (SGS)
ISO/IEC 27001 (SGS)
FCC
CE
2-Year Warranty

Features

Marvell Teralynx 7 Inside: Unleash Industry-Leading Performance & Ultra-Low Latency

With Marvell Teralynx 7 at its core, Asterfusion 100G switches achieve blazing-fast sub-500 ns latency, perfect for AI/ML, HPC, and NVMe workloads where speed drives results.
3.2 Tbps

Switching Capacity

6,300 Mpps

Packet Forwarding Rate

500 ns

End-to-end Latency

70 MB

Packet Buffer

Full RoCEv2 Support for Ultra-Low Latency and Lossless Networking

Asterfusion 100G switches deliver full RoCEv2 support with ultra-low latency and near-zero CPU overhead. Combined with PFC and ECN for advanced congestion control, they enable a truly lossless network.
[Figure: RoCEv2 vs. TCP]

Asterfusion Network Monitoring & Visualization

AsterNOS supports Node Exporter to send CPU, traffic, packet loss, latency, and RoCE congestion metrics to Prometheus.
Paired with Grafana, it enables real-time, visual insight into network performance.
[Figure: Prometheus paired with Grafana]
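
As a rough illustration (not an official Asterfusion tool), the Python sketch below pulls two standard Node Exporter metrics from a Prometheus server over its HTTP query API; the Prometheus address and the instance label are placeholder assumptions to adapt to your own deployment.

```python
# Minimal sketch: query Prometheus for switch metrics scraped from Node Exporter.
# Assumptions: Prometheus runs at PROM_URL and already scrapes the switch's
# Node Exporter; the metric names are standard Node Exporter metrics, but the
# instance label ("cx532p-n:9100") is a placeholder for your own setup.
import requests

PROM_URL = "http://prometheus.example.com:9090"   # placeholder address

QUERIES = {
    "cpu_busy_percent":
        '100 - avg(rate(node_cpu_seconds_total{mode="idle",instance="cx532p-n:9100"}[5m])) * 100',
    "rx_bits_per_second":
        'rate(node_network_receive_bytes_total{instance="cx532p-n:9100"}[1m]) * 8',
}

def query(expr: str):
    """Run an instant query against the Prometheus HTTP API and return the result set."""
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": expr}, timeout=10)
    resp.raise_for_status()
    return resp.json()["data"]["result"]

if __name__ == "__main__":
    for name, expr in QUERIES.items():
        for series in query(expr):
            device = series["metric"].get("device", "")
            _, value = series["value"]        # result value is [timestamp, value-as-string]
            print(f"{name} {device}: {float(value):.2f}")
```

The same PromQL expressions can be saved as Grafana panels to build the real-time dashboards described above.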

O&M in Minutes, Not Days

Automate with Easy RoCE, ZTP, Python and Ansible, SPAN/ERSPAN monitoring, and more, cutting configuration time, errors, and costs.
[Figure: data center switch operation and management]
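
For teams that script their own O&M, a minimal Python sketch along the following lines can batch-collect state from the switch over SSH. It assumes standard SSH CLI access; the host, credentials, and example show commands are placeholders to adapt to your environment (Ansible playbooks or the REST API are equally valid paths).

```python
# Minimal sketch: collect operational state from the switch over SSH.
# Assumptions: SSH access to the switch CLI is enabled; the host, credentials,
# and example "show" commands below are placeholders for your own environment.
import paramiko

SWITCH = {"hostname": "10.0.0.10", "username": "admin", "password": "changeme"}
COMMANDS = ["show version", "show interfaces status"]   # example commands

def run_commands(device: dict, commands: list[str]) -> dict[str, str]:
    """Open one SSH session and return the output of each command."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(**device, timeout=10)
    try:
        results = {}
        for cmd in commands:
            _stdin, stdout, _stderr = client.exec_command(cmd)
            results[cmd] = stdout.read().decode()
        return results
    finally:
        client.close()

if __name__ == "__main__":
    for cmd, output in run_commands(SWITCH, COMMANDS).items():
        print(f"===== {cmd} =====\n{output}")
```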

Comparison of Data Center Switches

| Feature | CX864E-N | CX732Q-N | CX664D-N | CX564P-N | CX532P-N |
| Port Speeds | 64x800G, 2x10G | 32x400G, 2x10G | 64x200G, 2x10G | 64x100G, 2x10G | 32x100G, 2x10G |
| Hot-swappable PSUs and FANs | 1+1 PSUs, 3+1 FANs | 1+1 PSUs, 5+1 FANs | 1+1 PSUs, 3+1 FANs | 1+1 PSUs, 3+1 FANs | 1+1 PSUs, 4+1 FANs |
| MC-LAG | ✓ | ✓ | ✓ | ✓ | ✓ |
| BGP/MP-BGP | ✓ | ✓ | ✓ | ✓ | ✓ |
| EVPN | ✓ | ✓ | ✓ | ✓ | ✓ |
| BGP EVPN-Multihoming | ✓ | ✓ | ✓ | ✓ | ✓ |
| RoCEv2 | ✓ | ✓ | ✓ | ✓ | ✓ |
| PFC | ✓ | ✓ | ✓ | ✓ | ✓ |
| ECN | ✓ | ✓ | ✓ | ✓ | ✓ |
| In-band Telemetry | ✓ | ✓ | ✓ | ✓ | ✓ |
| Packet Spray | ✓ | × | × | × | × |
| Flowlet | ✓ | × | × | × | × |
| Auto Load Balancing | ✓ | × | × | × | × |
| INT-based Routing | ✓ | × | × | × | × |
| WCMP | ✓ | × | × | × | × |

Panel

[Panel views of the 32-port 100G data center switch]

What’s in the Box

FAQs

Does the switch support RoCEv2? What specific ECN, PFC, and DCB features are supported?
Yes. The switch supports RoCEv2, ECN/PFC/DCBX, and Easy RoCE, which configures these parameters with one click.
Which RDMA-capable NICs (e.g., Mellanox ConnectX-4/5/6, Broadcom, Intel) and drivers have been validated on your CX-N series switches?

All major RDMA-capable NICs, including Mellanox ConnectX-4/5/6, Broadcom, and Intel, as well as their standard drivers, have been validated on our CX-N series switches.

Do you have any recommendations for setting up NVMe-oF, or any documentation/case studies?

We have successfully deployed NVMe-oF solutions for several customers. Please contact bd@cloudswit.ch to request the case studies.

We want to use your switches for AI inference. Our working AI inference pipeline looks roughly like this: Camera (SRT) -> Compute Node NIC -> GPU -> [Compute Node NIC -> NVMe-oF + Compute Node NIC -> other nodes for processing]. If you have past setups similar to this, are there any solutions for bringing the camera SRT feed or GigE feed into RoCEv2 congestion management?

Yes. By marking or prioritizing the camera SRT or GigE traffic at switch ingress or egress, the switches can treat it as high priority and apply PFC/ECN-based congestion management, similar to RDMA traffic. Note, however, that SRT implements its own end-to-end congestion control at the application layer and does not use or require DCQCN, which is specific to RDMA flows.
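
To make the marking step concrete, the generic Python sketch below (not tied to any camera vendor or to AsterNOS) shows how a sender can set a DSCP value on its UDP socket so the switch can classify the SRT/GigE feed into a high-priority queue protected by PFC/ECN; the DSCP value, address, and port are assumptions that must match the DSCP-to-queue mapping configured on the switch.

```python
# Minimal sketch: mark outgoing camera/SRT UDP traffic with a DSCP code point
# so the switch can map it into a high-priority, PFC/ECN-managed queue.
# Assumptions: DSCP 46 (EF), the destination address, and the port are
# placeholders; the value must match the switch's DSCP-to-queue mapping.
import socket

DSCP_EF = 46                 # Expedited Forwarding, used here as an example
TOS = DSCP_EF << 2           # DSCP occupies the upper 6 bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)   # Linux: set ToS/DSCP on egress packets

sock.sendto(b"example payload", ("192.0.2.20", 9000))    # placeholder receiver
sock.close()
```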

For a configuration of 8–10 cameras and 3–4 compute nodes plus one NVMe-oF server, do you think a single switch would suffice, or should we consider a hierarchical switch setup?

Two switches are recommended for this scale to provide dual-homing for all devices, ensuring both redundancy and high availability for your deployment.

Does the switch support containerized applications (e.g., PB, INT, ZTP)? Is it based on the FRR routing stack?
Our AsterNOS inherits SONiC’s containerized architecture, fully supporting containerized applications such as Packet Broker and In-band Network Telemetry. Additionally, based on the FRRouting (FRR) routing stack, it provides robust and reliable control plane functionality.
Are automation interfaces such as gNMI, gRPC, REST API, and NetConf supported? Can the switch be integrated with OpenStack, Kubernetes, and Ansible?
Our AsterNOS fully supports these automation interfaces, providing flexible network management and configuration capabilities and enabling integration with mainstream automation and orchestration platforms, including OpenStack, Kubernetes, and Ansible.
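
As a rough illustration of REST-based integration, the Python sketch below drives a hypothetical VLAN resource with the requests library; the endpoint path, payload schema, and token authentication are placeholder assumptions rather than the documented AsterNOS API, so consult the AsterNOS API reference for the actual resource names.

```python
# Minimal sketch: configure a switch through a REST API from Python.
# Assumptions: HTTPS management access is enabled; the endpoint path, JSON
# payload, and bearer-token auth are hypothetical placeholders, not the
# documented AsterNOS schema.
import requests

BASE_URL = "https://10.0.0.10/api"                 # placeholder management address
HEADERS = {"Authorization": "Bearer <token>"}      # placeholder credential

def set_vlan(vlan_id: int, description: str) -> None:
    """Create or update a VLAN via a hypothetical REST resource."""
    payload = {"vlan-id": vlan_id, "description": description}
    resp = requests.put(
        f"{BASE_URL}/vlans/{vlan_id}",
        json=payload,
        headers=HEADERS,
        verify=False,   # lab only: skip TLS verification for self-signed certificates
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    set_vlan(100, "storage-leaf")
```

The same task could equally be driven through gNMI, NetConf, or an Ansible playbook.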
Is the switch compatible with mainstream 100G optical modules on the market (e.g., LR4, SR4, CWDM4, ZR, ZR4)?
We support 100G-SR4, LR4, CWDM4, ZR, and ZR4 standards and are compatible with mainstream vendor modules. Our design follows standard specifications, and hardware tuning is required during integration. During testing we use 5 W modules and have reserved a sufficient power budget; since typical 100G ZR4 modules on the market consume around 6 W, there is no specific limitation on the number of supported modules.