
CX532P-N-V2

32 x 100G QSFP28 Data Center Leaf/Spine Switch with RoCEv2 & EVPN Multi-homing, Enterprise SONiC NOS, Marvell Falcon

  • Powered by Asterfusion’s Enterprise SONiC Distribution
  • 288K IPv4 and 144K IPv6 routes on the Marvell Falcon ASIC
  • Supports RoCEv2, EVPN Multihoming, BGP-EVPN, INT, and more
  • Purpose-built for AI, ML, HPC, and Data Center applications.
  • Lower latency, higher packet forwarding rates, and larger buffers than competing switches.


Enterprise SONiC 32-Port 100G Data Center Leaf/Spine Switch

The Asterfusion CX532P-N-V2 is a 100G switch based on the Marvell Falcon ASIC, which offers one of the largest routing tables in its class: 288K IPv4 and 144K IPv6 routes. This makes it ideal for large multi-tenant networks and for routing east-west and north-south traffic at scale between VXLAN tunnel endpoints (VTEPs). It delivers line-rate L2/L3 switching at up to 2800 Mpps, backed by 128K MAC address entries, and its full BGP-EVPN support allows smooth integration of L2/L3 gateway functions into VXLAN-based spine-leaf topologies.
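As a concrete illustration of that BGP-EVPN integration, the sketch below renders a minimal FRR-style leaf configuration (SONiC-based systems typically use FRR for routing). The AS numbers, router ID, spine address, and VTEP address are illustrative placeholders, not values tied to this product.

```python
# Minimal sketch: render an FRR-style BGP EVPN leaf config for a VXLAN
# spine-leaf fabric. AS numbers, addresses, and the loopback/VTEP IP are
# placeholders for illustration only.

LEAF_TEMPLATE = """\
router bgp {leaf_asn}
 bgp router-id {router_id}
 neighbor {spine_ip} remote-as {spine_asn}
 !
 address-family ipv4 unicast
  network {vtep_ip}/32
 exit-address-family
 !
 address-family l2vpn evpn
  neighbor {spine_ip} activate
  advertise-all-vni
 exit-address-family
"""

def render_leaf_config(leaf_asn: int, spine_asn: int, router_id: str,
                       spine_ip: str, vtep_ip: str) -> str:
    """Fill the template with per-leaf values."""
    return LEAF_TEMPLATE.format(leaf_asn=leaf_asn, spine_asn=spine_asn,
                                router_id=router_id, spine_ip=spine_ip,
                                vtep_ip=vtep_ip)

if __name__ == "__main__":
    print(render_leaf_config(leaf_asn=65101, spine_asn=65001,
                             router_id="10.0.0.11", spine_ip="10.0.0.1",
                             vtep_ip="10.1.1.11"))
```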

Enterprise SONiC 32-Port 100G QSFP28 Data Center Leaf/Spine Switch with Low Latency

The Asterfusion CX532-N is a powerhouse 32x100G QSFP28 data center switch, engineered for Top-of-Rack (ToR) and spine architectures. Driven by the cutting-edge Marvell Teralynx 7 ASIC, it delivers blazing-fast line-rate L2/L3 switching performance up to 3.2 Tbps, coupled with ultra-low latency and unmatched packet processing power. With latency slashed to an incredible 500 nanoseconds, the CX532-N is tailor-made for latency-sensitive workloads like AI, HPC, and machine learning—ensuring rock-solid, high-throughput performance even under intense small-packet traffic bursts.

288K IPv4 Routes
144K IPv6 Routes
128K MAC Address Entries

Key Facts

500ns Ultra-Low Latency
6300 Mpps Packet Forwarding Rate
70 MB Packet Buffer

Optimized Software for Advanced Workloads – AsterNOS

Pre-installed with Enterprise SONiC (AsterNOS), the CX532P-N-V2 supports advanced features such as RoCEv2 and EVPN multihoming, pairing fast forwarding rates with deep packet buffers. It doesn't just perform; it outperforms competing products.


Virtualization

VXLAN
BGP EVPN
EVPN Multihoming

Zero Packet Loss

RoCEv2 for AI/ML/HPC
PFC/PFC Watchdog/ECN
QCN/DCQCN/DCTCP

High Reliability

ECMP/Elastic Hash
MC-LAG
BFD for BGP & OSPF

Ops Automation

Python & Ansible, ZTP
In-band Network Telemetry
SPAN/ERSPAN Monitoring


Specifications

Ports: 32x 100G QSFP28, 2x 10G SFP+
Switch Chip: Marvell Falcon
Switching Capacity: 3.2 Tbps
Number of VLANs: 4K
Forwarding Rate: 2800 Mpps
MAC Address Entries: 128K
Packet Buffer: 24 MB
ARP Entries: 36,000
Latency: ~1400 ns
IPv4/IPv6 Routes: IPv4: 288K host + 144K prefix; IPv6: 288K host + 144K prefix
Hot-swappable AC Power Supplies: 2 (1+1 redundancy)
Max. Power Consumption: <495 W
Hot-swappable Fans: 5 (4+1 redundancy)
MTBF: >100,000 hours
Dimensions (HxWxD): 44 x 440 x 515 mm
Rack Units: 1 RU
Weight: 12 kg
Thermal Airflow: Front-to-rear

Quality Certifications

ISO 9001 (SGS)
ISO 14001 (SGS)
ISO 45001 (SGS)
ISO/IEC 27001 (SGS)
FCC
CE
2-Year Warranty

Features

Full RoCEv2 Support for Ultra-Low Latency and Lossless Networking

Asterfusion 100G switches deliver full RoCEv2 support with ultra-low latency and near-zero CPU overhead. Combined with PFC and ECN for advanced congestion control, they enable a truly lossless network.
[Figure: RoCEv2 vs. TCP]
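As a rough, host-side way to confirm that PFC and ECN are actually engaging, you can watch pause- and congestion-related counters on the server NICs while RoCEv2 traffic is running. The sketch below shells out to ethtool; counter names vary by NIC vendor and driver, so the keyword filter is an assumption rather than a fixed interface.

```python
# Rough sketch: poll NIC counters related to PFC pause frames and ECN marks
# while RoCEv2 traffic is running. Counter names differ across NIC vendors,
# so filtering on "pause"/"ecn"/"cnp" substrings is a heuristic, not a spec.
import subprocess
import time

def read_counters(iface: str) -> dict:
    """Parse `ethtool -S <iface>` output into a name -> value dict."""
    out = subprocess.run(["ethtool", "-S", iface],
                         capture_output=True, text=True, check=True).stdout
    stats = {}
    for line in out.splitlines():
        if ":" in line:
            name, _, value = line.partition(":")
            try:
                stats[name.strip()] = int(value.strip())
            except ValueError:
                pass  # skip non-numeric lines
    return stats

def watch(iface: str, keywords=("pause", "ecn", "cnp"), interval: float = 2.0):
    """Print deltas of matching counters every `interval` seconds."""
    prev = read_counters(iface)
    while True:
        time.sleep(interval)
        cur = read_counters(iface)
        for name, value in cur.items():
            if any(k in name.lower() for k in keywords):
                delta = value - prev.get(name, 0)
                if delta:
                    print(f"{name}: +{delta}")
        prev = cur

if __name__ == "__main__":
    watch("eth0")   # replace with the RoCE-facing interface name
```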

Asterfusion Network Monitoring & Visualization

AsterNOS supports Node Exporter, exposing CPU, traffic, packet loss, latency, and RoCE congestion metrics to Prometheus. Paired with Grafana, this provides real-time, visual insight into network performance.
[Figure: Prometheus paired with Grafana]
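If you want to sanity-check the exported metrics before wiring up Prometheus and Grafana, you can pull the Node Exporter endpoint directly. The sketch below assumes the exporter listens on its default port 9100 at a placeholder management address; the metric prefixes shown are standard Node Exporter families, while any RoCE-specific metric names would be device-dependent.

```python
# Quick sketch: fetch the Node Exporter endpoint on the switch and print a few
# metric families. Assumes the exporter's default port 9100 and a reachable
# management IP; both are placeholders here.
import urllib.request

SWITCH_MGMT_IP = "192.0.2.10"          # placeholder management address
URL = f"http://{SWITCH_MGMT_IP}:9100/metrics"

def fetch_metrics(url: str) -> list:
    """Return the raw exposition-format lines from the exporter."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read().decode("utf-8").splitlines()

def show(prefixes=("node_cpu_seconds_total",
                   "node_network_receive_bytes_total")):
    """Print only the metric lines starting with the given prefixes."""
    for line in fetch_metrics(URL):
        if line.startswith(prefixes):
            print(line)

if __name__ == "__main__":
    show()
```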

O&M in Minutes, Not Days

Automate with Easy RoCE, ZTP, Python and Ansible, SPAN/ERSPAN monitoring, and more, cutting configuration time, errors, and cost.
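As a tiny illustration of script-driven operations, the sketch below opens an SSH session to the switch with paramiko and runs a couple of read-only show commands. The address, credentials, and command strings are placeholders; in practice the same task is usually wrapped in an Ansible playbook.

```python
# Minimal sketch: run a couple of read-only commands on the switch over SSH.
# Hostname, credentials, and the command strings are placeholders; real
# deployments typically wrap this in Ansible instead of raw paramiko.
import paramiko

def run_commands(host: str, user: str, password: str, commands: list) -> None:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname=host, username=user, password=password, timeout=10)
    try:
        for cmd in commands:
            _, stdout, stderr = client.exec_command(cmd)
            print(f"$ {cmd}")
            print(stdout.read().decode(), stderr.read().decode())
    finally:
        client.close()

if __name__ == "__main__":
    run_commands("192.0.2.10", "admin", "YourPasswordHere",
                 ["show version", "show interfaces status"])
```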

Comparison of Data Center Switches

Model: CX864E-N | CX732Q-N | CX664D-N | CX564P-N | CX532P-N-V2
Port Speeds: 64x 800G + 2x 10G | 32x 400G + 2x 10G | 64x 200G + 2x 10G | 64x 100G + 2x 10G | 32x 100G + 2x 10G
Hot-swappable PSUs and Fans: 1+1 PSUs, 3+1 fans | 1+1 PSUs, 5+1 fans | 1+1 PSUs, 3+1 fans | 1+1 PSUs, 3+1 fans | 1+1 PSUs, 4+1 fans
Supported on all five models: MC-LAG, BGP/MP-BGP, EVPN, BGP EVPN Multihoming, RoCEv2, PFC, ECN, In-band Telemetry
CX864E-N only: Packet Spray, Flowlet, Auto Load Balancing, INT-based Routing, WCMP


FAQs

Which RDMA-capable NICs (e.g., Mellanox ConnectX-4/5/6, Broadcom, Intel) and drivers have been validated on your CX-N series switches?

All major RDMA-capable NICs, including Mellanox ConnectX-4/5/6, Broadcom, and Intel, as well as their standard drivers, have been validated on our CX-N series switches.

If you have any recommendations for setting up an NVMe-oF, any documentation/case studies?

We have successfully deployed NVMe-oF solutions for several customers. Please contact bd@cloudswit.ch to request a case study.

We want to use your switches for AI inference. Our working AI inference setup looks roughly like this: Camera (SRT) -> Compute Node NIC -> GPU -> [Compute Node NIC -> NVMe-oF + Compute Node NIC -> Other Nodes for processing]. If you have any past setups similar to this, are there solutions for including the camera SRT feed or a GigE feed in RoCEv2 congestion management?

Yes. By marking or prioritizing the camera SRT or GigE traffic at the switch ingress or egress, the switches can treat it as high priority and apply PFC/ECN-based congestion management, similar to RDMA traffic. Note, however, that SRT implements its own end-to-end congestion control at the application layer and does not use or require DCQCN, which is specific to RDMA flows.
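One way to make the camera feed classifiable by the switch is to set a DSCP value on the sending socket so that the traffic lands in a dedicated priority queue. The sketch below marks a UDP socket (SRT runs over UDP) with an illustrative DSCP value on Linux; the actual value and destination must match the QoS mapping configured on the switch.

```python
# Sketch: tag outgoing UDP traffic (e.g. an SRT feed) with a DSCP value so the
# switch can classify it into a priority queue. The DSCP value below (26 / AF31)
# and the destination are placeholders; they must match the QoS/queue mapping
# configured on the switch.
import socket

DSCP = 26                      # AF31, placeholder; pick to match switch QoS config
TOS = DSCP << 2                # DSCP occupies the upper 6 bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)

# From here on, every datagram sent on this socket carries the DSCP mark.
sock.sendto(b"example payload", ("192.0.2.50", 9000))   # placeholder destination
```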

For a configuration of 8–10 cameras and 3–4 compute nodes plus one NVMe-oF server, do you think a single switch would suffice, or should we consider a hierarchical switch setup?

Two switches are recommended for this scale to provide dual-homing for all devices, ensuring both redundancy and high availability for your deployment.