
CX732Q-N-V2

32 x 400G QSFP-DD Data Center Leaf/Spine Switch with EVPN Multi-homing, Enterprise SONiC NOS, Marvell Falcon

  • Ultra-low latency of ~1900 ns with the Marvell Falcon ASIC
  • Powered by Asterfusion’s Enterprise SONiC Distribution (AsterNOS)
  • Purpose-built for AI, ML, HPC, and data center applications
  • 96K IPv4 host + 72K prefix routes, 47K IPv6 host + 36K prefix routes
  • Traffic optimization: Flowlet, Packet Spray, WCMP, Auto Load Balancing

400Gb Data Center Switch

Enterprise SONiC 32-Port 400G Data Center Core/Spine Switch

The Asterfusion CX732Q-N-V2 is a 32×400G QSFP-DD data center switch powered by the Marvell Falcon and preloaded with AsterNOS (Enterprise SONiC). It delivers deterministic performance with open, production-ready networking. With a throughput of 12.8 Tbps and a switching latency of ~1900 ns, the CX732Q-N-V2 is ideal for spine deployments.
Shipping with AsterNOS, the CX732Q-N-V2 combines SONiC openness with enterprise-grade stability and feature depth. It supports VXLAN, BGP-EVPN, EVPN multihoming, MC-LAG, ECMP, and DCI, and offers rich telemetry and automation via INT, REST API, gNMI, NETCONF, ZTP, Ansible, and Python.
The CX732Q-N-V2 optionally supports the IEEE 1588v2 (PTP) and SMPTE 2059-2 standards through combined hardware and software integration, providing nanosecond-level time synchronization across the network for AI/ML distributed training, financial trading, and other time-sensitive applications.
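
As a rough illustration of the telemetry and automation interfaces listed above, the sketch below polls OpenConfig interface counters over gNMI using the open-source pygnmi client. The management address, port, credentials, and the availability of this particular path on AsterNOS are assumptions for illustration, not documented defaults.

# Minimal sketch: read interface counters from a SONiC-based switch over gNMI.
# The target address, port, credentials, and path support are assumptions;
# check the AsterNOS documentation for the exact gNMI settings.
from pygnmi.client import gNMIclient

TARGET = ("192.0.2.10", 8080)  # placeholder management IP and gNMI port

with gNMIclient(target=TARGET, username="admin", password="admin",
                insecure=True) as gc:
    reply = gc.get(path=[
        "openconfig-interfaces:interfaces/interface[name=Ethernet0]/state/counters"
    ])
    print(reply)  # raw gNMI notification containing the counter values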


288K IPv4 Routes

144K IPv6 Routes

128K MAC Address Entries

Key Facts

~1900ns Ultra-Low Latency

5600 Mpps Packet Forwarding Rate

48 MB Packet Buffer

Optimized Software for Advanced Workloads – AsterNOS

Pre-installed with Enterprise SONiC, the CX732Q-N-V2 supports advanced features such as RoCEv2 and EVPN multihoming, pairing high forwarding rates with a deep packet buffer to stay ahead of competing products.


Virtualization

VXLAN
BGP EVPN
EVPN Multihoming

Zero Packet Loss

RoCEv2 for AI/ML/HPC
PFC/PFC Watchdog/ECN
QCN/DCQCN/DCTCP

High Reliability

ECMP/Elastic Hash
MC-LAG
BFD for BGP & OSPF

Ops Automation

Python & Ansible, ZTP
In-band Network Telemetry
SPAN/ERSPAN Monitoring


Specifications

Ports: 32x 400G QSFP-DD, 2x 10G SFP+
Switch Chip: Marvell Falcon
Switching Capacity: 12.8 Tbps
Number of VLANs: 4K
Forwarding Rate: 5600 Mpps
MAC Address Entries: 128K
Packet Buffer: 48 MB
ARP Entries: 92K
Latency: ~1900 ns
IPv4/IPv6 Routes: IPv4: 288K host + 144K prefix; IPv6: 288K host + 144K prefix
Hot-swappable AC Power Supplies: 2 (1+1 redundancy)
Hot-swappable Fans: 6 (5+1 redundancy)
Dimensions (HxWxD): 44 x 440 x 515 mm
Rack Units: 1 RU

Quality Certifications

ISO 9001 (SGS)
ISO 14001 (SGS)
ISO 45001 (SGS)
ISO/IEC 27001 (SGS)
FCC
CE
2-Year Warranty

Features

Asterfusion Network Monitoring & Visualization

AsterNOS supports Node Exporter, exposing CPU, traffic, packet loss, latency, and RoCE congestion metrics for Prometheus to collect.
Paired with Grafana, this enables real-time, visual insight into network performance.
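
To show what consuming those metrics can look like, here is a minimal Python sketch that queries the Prometheus HTTP API for per-interface receive rates gathered by Node Exporter; the Prometheus URL is a placeholder, and the query assumes the standard Node Exporter metric naming.

# Minimal sketch: pull per-interface RX throughput from Prometheus.
# The Prometheus URL is a placeholder; node_network_receive_bytes_total
# is the standard Node Exporter metric name.
import requests

PROM_URL = "http://prometheus.example.local:9090/api/v1/query"  # placeholder
QUERY = "rate(node_network_receive_bytes_total[5m])"

resp = requests.get(PROM_URL, params={"query": QUERY}, timeout=10)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    device = series["metric"].get("device", "unknown")
    print(device, series["value"][1], "bytes/s")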

O&M in Minutes, Not Days

Automate with Easy RoCE, ZTP, Python, Ansible, SPAN/ERSPAN monitoring, and more, cutting configuration time, errors, and costs.
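
As one example of script-driven O&M, the sketch below runs a standard SONiC show command over SSH with Paramiko; the address and credentials are placeholders, and the exact command set and output format should be confirmed against your AsterNOS release.

# Minimal sketch: run a SONiC CLI command on the switch over SSH.
# Address and credentials are placeholders.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.0.2.10", username="admin", password="admin")  # placeholder

_, stdout, _ = client.exec_command("show interfaces status")
print(stdout.read().decode())  # table of port state, speed, and description
client.close()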

Comparison of Data Center Switches

                             CX732Q-N-V2         CX532P-N-V2         CX308P-48Y-N-V2
Port Speeds                  32x400G, 2x10G      32x100G, 2x10G      48x25G, 2x10G
Hot-swappable PSUs and FANs  1+1 PSUs, 5+1 FANs  1+1 PSUs, 4+1 FANs  1+1 PSUs, 3+1 FANs
MC-LAG
BGP/MP-BGP
EVPN
BGP EVPN-Multihoming
RoCEv2
PFC
ECN
In-band Telemetry
Packet Spray×××
Flowlet×××
Auto Load Balancing×××
INT-based Routing×××
WCMP×××

Panel

Front and rear panel views of the 32-port 400G data center switch.

FAQs

Which RDMA-capable NICs (e.g., Mellanox ConnectX-4/5/6, Broadcom, Intel) and drivers have been validated on your CX-N series switches?

All major RDMA-capable NICs, including Mellanox ConnectX-4/5/6, Broadcom, and Intel, as well as their standard drivers, have been validated on our CX-N series switches.

Do you have any recommendations for setting up NVMe-oF, or any documentation/case studies?

We have successfully deployed NVMe-oF solutions for several customers. Please contact bd@cloudswit.ch to request a case study.

We want to use your switches for AI inference. Our working AI inference setup looks something like: Camera (SRT) -> Compute Node NIC -> GPU -> [Compute Node NIC -> NVMe-oF + Compute Node NIC -> other nodes for processing]. If you have any past setups similar to this, are there solutions for including the camera SRT feed or GigE feed in RoCEv2 congestion management?

Yes. By marking or prioritizing the camera SRT or GigE traffic at the switch ingress or egress, the switches can treat it as high priority and apply PFC/ECN-based congestion management, similar to RDMA traffic. However, SRT implements its own end-to-end congestion control at the application layer and does not use or require DCQCN, which is specific to RDMA flows.
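
As a purely illustrative sketch of what DSCP-based prioritization can look like on a SONiC-based switch, the Python snippet below builds a config_db-style QoS fragment that maps an assumed DSCP value for the camera feed to a dedicated traffic class and queue; the DSCP value, class/queue numbers, port name, and table details are assumptions and should be confirmed against the AsterNOS configuration guide.

# Illustrative only: a SONiC-style config_db QoS fragment that classifies
# traffic marked with DSCP 34 (assumed for the camera/SRT feed) into traffic
# class 4 and egress queue 4. Confirm table names and values for AsterNOS.
import json

qos_fragment = {
    "DSCP_TO_TC_MAP": {
        "CAMERA_MAP": {"34": "4"}      # DSCP 34 -> traffic class 4 (assumed)
    },
    "TC_TO_QUEUE_MAP": {
        "CAMERA_MAP": {"4": "4"}       # traffic class 4 -> queue 4 (assumed)
    },
    "PORT_QOS_MAP": {
        "Ethernet0": {                 # placeholder ingress port for the feed
            "dscp_to_tc_map": "CAMERA_MAP",
            "tc_to_queue_map": "CAMERA_MAP"
        }
    }
}

print(json.dumps(qos_fragment, indent=2))  # merge into config_db.json as needed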

For a configuration of 8–10 cameras and 3–4 compute nodes plus one NVMe-oF server, do you think a single switch would suffice, or should we consider a hierarchical switch setup?

Two switches are recommended for this scale to provide dual-homing for all devices, ensuring both redundancy and high availability for your deployment.