CX732Q-N-V2
32 x 400G QSFP-DD Data Center Leaf/Spine Switch with EVPN Multi-homing, Enterprise SONiC NOS, Marvell Falcon
- PTP accuracy of 20 ns with Falcon
- Powered by Asterfusion's Enterprise SONiC Distribution
- Purpose-built for Broadcast, Finance, Telecom, and Smart Grid scenarios
- 288K IPv4 and 144K IPv6 host/prefix entries based on Falcon
400Gb Data Center Switch
Enterprise SONiC 32-Port 400G Data Center Core/Spine Switch
The Asterfusion CX732Q-N-V2 is a 32×400G QSFP-DD data center switch powered by the Marvell Falcon and preloaded with AsterNOS (Enterprise SONiC). It delivers deterministic performance with open, production-ready networking. With a throughput of 12.8 Tbps and a switching latency of ~1500 ns, the CX732Q-N-V2 is ideal for spine deployments.
Shipping with AsterNOS, the CX732Q-N-V2 combines SONiC openness with enterprise-grade stability and feature depth. It supports VXLAN, BGP-EVPN, EVPN multihoming, MC-LAG, ECMP, and DCI, and offers rich telemetry and automation via INT, REST API, gNMI, NETCONF, ZTP, Ansible, and Python.
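The automation hooks listed above mean day-0 provisioning can be scripted rather than typed by hand. As a minimal sketch — note that the JSON schema and field names below are illustrative placeholders, not AsterNOS's actual API model — a Python script might assemble an EVPN underlay payload for a ZTP or REST-driven pipeline:

```python
import json

def build_underlay_payload(asn: int, router_id: str, uplinks: list[str]) -> str:
    """Assemble a JSON provisioning payload for a leaf/spine switch.

    NOTE: this schema is a hypothetical example for illustration only;
    consult the AsterNOS REST API reference for the real resource model.
    """
    payload = {
        "bgp": {
            "asn": asn,
            "router_id": router_id,
            # Enable the EVPN address family for VXLAN overlays.
            "address_families": ["ipv4-unicast", "l2vpn-evpn"],
        },
        # Unnumbered BGP on the fabric uplinks keeps addressing simple.
        "interfaces": [{"name": port, "unnumbered": True} for port in uplinks],
    }
    return json.dumps(payload, indent=2)

print(build_underlay_payload(65001, "10.0.0.1", ["Ethernet0", "Ethernet4"]))
```

The same payload could be templated from Ansible inventory variables, which is how ZTP pipelines typically scale one script across a whole fabric.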
The CX732Q-N-V2 optionally supports the IEEE 1588v2 PTP and SMPTE 2059-2 standards through hardware and software integration, providing nanosecond-level time synchronization across the network for AI/ML distributed training, financial trading, and other time-sensitive applications.
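PTP reaches this precision by exchanging hardware-timestamped Sync and Delay_Req messages; from the four resulting timestamps, the client computes its clock offset and the mean path delay. A minimal sketch of the standard IEEE 1588 arithmetic (the nanosecond timestamps below are made-up example values):

```python
def ptp_offset_and_delay(t1: int, t2: int, t3: int, t4: int) -> tuple[float, float]:
    """Compute clock offset and mean path delay from one PTP exchange.

    t1: master sends Sync          (master clock)
    t2: slave receives Sync        (slave clock)
    t3: slave sends Delay_Req      (slave clock)
    t4: master receives Delay_Req  (master clock)
    Assumes a symmetric path, as the standard delay mechanism does.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Illustrative values: the slave clock runs 500 ns ahead of the master
# and the one-way path delay is 1500 ns.
offset, delay = ptp_offset_and_delay(t1=0, t2=2000, t3=10_000, t4=11_000)
print(offset, delay)  # → 500.0 1500.0
```

Hardware timestamping on the switch matters precisely because it removes software queuing jitter from t2 and t3, which is what makes the symmetric-path assumption hold to tens of nanoseconds.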
Enterprise SONiC 32-Port 100G QSFP28 Low-Latency Data Center Leaf/Spine Switch
The Asterfusion CX532-N is a powerhouse 32x100G QSFP28 data center switch engineered for Top-of-Rack (ToR) and spine architectures. Driven by the Marvell Teralynx 7 ASIC, it delivers line-rate L2/L3 switching performance up to 3.2 Tbps with ultra-low latency and exceptional packet processing power. With latency cut to just 500 nanoseconds, the CX532-N is tailor-made for latency-sensitive workloads such as AI, HPC, and machine learning, sustaining rock-solid, high-throughput performance even under intense small-packet traffic bursts.
KEY POINTS
Key Facts
- 288K IPv4 Routes
- 144K IPv6 Routes
- 128K MAC Address Entries
- 500ns Latency
- 6300Mpps Packet Forwarding Rate
- 70MB Packet Buffer
Optimized Software for Advanced Workloads – AsterNOS
Pre-installed with Enterprise SONiC, it supports advanced features such as RoCEv2 and EVPN multihoming, pairing lightning-fast forwarding rates with generously sized packet buffers. The CX532-N doesn't just perform; it outperforms competing products.
Virtualization
VXLAN
BGP EVPN
EVPN Multihoming
Zero Packet Loss
RoCEv2 for AI/ML/HPC
PFC/PFC Watchdog/ECN
QCN/DCQCN/DCTCP
High Reliability
ECMP/Elastic Hash
MC-LAG
BFD for BGP & OSPF
Ops Automation
Python & Ansible, ZTP
In-band Network Telemetry
SPAN/ERSPAN Monitoring
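The zero-packet-loss features above (PFC, ECN, DCQCN) hinge on marking packets probabilistically as a queue fills, so RoCEv2 senders slow down before the buffer ever overflows. A minimal sketch of the RED-style marking curve that ECN/DCQCN builds on — the thresholds here are arbitrary example values, not the switch's defaults:

```python
def ecn_mark_probability(queue_kb: float, kmin_kb: float = 100.0,
                         kmax_kb: float = 400.0, pmax: float = 0.2) -> float:
    """RED-style ECN marking probability for a given queue depth.

    Below kmin nothing is marked; between kmin and kmax the marking
    probability rises linearly to pmax; above kmax every packet is
    marked. The threshold values are illustrative examples only.
    """
    if queue_kb <= kmin_kb:
        return 0.0
    if queue_kb >= kmax_kb:
        return 1.0
    return pmax * (queue_kb - kmin_kb) / (kmax_kb - kmin_kb)

for depth in (50, 250, 500):
    print(depth, ecn_mark_probability(depth))
```

Tuning kmin/kmax trades latency against throughput: lower thresholds keep queues (and thus latency) short, while higher ones absorb bursts before signaling congestion. PFC then acts as the lossless backstop when ECN reacts too slowly.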
Specifications
Quality Certifications
Features
PTP Support for Sub-Microsecond Precision Timing (Optional)
Asterfusion Network Monitoring & Visualization
O&M in Minutes, Not Days
Comparison of Data Center Switches
Panel
FAQs
The main difference lies in the ASIC used: the CX-N series uses the Teralynx chipset, while the CX-N-V2 series uses the Prestera Falcon chipset. The choice of chipset determines the performance differences between the two. The first notable difference is in support for IPv4 and IPv6 routing table entries: CX-N-V2 products with the Falcon chipset offer larger tables, supporting 288K IPv4 entries and 144K IPv6 entries. The second significant difference is PTP support: CX-N-V2 with Falcon supports PTP, whereas the CX-N series does not.
Both series run Enterprise SONiC NOS. They support L2/L3 functions, EVPN multi-homing, RoCE, and other features.
If you require PTP functionality and larger routing table capacity, the CX-N-V2 series is the appropriate choice.
All major RDMA-capable NICs, including Mellanox ConnectX-4/5/6, Broadcom, and Intel, as well as their standard drivers, have been validated on our CX-N series switches.
We have successfully deployed NVMe-oF solutions for several customers. Please contact bd@cloudswit.ch to request a case study.
Yes. By marking or prioritizing the camera SRT or GigE traffic at the switch ingress or egress, the switches can treat it as high-priority and apply PFC/ECN-based congestion management, similar to RDMA traffic. However, SRT implements its own end-to-end congestion control at the application layer and does not use or require DCQCN, which is specific to RDMA flows.
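The marking mentioned above can also be applied at the source: a sender can set the DSCP field on its SRT/UDP socket so the switch classifier maps the flow into a high-priority queue. A minimal Python sketch — DSCP 46 (Expedited Forwarding) is a common choice for real-time media, not a requirement, and the switch-side class mapping must be configured to match:

```python
import socket

# DSCP occupies the top six bits of the IP TOS byte, so shift the
# codepoint left by two to leave the ECN bits clear.
DSCP_EF = 46                # Expedited Forwarding (example choice)
TOS_VALUE = DSCP_EF << 2    # 0xB8

# SRT runs over UDP: open a UDP socket and tag its outgoing packets
# so a switch classifier can place them in a high-priority queue.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

print(hex(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))  # → 0xb8
sock.close()
```

Marking at the endpoint and classifying at the switch are complementary: either way, the goal is that the camera flows land in a queue with PFC/ECN protection while bulk traffic does not.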
Two switches are recommended for this scale to provide dual-homing for all devices, ensuring both redundancy and high availability for your deployment.



