CX732Q-N
32 x 400G QSFP-DD Data Center Switch for AI/ML/HPC, Enterprise SONiC Distribution
- Powered by Asterfusion's Enterprise SONiC Distribution
- Industry-leading ultra-low latency: 500ns with Teralynx 7
- Support for RoCEv2, EVPN Multihoming, BGP-EVPN, INT, and more
- Purpose-built for AI, ML, HPC, and data center applications
Enterprise SONiC Distribution Preload: 32 x 400G QSFP-DD Data Center Switch Built for AI, Cloud, and Hyperscale
The Asterfusion CX732Q-N is a high-performance 32-port 400G data center switch designed for the super spine layer in next-generation CLOS network architectures. Built on the Marvell Teralynx platform, it delivers 12.8 Tbps of line-rate L2/L3 throughput with ultra-low latency as low as 500ns, meeting the stringent performance demands of AI/ML clusters, hyperscale data centers, HPC, and cloud infrastructure.
Key Facts
- 500ns latency, outperforms InfiniBand
- 70 MB packet buffer
Optimized Software for Advanced Workloads – AsterNOS
Pre-installed with the Asterfusion Enterprise SONiC NOS (AsterNOS), the CX732Q-N offers a robust feature set tailored for AI and HPC workloads, including Flowlet Switching, Packet Spray, WCMP, Auto Load Balancing, and INT-based Routing.
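To make the load-balancing terms above concrete, here is a minimal, vendor-neutral Python sketch of WCMP-style next-hop selection: flows are hashed onto next hops in proportion to configured weights, so every packet of one flow takes the same path. This illustrates the general technique only; it is not AsterNOS code, and the addresses and weights are invented examples.

```python
import hashlib
from collections import Counter

def wcmp_select(flow_key: str, next_hops: dict) -> str:
    """Weighted-cost multipath (WCMP): hash a flow onto a next hop in
    proportion to its weight, so all packets of one flow stay on one
    path and no intra-flow reordering occurs."""
    total = sum(next_hops.values())
    bucket = int(hashlib.sha256(flow_key.encode()).hexdigest(), 16) % total
    for hop, weight in next_hops.items():
        if bucket < weight:
            return hop
        bucket -= weight
    raise RuntimeError("unreachable")

# Example: spine1 is weighted to carry roughly 2x the flows of spine2.
hops = {"spine1": 2, "spine2": 1}
flows = [f"10.0.0.{i}:49152->10.0.1.1:4791/UDP" for i in range(300)]
print(Counter(wcmp_select(f, hops) for f in flows))
```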
- Cloud Virtualization: DCI, VXLAN, BGP EVPN
- RoCEv2: Easy RoCE, PFC/ECN/DCBX
- High Availability: ECMP, MC-LAG, EVPN Multihoming
- Ops Automation: In-band Telemetry, REST API/NETCONF/gNMI (see the sketch below)
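As a sketch of what REST-based ops automation can look like, the following Python snippet polls per-port counters over HTTPS. The management address, endpoint path, port name, and token are hypothetical placeholders; consult the AsterNOS REST API documentation for the actual resources and authentication scheme.

```python
import requests

# Placeholders only; the real paths and auth are defined by AsterNOS docs.
SWITCH = "https://192.0.2.10"
HEADERS = {"Authorization": "Bearer <token>"}

def get_port_counters(port: str) -> dict:
    """Poll per-port counters over the switch's REST API."""
    url = f"{SWITCH}/api/v1/interfaces/{port}/counters"  # assumed path
    # verify=False because management planes often use self-signed certs.
    resp = requests.get(url, headers=HEADERS, verify=False, timeout=5)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    counters = get_port_counters("Ethernet0")
    print(counters.get("in_octets"), counters.get("out_octets"))
```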
Specifications
Quality Certifications
Features
Marvell Teralynx 7 Inside: Unleash Industry-Leading Performance & Ultra-Low Latency

- 12.8 Tbps switching capacity
- 500ns end-to-end latency
- 70 MB buffer size
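These headline figures can be sanity-checked with simple arithmetic. The short Python sketch below is illustrative only; it assumes decimal megabytes and treats buffer drain time at full line rate as a back-of-the-envelope metric.

```python
# Back-of-the-envelope checks on the headline numbers (decimal MB assumed).
ports, port_speed_gbps = 32, 400
capacity_tbps = ports * port_speed_gbps / 1000
print(f"Aggregate capacity: {capacity_tbps} Tbps")  # 32 x 400G = 12.8 Tbps

# Worst case, a completely full 70 MB shared buffer drains at line rate:
buffer_bits = 70 * 10**6 * 8
drain_us = buffer_bits / (capacity_tbps * 10**12) * 10**6
print(f"Drain time at line rate: {drain_us:.1f} us")  # ~43.8 us
```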
Full RoCEv2 Support for Ultra-Low Latency and Lossless Networking

Asterfusion Network Monitoring & Visualization

O&M in Minutes, Not Days

Asterfusion 400G Switch Outperforms InfiniBand in AI Inference with Higher TGR and Lower Latency


- ↑27.5% Token Generation Rate
- ↓20.4% Inference Latency
Comparison of Data Center Switches
Panel


What’s in the Box

FAQs
Q: How many 400G-ZR optical modules can the CX732Q-N support?
A: At ambient temperatures above 30°C, the system supports up to 8x 400G-ZR modules. If the working environment is kept below 30°C, it can support up to 12–13 ZR modules. Note that the number of supported ZR optics is primarily limited by thermal constraints due to their high power consumption: the cooler the environment, the more modules can be reliably supported.
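As a rough illustration of that thermal trade-off, module count scales with the available power/thermal headroom divided by per-module draw. In the sketch below, the per-module figure (about 20 W for a 400G-ZR module) is an assumption, and the headroom budgets are hypothetical numbers chosen only to mirror the counts above, not device specifications.

```python
# Illustrative capacity-planning arithmetic; not a device specification.
ZR_POWER_W = 20  # assumed draw of one 400G-ZR module

# Hypothetical optics power budgets under two ambient conditions.
headroom_w = {"above 30C": 160, "below 30C": 250}

for condition, budget in headroom_w.items():
    print(f"{condition}: up to {budget // ZR_POWER_W} ZR modules")
```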
Q: Does the CX732Q-N support port breakout?
A: Yes, both hardware and software support 4x25G breakout on the CX732Q-N. For configuration guidance, please submit a ticket via our support portal:
👉 https://help.cloudswit.ch/portal/en/home
Our R&D team will assist you directly.
Q: Can PFC/ECN-based lossless networking be applied to SRT or GigE traffic?
A: Yes. By classifying SRT or GigE traffic as high-priority at the switch ingress/egress, our switches can apply PFC/ECN-based congestion control similar to RDMA. Note, however, that SRT has its own end-to-end congestion control at the application layer and does not rely on RDMA-specific mechanisms such as DCQCN.
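For readers unfamiliar with ECN-based congestion control, here is a generic, vendor-neutral sketch of WRED/ECN-style marking: below a low queue threshold nothing is marked, between the thresholds the marking probability rises linearly, and above the high threshold every packet is marked. The thresholds and probability below are arbitrary examples, not device defaults.

```python
import random

def ecn_mark(queue_kb: float, k_min: float = 100.0, k_max: float = 400.0,
             p_max: float = 0.2) -> bool:
    """WRED/ECN-style marking: never mark below k_min, always mark above
    k_max, and scale the marking probability linearly in between."""
    if queue_kb <= k_min:
        return False
    if queue_kb >= k_max:
        return True
    p = p_max * (queue_kb - k_min) / (k_max - k_min)
    return random.random() < p

for depth in (50, 150, 250, 350, 450):
    marked = sum(ecn_mark(depth) for _ in range(10_000))
    print(f"queue={depth} KB -> {marked / 100:.1f}% of packets marked")
```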
Q: Which RDMA NICs have been validated with these switches?
A: All major RDMA-capable NICs, including Mellanox ConnectX-4/5/6, Broadcom, and Intel, as well as their standard drivers, have been validated on our CX-N series switches.
Q: How does the CX732Q-N compare with InfiniBand for AI inference?
A: In AI inference scenarios, Asterfusion's 400G Ethernet switch demonstrates:
- Higher TGR (Token Generation Rate)
- Lower P90 ITL (90th Percentile Inter-Token Latency)
compared to InfiniBand, leading to faster inference performance and higher overall throughput. This showcases Ethernet's growing competitiveness in AI networking.
Q: Are WDM optics supported?
A: Yes, our switches can support WDM optics, subject to power budget limitations.
Q: Can the CX732Q-N be deployed in a collapsed-core architecture?
A: Our 400G platforms are fully programmable and well suited to collapsed-core architectures. Let's arrange a technical discussion to explore your specific requirements and potential integration with firewall or edge functions.
Q: What is the power consumption of the CX732Q-N?
A: Typical power consumption is around 750W; no-load power (idle, without optics) is approximately 370W.
Q: Do you have NVMe-oF deployment experience?
A: Yes, we have successfully deployed NVMe-oF solutions for several customers. Please contact bd@cloudswit.ch to request a case study.