
CX732Q-N

32 x 400G QSFP-DD Data Center Switch for AI/ML/HPC, Enterprise SONiC Distribution

  • Powered by Asterfusion’s Enterprise SONiC Distribution
  • Industry-leading ultra-low latency: 500ns with Teralynx 7
  • Supports RoCEv2, EVPN Multihoming, BGP EVPN, INT, and more
  • Purpose-built for AI, ML, HPC, and Data Center applications

Enterprise SONiC Distribution Preloaded: 32 x 400G QSFP-DD Data Center Switch Built for AI, Cloud, and Hyperscale

The Asterfusion CX732Q-N is a high-performance 32-port 400G data center switch designed for the super spine layer in next-generation CLOS network architectures. Built on the Marvell Teralynx platform, it delivers 12.8 Tbps of line-rate L2/L3 throughput with ultra-low latency as low as 500ns, meeting the stringent performance demands of AI/ML clusters, hyperscale data centers, HPC, and cloud infrastructure.


Key Facts

  • 500ns ultra-low latency
  • Outperforms InfiniBand: 27.5% higher Token Generation Rate (TGR), 20.4% lower latency
  • 70 MB packet buffer

Optimized Software for Advanced Workloads: AsterNOS

AsterNOS is Asterfusion’s enterprise-grade SONiC distribution, purpose-built for performance and pre-installed on every CX Series switch. It combines the openness of community SONiC with enterprise-level reliability, enhanced usability, and real-world readiness. Unlike the raw community version, AsterNOS is battle-tested and feature-rich, delivering the rock-solid stability, intuitive experience, and advanced capabilities that modern data centers demand.

Pre-installed with the Asterfusion Enterprise SONiC NOS, the CX732Q-N offers a robust feature set tailored for AI and HPC workloads, including Flowlet Switching, Packet Spray, WCMP, Auto Load Balancing, and INT-based Routing. This 400G switch is ideal for AI/ML infrastructure, hyperscale and cloud data centers, and high-performance computing (HPC).

Cloud Virtualization
  • DCI
  • VXLAN
  • BGP EVPN

RoCEv2
  • Easy RoCE
  • PFC/ECN/DCBX

High Availability
  • ECMP
  • MC-LAG
  • EVPN Multihoming

Ops Automation
  • In-band Telemetry
  • REST API/NETCONF/gNMI (see the sketch after this list)

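Since AsterNOS exposes management through REST API, NETCONF, and gNMI, routine state queries can be scripted. The sketch below is a minimal example assuming a RESTCONF-style HTTP interface; the base URL, resource path, and credentials are placeholders, not documented AsterNOS endpoints.

```python
# Minimal sketch: polling a switch over an HTTP management API with the
# standard "requests" library. The address, endpoint path, and
# credentials are HYPOTHETICAL placeholders -- consult the AsterNOS API
# reference for the actual resource paths and authentication scheme.
import requests

SWITCH = "https://192.0.2.1"           # example management address
AUTH = ("admin", "admin-password")     # placeholder credentials

def get_json(path: str) -> dict:
    """GET a resource and return the decoded JSON body."""
    resp = requests.get(f"{SWITCH}{path}", auth=AUTH, verify=False, timeout=5)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Hypothetical RESTCONF-style path; adjust to the documented API.
    state = get_json("/restconf/data/openconfig-interfaces:interfaces")
    print(state)
```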

Specifications

Ports: 32x 400G QSFP-DD, 2x 10G SFP+
Switch Chip: Marvell Teralynx 7 IVM77700
CPU: Intel Xeon D-1508 (Broadwell-DE)
SDRAM: DDR4 16 GB (compatible with 64 GB)
SSD: 128 GB
Switching Capacity: 12.8 Tbps
Packet Forwarding Rate: 7,600 Mpps
Latency: ~500 ns
Packet Buffer: 70 MB
Number of VLANs: 4,094
MAC Address Entries: 44,000
ARP Entries: 36,000
IPv4 Routes: 96,000 host + 72,000 prefix
IPv6 Routes: 47,000 host + 36,000 prefix
Jumbo Frame: 9,216 bytes
Hot-swappable AC Power Supplies: 2 (1+1 redundancy)
Hot-swappable Fans: 6 (5+1 redundancy)
Max. Power Consumption: <970 W
MTBF: >100,000 hours
Dimensions (HxWxD): 44 x 440 x 515 mm
Rack Units: 1 RU
Weight: 12 kg
Thermal Airflow: Front-to-rear

Quality Certifications

  • ISO 9001 (SGS)
  • ISO 14001 (SGS)
  • ISO 45001 (SGS)
  • ISO/IEC 27001 (SGS)
  • FCC
  • CE

Features

Marvell Teralynx 7 Inside: Unleash Industry-Leading Performance & Ultra-Low Latency

With Marvell Teralynx 7 at its core, Asterfusion 400G switches achieve sub-500ns latency, perfect for AI/ML, HPC, and NVMe workloads where speed drives results.

  • 12.8 Tbps switching capacity
  • 500ns end-to-end latency
  • 70 MB packet buffer

Full RoCEv2 Support for Ultra-Low Latency and Lossless Networking

Asterfusion 400G switches deliver full RoCEv2 support with ultra-low latency and near-zero CPU overhead. Combined with PFC and ECN for advanced congestion control, this enables a true lossless network.
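One way to sanity-check a lossless fabric from a host is to verify that ECN codepoints survive the path. A minimal sketch below sends a UDP probe toward the RoCEv2 port with ECT(0) set; the destination address and DSCP value are placeholders, and real validation would use RDMA test tooling rather than plain sockets.

```python
# Minimal sketch: send a UDP probe toward the RoCEv2 UDP port (4791)
# with the ECT(0) ECN codepoint set in the IP TOS byte, to check that
# ECN marks are preserved along the path.
import socket

ROCEV2_PORT = 4791          # IANA-assigned UDP port for RoCEv2
ECT0 = 0b10                 # ECN-Capable Transport (0) codepoint
DSCP = 26                   # example DSCP class; match your QoS policy

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# TOS byte = DSCP in the upper 6 bits, ECN in the lower 2 bits.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, (DSCP << 2) | ECT0)
sock.sendto(b"ecn-probe", ("192.0.2.10", ROCEV2_PORT))  # placeholder peer
sock.close()
```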

Asterfusion Network Monitoring & Visualization

AsterNOS supports Node Exporter to send CPU, traffic, packet loss, latency, and RoCE congestion metrics to Prometheus.
Paired with Grafana, it enables real-time, visual insight into network performance.
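Because the metrics land in Prometheus, they can be queried programmatically as well as graphed in Grafana. A minimal sketch, assuming a reachable Prometheus server scraping node_exporter; the server address and interface label are placeholders for your own deployment:

```python
# Minimal sketch: query Prometheus for the 5-minute receive rate of a
# node_exporter interface counter via the standard HTTP API.
import requests

PROM = "http://192.0.2.100:9090"   # placeholder Prometheus address
QUERY = 'rate(node_network_receive_bytes_total{device="eth0"}[5m])'

resp = requests.get(f"{PROM}/api/v1/query", params={"query": QUERY}, timeout=5)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    print(series["metric"].get("instance"), series["value"][1], "bytes/s")
```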

O&M in Minutes, Not Days

Automate with Easy RoCE, ZTP, Python and Ansible, SPAN/ERSPAN monitoring, and more, cutting configuration time, errors, and costs.
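In practice, the Python/Ansible automation mentioned above often boils down to rendering per-device configuration from templates. A minimal sketch using the Jinja2 templating library (the same engine Ansible uses); the variables and CLI-style lines are illustrative, not actual AsterNOS syntax:

```python
# Minimal sketch: render per-switch configuration snippets from a
# Jinja2 template over a small inventory of devices.
from jinja2 import Template

TEMPLATE = Template(
    "hostname {{ hostname }}\n"
    "interface {{ uplink }}\n"
    "  description uplink-to-{{ peer }}\n"
)

inventory = [
    {"hostname": "spine-01", "uplink": "Ethernet0", "peer": "leaf-01"},
    {"hostname": "spine-02", "uplink": "Ethernet0", "peer": "leaf-02"},
]

for device in inventory:
    print(TEMPLATE.render(**device))
```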

Asterfusion 400G Switch Outperforms InfiniBand in AI Inference with Higher TGR and Lower Latency

In AI inference networks, the Asterfusion 400G switch delivers higher TGR (Token Generation Rate) and lower P90 ITL (90th Percentile Inter-Token Latency) compared to InfiniBand, demonstrating faster inference speed and greater overall throughput performance.
  • ↑27.5% Token Generation Rate
  • ↓20.4% Inference Latency (P90 ITL)
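For reference, both metrics are straightforward to compute from token arrival timestamps. A minimal sketch with made-up timestamps: TGR is tokens generated per second over the run, and P90 ITL is the 90th percentile of the gaps between consecutive tokens.

```python
# Minimal sketch: compute TGR and P90 ITL from a list of token arrival
# timestamps (in seconds). The sample timestamps are illustrative.
import statistics

def tgr(timestamps: list[float]) -> float:
    """Token Generation Rate: tokens per second over the whole run."""
    return (len(timestamps) - 1) / (timestamps[-1] - timestamps[0])

def p90_itl(timestamps: list[float]) -> float:
    """90th Percentile Inter-Token Latency."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.quantiles(gaps, n=10)[-1]   # 90th percentile cut

ts = [0.00, 0.03, 0.05, 0.09, 0.12, 0.14, 0.18, 0.20, 0.25, 0.27, 0.30]
print(f"TGR: {tgr(ts):.1f} tokens/s, P90 ITL: {p90_itl(ts)*1000:.0f} ms")
```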

Comparison of Data Center Switches

Port Speeds
  • CX864E-N: 64x 800G + 2x 10G
  • CX732Q-N: 32x 400G + 2x 10G
  • CX664D-N: 64x 200G + 2x 10G
  • CX564P-N: 64x 100G + 2x 10G
  • CX532P-N: 32x 100G + 2x 10G

Hot-swappable PSUs and Fans
  • CX864E-N: 1+1 PSUs, 3+1 fans
  • CX732Q-N: 1+1 PSUs, 5+1 fans
  • CX664D-N: 1+1 PSUs, 3+1 fans
  • CX564P-N: 1+1 PSUs, 3+1 fans
  • CX532P-N: 1+1 PSUs, 4+1 fans

All five models support MC-LAG, BGP/MP-BGP, EVPN, BGP EVPN Multihoming, RoCEv2, PFC, ECN, and In-band Telemetry. Packet Spray, Flowlet, Auto Load Balancing, INT-based Routing, and WCMP are supported on the CX732Q-N only.

Panel

[Front and rear panel views of the 32x 400G QSFP-DD switch]

What’s in the Box

[CX732Q-N package contents]

FAQs

How many 400G-ZR optics can your switch support?

At ambient temperatures above 30°C, the system supports up to 8x 400G-ZR modules.
If the working environment is kept below 30°C, it can support up to 12–13 ZR modules.
Please note that the number of supported ZR optics is primarily limited by thermal constraints due to their high power consumption — the cooler the environment, the more modules can be reliably supported.

On the CX732Q-N, is 4x25G breakout supported? The ASIC seems capable, but I’m having trouble configuring it.

Yes, both the hardware and the software support 4x25G breakout on the CX732Q-N.
For configuration guidance, please submit a ticket via our support portal:
👉 https://help.cloudswit.ch/portal/en/home
Our R&D team will assist you directly.

We want to use your switches for AI inference. Our pipeline includes Camera (SRT) → Compute Node NIC → GPU → [NIC → NVMe-oF or other nodes]. Can SRT or GigE traffic be integrated into RoCEv2 congestion management?

Yes, by classifying SRT or GigE traffic as high-priority at the switch ingress/egress, our switches can apply PFC/ECN-based congestion control similar to RDMA.
However, note that SRT has its own end-to-end congestion control at the application layer and doesn’t rely on RDMA-specific mechanisms like DCQCN.
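As a concrete illustration of the classification above, an application socket can carry a DSCP mark that the switch maps to a PFC/ECN-protected queue. A minimal sketch; the DSCP value and destination are examples to be matched against your own QoS policy, and a real SRT stack manages its own sockets:

```python
# Minimal sketch: mark an application's UDP socket (e.g. an SRT sender)
# with a DSCP value so the switch can classify it into a priority queue
# covered by PFC/ECN.
import socket

DSCP_EF = 46   # Expedited Forwarding, an example high-priority class

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# DSCP occupies the upper 6 bits of the IP TOS byte.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
sock.sendto(b"payload", ("192.0.2.20", 9000))   # placeholder destination
sock.close()
```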

Which RDMA-capable NICs (e.g., Mellanox ConnectX-4/5/6, Broadcom, Intel) and drivers have been validated on your CX-N series switches?

All major RDMA-capable NICs, including Mellanox ConnectX-4/5/6, Broadcom, and Intel, as well as their standard drivers, have been validated on our CX-N series switches.

I saw on your website that you compared your AI inference network solution with InfiniBand. Can you explain that in more detail?

In AI inference scenarios, Asterfusion’s 400G Ethernet switch demonstrates:

  • Higher TGR (Token Generation Rate)
  • Lower P90 ITL (90th Percentile Inter-Token Latency)

compared to InfiniBand, leading to faster inference performance and higher overall throughput.
This showcases Ethernet’s growing competitiveness in AI networking.

We’re considering a collapsed-core design using 100G/400G switching and routing. Does your hardware support this? And does it support WDM optics?

Yes, our switches can support WDM optics, subject to power budget limitations.
Our 400G platforms are fully programmable and well-suited for collapsed-core architectures.
Let’s arrange a technical discussion to explore your specific requirements and potential integration with firewall or edge functions.

Can you confirm the power consumption of your 400G switch (e.g. idle without optics)?

The typical power consumption is around 750W, and the no-load (idle without optics) power consumption is approximately 370W.

Do you have any recommendations for setting up NVMe-oF, or any documentation/case studies?

We have successfully deployed NVMe-oF solutions for several customers. Please contact bd@cloudswit.ch for case studies.