CX732Q-N-ORv3

OCP Rack ORv3 Data Center Switch for AI/ML/HPC, Enterprise SONiC Distribution

  • Standardized Open Compute Project form factor (ORv3)
  • 32 Port 400G QSFP-DD, 2 Port 10G SFP+
  • Powered by Asterfusion’s Enterprise SONiC Distribution
  • Supports RoCEv2, EVPN Multihoming, BGP-EVPN, INT, and more
  • Purpose-built for AI, ML, HPC, and Data Center applications

OCP ORv3 Ready Data Center Switch with SONiC OS

Built to the rigorous standards of OCP, our switch integrates seamlessly into ORv3 racks. It leverages the 48V DC power bus bar architecture, reducing power conversion stages and increasing overall energy efficiency by up to 15% compared to traditional 12V systems.

Engineered for high-radix networks, the ORv3 switch supports ultra-high-density QSFP-DD ports. Whether you are building a non-blocking leaf-spine fabric or an AI backend network, our platform delivers wire-speed throughput with nanosecond-level latency.

Every unit comes pre-loaded with Enterprise SONiC OS, allowing you to get a turnkey solution. It supports features such as VXLAN, BGP-EVPN, EVPN multihoming, MC-LAG, ECMP, and DCI, and offers rich telemetry and automation via INT, REST API, gNMI, NETCONF, ZTP, Ansible, and Python.
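
As a rough illustration of this automation surface, the short Python sketch below polls interface counters over a RESTCONF-style management API. The management address, credentials, and the openconfig-interfaces path are assumptions for illustration, not documented AsterNOS endpoints; adapt them to whatever your switch actually exposes.

    # Illustrative only: poll interface counters over a RESTCONF-style API.
    # Address, credentials, and path are placeholders, not vendor defaults.
    import requests

    SWITCH = "https://192.0.2.10"                               # hypothetical management IP
    PATH = "/restconf/data/openconfig-interfaces:interfaces"    # assumed OpenConfig path

    def fetch_interface_counters():
        resp = requests.get(
            SWITCH + PATH,
            auth=("admin", "change-me"),   # placeholder credentials
            verify=False,                  # lab use only; use proper certificates in production
            timeout=10,
        )
        resp.raise_for_status()
        data = resp.json()
        interfaces = data.get("openconfig-interfaces:interfaces", {}).get("interface", [])
        for intf in interfaces:
            counters = intf.get("state", {}).get("counters", {})
            print(intf.get("name"), counters.get("in-octets"), counters.get("out-octets"))

    if __name__ == "__main__":
        fetch_interface_counters()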

With deep packet buffering and intelligent congestion management, the OCP network switch is optimized for the “All-to-All” traffic patterns typical of AI training and high-performance computing (HPC), ensuring zero packet loss for critical synchronization traffic.

32×400G QSFP-DD Low-Latency Data Center Switch with Enterprise SONiC Distribution Preloaded

The Asterfusion CX732Q-N-O is a high-performance 32-port 400G data center switch designed for the super spine layer in next-generation CLOS network architectures. Built on the Marvell Teralynx platform, it delivers 12.8 Tbps of line-rate L2/L3 throughput with ultra-low latency as low as 500ns, meeting the stringent performance demands of AI/ML clusters, hyperscale data centers, HPC, and cloud infrastructure.

KEY POINTS

  • Open Rack Standard: ORv3
  • Ultra Low Latency: 500ns
  • Port Density: 32x400G
  • Power Supply: ORv3/48VDC

Key Facts

  • Ultra Low Latency: 500ns
  • Inference Latency: ↓20.4%
  • Token Generation Rate: ↑27.5%

Optimized Software for Advanced Workloads – AsterNOS

AsterNOS is Asterfusion’s enterprise-grade SONiC distribution, purpose-built for performance and pre-installed on every CX Series switch. It combines the openness of community SONiC with enterprise-level reliability, enhanced usability, and real-world readiness. Unlike the raw community version, AsterNOS is battle-tested and feature-rich, delivering the rock-solid stability, intuitive experience, and advanced capabilities that modern data centers demand.

Pre-installed with Asterfusion Enterprise SONiC NOS, the CX732Q-N offers a robust feature set tailored for AI and HPC workloads, such as Flowlet Switching, Packet Spray, WCMP, Auto Load Balancing, and INT-based Routing. The 400G switch is ideal for AI/ML infrastructure, hyperscale and cloud data centers, and high-performance computing (HPC).
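
To make the load-balancing ideas above concrete, here is a conceptual Python sketch of flowlet switching. It is not AsterNOS code: the gap threshold and next-hop names are made up, and a real switch does this in the forwarding ASIC. The idea is that a flow is split into "flowlets" whenever the inter-packet gap exceeds the path-delay skew, so each flowlet can take a different ECMP path without reordering packets.

    # Conceptual flowlet-switching sketch (illustrative, not AsterNOS source).
    import random

    FLOWLET_GAP_US = 50                                    # assumed gap threshold in microseconds
    NEXT_HOPS = ["spine1", "spine2", "spine3", "spine4"]   # hypothetical ECMP next hops

    class FlowletTable:
        def __init__(self):
            self._state = {}                       # flow 5-tuple -> (last_seen_us, next_hop)

        def next_hop(self, flow, now_us):
            last_seen, hop = self._state.get(flow, (None, None))
            if last_seen is None or now_us - last_seen > FLOWLET_GAP_US:
                hop = random.choice(NEXT_HOPS)     # new flowlet: free to pick a new path
            self._state[flow] = (now_us, hop)
            return hop

    table = FlowletTable()
    flow = ("10.0.0.1", "10.0.1.1", 17, 49152, 4791)   # src, dst, proto, sport, dport
    print(table.next_hop(flow, now_us=0))          # starts a flowlet
    print(table.next_hop(flow, now_us=10))         # same flowlet, same path
    print(table.next_hop(flow, now_us=500))        # long gap, new flowlet, path may change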

  • Cloud Virtualization: DCI, VXLAN, BGP EVPN
  • RoCEv2: Easy RoCE, PFC/ECN/DCBX
  • High Availability: ECMP, MC-LAG, EVPN Multihoming
  • Ops Automation: In-band Telemetry, REST API/NETCONF/gNMI


Specifications

Ports: 32x 400G QSFP-DD, 2x 10G SFP+
Switch Chip: Teralynx 7 IVM77700
CPU: Broadwell-DE D-1508
Switching Capacity: 12.8 Tbps
Packet Forwarding Rate: 7,600 Mpps
Latency: ~500ns
Packet Buffer: 70 MB
SDRAM: DDR4 16 GB (compatible with 64 GB)
SSD Memory: 256 GB
Number of VLANs: 4,094
MAC Address Entries: 44,000
ARP Entries: 36,000
Jumbo Frame: 9,216 bytes
IPv4 Routes: 96,000 host + 72,000 prefix
IPv6 Routes: 47,000 host + 36,000 prefix
Hot-swappable DC Power Supplies: 1 by default (optional 1+1 configuration available)
Max. Power Consumption: <1,000 W
Hot-swappable Fans: 6 (5+1 redundancy)
Dimensions (HxWxD): 537 x 45 x 800 mm
Operating Temperature: 0 - 40 ℃
Thermal Airflow: Front-to-Rear

Quality Certifications

ISO 9001 (SGS)
ISO 14001 (SGS)
ISO 45001 (SGS)
ISO/IEC 27001 (SGS)
FCC
CE

Features

OCP ORv3 Rack Standard

The OCP ORv3 rack standard improves power supply, heat dissipation, and cabling efficiency. The core advantages of the OCP ORv3 400G data center switch are: standardized open rack ecosystem (OCP ORv3, with optional ORv2 compatibility), 400G high bandwidth, and sustainability. 

Traditional Architecture vs. OCP ORv3 Architecture

The ORv3 standard improves power supply capabilities and upgrades connector specifications for high-power devices. For example, it increases the maximum current limit per contact, which facilitates unified power supply racks and their integration with 400G switches, batteries, servers, and other devices.

Marvell Teralynx 7 Inside: Unleash Industry-Leading Performance & Ultra-Low Latency

With Marvell Teralynx 7 at its core, Asterfusion 400G switches achieve blazing-fast 500ns latency, ideal for AI/ML, HPC, and NVMe workloads where speed drives results.

  • Switching Capacity: 12.8 Tbps
  • End-to-end Latency: 500ns
  • Packet Buffer: 70 MB


Full RoCEv2 Support for Ultra-Low Latency and Lossless Networking

Asterfusion 400G switches deliver full RoCEv2 support with ultra-low latency and near-zero CPU overhead. Combined with PFC and ECN for advanced congestion control, they enable a true lossless network.
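
As a rough sketch of how ECN-based congestion control behaves on a RoCEv2 fabric, the Python snippet below implements RED-style marking between two queue thresholds. The Kmin/Kmax/Pmax values are illustrative assumptions, not this switch's defaults: below Kmin nothing is marked, above Kmax everything is marked, and in between the marking probability rises linearly so senders back off before PFC pauses are needed.

    # Illustrative RED/ECN marking logic (thresholds are assumptions).
    import random

    KMIN_KB = 100     # assumed lower queue threshold
    KMAX_KB = 400     # assumed upper queue threshold
    PMAX = 0.2        # assumed marking probability at KMAX

    def should_mark_ecn(queue_depth_kb: float) -> bool:
        if queue_depth_kb <= KMIN_KB:
            return False
        if queue_depth_kb >= KMAX_KB:
            return True
        p = PMAX * (queue_depth_kb - KMIN_KB) / (KMAX_KB - KMIN_KB)
        return random.random() < p

    for depth in (50, 150, 250, 350, 450):
        print(depth, "KB ->", "mark" if should_mark_ecn(depth) else "no mark")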

Asterfusion Network Monitoring & Visualization

AsterNOS supports Node Exporter, exposing CPU, traffic, packet loss, latency, and RoCE congestion metrics to Prometheus. Paired with Grafana, it enables real-time, visual insight into network performance.
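
As a minimal example of pulling these metrics programmatically, the Python sketch below queries the Prometheus HTTP API for a standard Node Exporter metric and prints per-interface receive rates. The Prometheus address is a placeholder, and the metric name assumes a stock Node Exporter; substitute whatever your deployment actually scrapes from the switch.

    # Query Prometheus for per-interface receive rates (addresses are placeholders).
    import requests

    PROMETHEUS = "http://prometheus.example.local:9090"    # hypothetical Prometheus server
    QUERY = "rate(node_network_receive_bytes_total[5m])"   # standard Node Exporter metric

    resp = requests.get(PROMETHEUS + "/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    for series in resp.json()["data"]["result"]:
        labels = series["metric"]
        _timestamp, value = series["value"]
        print(labels.get("instance"), labels.get("device"), float(value), "bytes/s")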

O&M in Minutes, Not Days

Automate with Easy RoCE, ZTP, Python and Ansible, SPAN/ERSPAN monitoring, and more, cutting configuration time, errors, and costs.

Panel Illustration

Front and rear panel views of the CX732Q-N-ORv3.

FAQs

What is an OCP ORv3 400G data center switch?
  • It is a data center switch that complies with the OCP Open Rack v3 (ORv3) standard and provides 400G Ethernet ports (for example, 32×400G or 64×400G), typically used for spine/leaf or AI/ML cluster interconnect.

  • Most products use 12.8T/25.6T ASICs (such as 32×400G or 64×400G), matching ORv3’s 48V power and high‑power cooling design.

What advantages does ORv3 rack have over traditional 19‑inch racks or ORv2?
  • ORv3 introduces a 48V bus and around 100A IT gear connector capability, increasing the per‑contact current rating compared with ORv2 and making it more suitable for high‑power 400G switches.

  • Standardized power shelves, busbars, and IT gear connectors allow servers and switches to share unified power within the rack, simplifying energy monitoring and device replacement.

What are the typical form factor and specs of an OCP ORv3 400G switch?
  • Common configurations include 32×400G QSFP‑DD (around 12.8T) or 64×400G (25.6T), sometimes with additional 10G/25G management or telemetry ports.

  • Chassis are usually 1U/2U with front‑to‑back or back‑to‑front airflow options, and hot‑swappable redundant PSUs supporting ORv3 AC/DC rack power architecture.

How are ORv3 400G switches typically used in AI/ML scenarios?
  • They act as TOR/leaf switches for GPU servers, providing 400G uplinks (for example 4×100G or 1×400G) to spine or core switches to build RoCEv2 fabrics for AI training clusters.

  • Combined with low‑latency ASICs and RoCEv2 optimizations (ECN, PFC, queue tuning), they form a lossless or near‑lossless network with high‑performance compute and storage nodes.

What are the cabling and optical module requirements for 400G?
  • Common pluggable formats include QSFP‑DD, using 400G DR4, FR4, SR8 optics or DAC/AOC cables, selected according to topology and link distance.

  • Because each 400G port carries very high traffic, optical quality, link budget, and testing requirements are stricter than for 100G; standardized selection and pre‑production validation are highly recommended.

Will our current 400G selection affect a future migration to 800G?
  • 400G remains mainstream now, but leading silicon and system vendors have published clear 800G roadmaps, so it is wise to choose suppliers with a transparent 400G→800G evolution path.

  • By planning cabling carefully (for example, using fiber and patching that can be reused for 400G/800G), you can keep most cabling assets and only replace switches and optics during future upgrades.