Introduction
Asterfusion’s CX532P-N 32-port data center switch and CX564P-N 64-port 100G QSFP28 data center switch are both built on Marvell Teralynx silicon and preloaded with Enterprise SONiC NOS. The two models support line-rate L2/L3 switching with ultra-low latency and unmatched packet processing power.
Whether you’re scaling out a hyperscale fabric, building an AI edge inference cluster, or refreshing your core enterprise network, this FAQ-style blog compares and answers the most important questions about both models, so you can pick the right fit.
CX532P-N 32-port Data Center Switch


CX564P-N 64-port Data Center Switch


Hardware Capabilities
Q: What are the forwarding performance and total bandwidth of the 64×100G switch?
A: Under zero-packet-loss conditions, the measured performance is: packet forwarding rate 7,600 Mpps and bandwidth 6.4 Tbps.
Q: What are the forwarding performance and total bandwidth of the 32×100G switch?
A: Under zero-packet-loss conditions, the measured performance is: packet forwarding rate 6,300 Mpps and bandwidth 3.2 Tbps.
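The bandwidth figures above follow directly from port count × per-port speed. A quick sanity check:

```python
def aggregate_bandwidth_tbps(ports: int, gbps_per_port: int) -> float:
    """Aggregate switching bandwidth in Tbps (ports x per-port rate)."""
    return ports * gbps_per_port / 1000

# CX564P-N: 64 x 100G -> 6.4 Tbps; CX532P-N: 32 x 100G -> 3.2 Tbps
print(aggregate_bandwidth_tbps(64, 100), aggregate_bandwidth_tbps(32, 100))
```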
Q: What are the key advantages of Asterfusion’s 64×100G & 32×100G data center switch based on the Marvell Teralynx 7 chip?
A: Our switch has ultra-low latency of just 500 ns. Compared with Broadcom Tomahawk 3 (64 MB packet buffer, 5,210 Mpps forwarding rate), the Teralynx 7-based Asterfusion 100G data center switches offer a larger packet buffer (70 MB) and a higher packet forwarding rate (7,600 Mpps).
Q: Is the switch compatible with mainstream 100G optical modules on the market (e.g., LR4, SR4, CWDM4, ZR, ZR4)?
A: We support the 100G-SR4, LR4, CWDM4, ZR, and ZR4 standards and are compatible with mainstream vendor modules. Our design follows standard specifications, though hardware tuning may be required during integration. Testing was performed with 5 W modules, and a sufficient power budget has been reserved; since typical 100G ZR4 modules on the market draw around 6 W, there is no specific limit on the number of supported modules.
Q: Are QSFP28 ports backward compatible? Do they support breakout to 4×25G or 2×50G?
A: The QSFP28 ports support breakout to 4×25G. In theory, 2×50G can be achieved through a gearbox conversion in the optical module's DSP, but this currently requires customization.
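The breakout options follow from the QSFP28 electrical layout: the cage carries four 25G lanes, so an even split of lanes determines the per-port speed. A minimal sketch of that arithmetic (the lane constants are standard QSFP28 figures, not AsterNOS-specific):

```python
QSFP28_LANES = 4   # a QSFP28 cage carries four electrical lanes
LANE_GBPS = 25     # each lane runs at 25 Gbps

def breakout_speed(logical_ports: int) -> int:
    """Gbps per logical port when splitting one QSFP28 cage evenly."""
    if QSFP28_LANES % logical_ports != 0:
        raise ValueError("lanes must divide evenly among logical ports")
    return QSFP28_LANES // logical_ports * LANE_GBPS

# 1 -> 100G, 4 -> 4x25G; 2 -> 2x50G needs the gearbox customization noted above
print(breakout_speed(1), breakout_speed(2), breakout_speed(4))
```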
Q: Does the CX664D-N switch support mixed-speed environments (e.g., 100G / 50G / 25G)?
A: Yes, it supports both 100G and 50G data rates. Because the SerDes lanes run at 50G, a 200G port must first be configured down to 100G before it can break out into 4×25G.
Q: What types of optical modules are supported on 100G ports?
A: Our 100G ports support mainstream optical modules and cables, such as SR4, DR4, FR4, LR4, ZR4, DR1, SWDM4, CWDM4, etc.
Q: Does it support lossless interconnection with AI accelerator servers from vendors like NVIDIA and AMD?
A: Supported.
Q: Are 100G data center switches compatible with mixed deployments involving InfiniBand and traditional Ethernet devices? Are there any migration recommendations?
A: Hybrid deployment with traditional Ethernet devices is supported. At the module level, interoperability is ensured as long as both ends use the same protocol.
Enterprise SONiC Features
Q: Are automation interfaces such as gNMI, gRPC, REST API, and NetConf supported? Can the switch be integrated with OpenStack, Kubernetes, and Ansible?
A: Our AsterNOS fully supports these automation interfaces, providing flexible network management and configuration capabilities and enabling integration with mainstream automation and orchestration platforms, including OpenStack, Kubernetes, and Ansible.
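As an illustration of driving such interfaces programmatically, the sketch below builds a RESTCONF-style URL for querying interface state. This is a generic pattern, not a documented AsterNOS endpoint; the data path shown is a hypothetical OpenConfig example:

```python
from urllib.parse import quote

def restconf_url(host: str, data_path: str) -> str:
    """Build a RESTCONF-style data URL. The path used below is a
    hypothetical OpenConfig example, not a documented AsterNOS endpoint."""
    return f"https://{host}/restconf/data/{quote(data_path, safe='/:=')}"

url = restconf_url("10.0.0.1", "openconfig-interfaces:interfaces/interface=Ethernet0")
# A real query would then be issued with e.g.:
#   requests.get(url, headers={"Accept": "application/yang-data+json"}, auth=(user, pw))
print(url)
```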
Q: Does the solution support BGP-EVPN with VXLAN overlay? Can it be used to build a large Layer-2 fabric in a Spine-Leaf topology?
A: Our AsterNOS provides BGP-EVPN and VXLAN overlay support, with capacity for 1K VXLAN tunnels and 4K VXLAN gateways, meeting modern data centers' needs for flexible and efficient large-scale Layer 2 networks.
Q: We plan to deploy a large Layer-2 fabric network. Does the solution support BGP EVPN / VXLAN overlay architecture, and does it support EVPN multi-homing?
A: Our AsterNOS provides BGP-EVPN and VXLAN Overlay support to meet the needs of modern data centers for flexible and efficient large-scale Layer 2 networks, and supports EVPN multi-homing to provide high availability and redundancy.
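Since AsterNOS builds on the FRR stack (noted below), an EVPN leaf configuration can be templated in the usual FRR style. The snippet renders a minimal, illustrative BGP-EVPN leaf config; the ASN, router ID, and peer addresses are placeholder values, and real deployments would add VNI mappings and policy:

```python
def evpn_leaf_config(asn: int, router_id: str, peers: list) -> str:
    """Render a minimal FRR-style BGP-EVPN leaf config (illustrative only)."""
    lines = [f"router bgp {asn}", f" bgp router-id {router_id}"]
    for p in peers:
        lines.append(f" neighbor {p} remote-as external")
    lines.append(" address-family l2vpn evpn")
    for p in peers:
        lines.append(f"  neighbor {p} activate")
    lines += ["  advertise-all-vni", " exit-address-family"]
    return "\n".join(lines)

print(evpn_leaf_config(65001, "10.0.0.1", ["10.1.1.1", "10.1.1.2"]))
```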
Q: Does the system support custom control plane plugins or BGP policies?
A: Thanks to its flexible containerized architecture and FRR stack, our AsterNOS supports custom control plane plugins and BGP policies.
Q: What are the key differences between Asterfusion Enterprise SONiC and the community SONiC version? Is it stable and production-ready?
A: Our Enterprise SONiC Distribution is based on open-source SONiC and has been deeply optimized for AIDC and enterprise scenarios. Compared to the community version of SONiC, it offers significant advantages in management interfaces (REST API, NetConf/YANG, gNMI…), control plane features (Packet Spray, WCMP, INT-Driven Routing, Anycast Gateway, WiFi Roaming, EVPN-MH), hardware compatibility (Marvell Teralynx, Prestera; Broadcom Tomahawk III, Trident III), and production stability. For details, refer to the comparison list.
Application Scenarios
For AI/HPC/Financial Low-Latency Scenarios
Q: Does the switch support RoCEv2? What specific ECN, PFC, and DCB features are supported?
A: RoCEv2, ECN, PFC, and DCBX are all supported, along with Easy RoCE, which configures these parameters with one click.
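To make the "one-click" profile concrete, the sketch below models the kind of lossless QoS mapping such a feature typically applies: RDMA traffic classified by DSCP into a PFC-enabled queue, with ECN/WRED thresholds on that queue. All values are illustrative conventions from common RoCEv2 deployments, not the actual Easy RoCE defaults:

```python
# Illustrative RoCEv2 lossless profile (not actual Easy RoCE values):
# RDMA traffic on DSCP 26 -> queue 3 with PFC; CNP packets on DSCP 48 -> queue 6.
ROCE_PROFILE = {
    "dscp_to_tc": {26: 3, 48: 6},
    "pfc_enabled_queues": [3],
    "ecn": {"queue": 3, "kmin_kb": 100, "kmax_kb": 400},  # WRED/ECN thresholds
}

def is_lossless(dscp: int) -> bool:
    """True if traffic with this DSCP lands in a PFC-protected queue."""
    tc = ROCE_PROFILE["dscp_to_tc"].get(dscp)
    return tc in ROCE_PROFILE["pfc_enabled_queues"]
```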
Q: Does the switch support In-band Telemetry (INT) for network visibility? Can it be used for visualizing training traffic?
A: Yes. The INT collector gathers and reports network state in real time on the data plane. For training traffic, our switches provide microburst detection, latency monitoring, and queue congestion reporting, among other services.
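As a rough software stand-in for what hardware microburst detection does, the sketch below flags runs of consecutive queue-depth samples above a threshold. The sampling model and threshold are assumptions for illustration; real INT works on per-packet hardware telemetry:

```python
def detect_microbursts(queue_depth_bytes, threshold, min_run=2):
    """Return (start, end) index pairs of runs of samples above threshold."""
    bursts, run_start = [], None
    for i, depth in enumerate(queue_depth_bytes):
        if depth > threshold:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_run:
                bursts.append((run_start, i - 1))
            run_start = None
    # Close out a burst that runs to the end of the sample window
    if run_start is not None and len(queue_depth_bytes) - run_start >= min_run:
        bursts.append((run_start, len(queue_depth_bytes) - 1))
    return bursts

print(detect_microbursts([10, 900, 950, 20, 970, 15], threshold=500))
```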
Q: In financial trading systems, does the switch support nanosecond-level time synchronization (e.g., via PTP v2)?
A: Due to limitations in hardware clock precision and processing capabilities, the TL7 series currently cannot achieve the time synchronization accuracy required for financial trading systems.
Q: Has the switch been certified for compatibility with major GPU vendors such as NVIDIA, AMD, or Intel?
A: Our switches have successfully passed compatibility verification with the NVIDIA H20 GPU.
Q: How Are 100G Data Center Switches Optimized for AI/ML Workloads?
A: Our switches integrate advanced intelligent flow control technologies to ensure efficient operation of AI/ML workloads, including INT-Driven Routing, Flowlet-based ALB, WCMP, and more.
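Of these, WCMP is the simplest to sketch: instead of hashing flows evenly across equal-cost paths, each next hop is weighted. One common way to model this is a weight-expanded table indexed by a flow hash (a sketch of the idea, not AsterNOS internals):

```python
import hashlib

def wcmp_nexthop(flow_key: str, nexthops: dict) -> str:
    """Weighted ECMP: hash the flow key into a table where each next hop
    appears once per unit of weight. Same flow always maps to the same hop."""
    table = [nh for nh, weight in sorted(nexthops.items()) for _ in range(weight)]
    idx = int(hashlib.sha256(flow_key.encode()).hexdigest(), 16) % len(table)
    return table[idx]

# Example: spine1 carries 3x the traffic share of spine2
hops = {"spine1": 3, "spine2": 1}
print(wcmp_nexthop("10.0.0.1:10.0.0.2:5000:4791", hops))
```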
TCO & Commercial Considerations
Q: Compared to Arista and NVIDIA (Mellanox), where does Asterfusion offer a better price-performance ratio?
A: The price-performance advantage shows up in hardware procurement cost, power consumption, the software licensing model, and overall TCO (Total Cost of Ownership).
Q: In multi-tenant environments, does the switch support VLAN and VRF isolation? Can it serve dual roles as both access and aggregation layer?
A: Our AsterNOS supports Layer 2 and Layer 3 tenant isolation based on VLAN and VRF, supporting up to 4K VLANs and 1K VRFs. It is also compatible with multi-tenant scenarios in large-scale VXLAN-EVPN architectures and provides a complete set of L2 and L3 features. Hardware specifications can be selected according to the application scenario.
Q: What is the average delivery time? Is custom optical module selection or pre-configuration service supported?
A: The average delivery time is one week. Optical module selection requires hardware debugging; customization is supported, subject to meeting the MOQ (Minimum Order Quantity).
Q: What is the general supply delivery time? Are customized modules or system integration services available?
A: The average delivery time is one week; for orders over 100 units, it is approximately 4 to 8 weeks.
Q: Is PoC (Proof of Concept) testing support available? Can testing or initial setup be done remotely?
A: Yes, PoC testing is supported, and testing can be done remotely. We recommend purchasing a sample unit; we offer a deeply discounted sample price for evaluation testing.
Conclusion
The 64-port and 32-port 100G switches are your go-to solutions for dense connectivity, low latency, and flexible automation in cloud, finance, or HPC environments. Powered by Teralynx and SONiC, they bring open networking to the core of modern data center design.
Ready to test it? Contact us for a free PoC unit or a tailored deployment plan.