Written by Asterfusion
In the 5G era, network infrastructure faces new challenges: massive numbers of endpoints, ultra-high bandwidth, and extremely low latency. "Network latency" (network delay) is no longer an unfamiliar term, but what exactly is it? What causes it? How do you test your network's latency, and how can you ultimately reduce it? This article explores these questions.
In networking terms, network latency = network delay
Network latency is a design and performance characteristic of telecommunication networks. It specifies the delay with which data is transferred from one communication endpoint to another over the network, and it is usually measured in multiples or fractions of a second. The lower the network's latency, the faster the connection, so if you need a high-speed connection, make sure your network's latency is as low as possible.
There are many causes of network delay, which may include the following factors:
Since round-trip delay can be measured from a single point, it is used more often.
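To illustrate why single-point measurement is convenient: the time a TCP handshake takes approximates one round trip, so RTT can be estimated entirely from the client side. A minimal sketch in Python (the function name and the loopback demo are illustrative, not a standard tool):

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Approximate round-trip delay by timing a TCP handshake:
    connect() returns after SYN -> SYN/ACK, i.e. one round trip."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0  # milliseconds

if __name__ == "__main__":
    # Demo against a local listener so the sketch is self-contained;
    # point host/port at a real server to measure your own network.
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    port = server.getsockname()[1]
    print(f"RTT to loopback: {tcp_rtt_ms('127.0.0.1', port):.3f} ms")
    server.close()
```

Tools such as `ping` work on the same single-point principle, using ICMP echo request/reply instead of a TCP handshake.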
Switch delay is measured from port to port on an Ethernet switch. It can be reported in several ways, depending on the switching paradigm the switch adopts, and it can be measured with different methods, such as the RFC 2544 benchmarking methodology, ping-pong tests, or Netperf.
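The ping-pong method mentioned above bounces a small message back and forth many times and divides the total elapsed time by the number of exchanges. A simplified software illustration on loopback (real test gear uses hardware timestamping; names here are mine):

```python
import socket
import threading
import time

def _echo_server(listener: socket.socket) -> None:
    """Echo every received byte back to the sender until the peer closes."""
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(1)
            if not data:
                break
            conn.sendall(data)

def ping_pong_latency_us(rounds: int = 1000) -> float:
    """Estimate average one-way latency (RTT / 2) over many 1-byte
    request/reply exchanges; result in microseconds."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    threading.Thread(target=_echo_server, args=(listener,), daemon=True).start()

    with socket.create_connection(listener.getsockname()) as client:
        # Disable Nagle's algorithm so each 1-byte ping is sent immediately.
        client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        start = time.perf_counter()
        for _ in range(rounds):
            client.sendall(b"x")
            client.recv(1)
        elapsed = time.perf_counter() - start
    listener.close()
    return elapsed / rounds / 2 * 1e6

if __name__ == "__main__":
    print(f"Loopback one-way latency: ~{ping_pong_latency_us():.1f} us")
```

Averaging over many rounds smooths out scheduling jitter, which is why ping-pong style tests always run thousands of exchanges rather than one.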
With this analysis, network administrators can reroute packets to minimize network latency.
Latency, throughput, and bandwidth are three important contributors to the quality of network transmission. Although the three work together, they measure different things. To make this easier to understand, imagine data packets as cars on a busy highway: bandwidth is how many lanes the road has, latency is how long a car takes to reach its destination, and throughput is how many cars actually arrive per unit of time.
Low latency combined with low bandwidth still means low throughput: although packets should technically be delivered without delay, low bandwidth means there can still be severe congestion. With high bandwidth and low latency, on the other hand, throughput is greater and connections are more efficient.
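This interplay can be put into numbers. For window-based transports such as TCP, achievable throughput is bounded by both the link bandwidth and the amount of unacknowledged data the sender may keep in flight, i.e. window size divided by RTT (the bandwidth-delay product rule). A back-of-the-envelope sketch (the helper name is illustrative):

```python
def max_throughput_bps(bandwidth_bps: float, window_bytes: float, rtt_s: float) -> float:
    """Throughput is limited by the link bandwidth AND by how much
    unacknowledged data can be in flight at once: window / RTT."""
    return min(bandwidth_bps, window_bytes * 8 / rtt_s)

# 1 Gbit/s link, 64 KiB window:
# low latency (0.5 ms) -> the window is not the bottleneck
print(max_throughput_bps(1e9, 65536, 0.0005) / 1e6, "Mbit/s")  # 1000.0 Mbit/s
# high latency (50 ms) -> throughput collapses despite the same bandwidth
print(max_throughput_bps(1e9, 65536, 0.05) / 1e6, "Mbit/s")    # 10.48576 Mbit/s
```

The second case shows why a "fast" high-bandwidth link can still feel slow: a hundredfold increase in latency cuts the achievable throughput by roughly the same factor.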
To cope with the rapid growth of cloud networks and reduce network latency in data center scenarios, Asterfusion offers industry-leading low-latency Ethernet switches.
The CX-N series offers ultra-low-latency switching, as low as ~500 ns in most application scenarios, which more than satisfies the requirements of latency-sensitive applications such as NVMe, IoT, VR/AR/MR, high-frequency trading, big data analytics, and machine learning.
The CX-N switches support RoCEv2, which allows remote direct memory access (RDMA) over an Ethernet network. RDMA removes the delay of server-side data processing and frees the CPU for more important tasks, such as running applications and processing large amounts of data. It increases bandwidth and reduces latency, jitter, and CPU consumption, helping to build highly efficient and cost-effective data center networks.
Everyone experiences network latency in daily business, and it can be a serious threat to deadlines, expected results, and ultimately ROI for network operators and enterprises. This is where the Asterfusion CX-N ultra-low-latency switch comes in. Its Teralynx ASIC combines ultra-low-latency switching with powerful QoS features for RoCEv2, effectively increasing bandwidth while reducing latency and jitter. Asterfusion CX-N switches make it possible to build highly efficient and cost-effective data center networks.
Asterfusion Networks is a leading provider of open networking infrastructure solutions. We provide an open, disaggregated, and highly programmable network fabric for next-generation data centers and campuses, built on white-box switching.
For more information on Asterfusion’s products and solutions, please visit https://cloudswit.ch/. For sales inquiries, please send an email to [email protected]
Floor 4, Building A2, Shahutiandi, No.192 Tinglan Road, SIP
23F - E08, Dinghe Tower, Jintian Road, Futian, Shenzhen