Converged Ethernet vs. more links

by Jim O'Reilly – networkcomputing.com

Converged Enhanced Ethernet improves Ethernet performance but is expensive. With the cost of 10 GbE equipment dropping and 25 GbE on the horizon, it may be cheaper to add more links.

Though benchmarks often don't show it, Ethernet is a sloppy transmission vehicle. It relies on a collision mechanism: when two sources address the same receiver at the same time, both are forced to back off and retry, each after a random timeout. This collide-and-retry cycle can cut throughput considerably, and it adds latency.
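To make that concrete, here is a minimal Python sketch of the binary exponential backoff scheme classic shared-media Ethernet (CSMA/CD) uses after a collision. The slot time and retry cap follow the 802.3 convention; the loop is purely illustrative.

```python
import random

SLOT_TIME_US = 51.2   # 512 bit times at 10 Mbps; scales with link speed
MAX_ATTEMPTS = 16     # 802.3 gives up after 16 tries

def backoff_delay_us(attempt: int) -> float:
    """Binary exponential backoff: after the nth collision, wait a
    random number of slot times in [0, 2^min(n, 10) - 1]."""
    k = random.randint(0, 2 ** min(attempt, 10) - 1)
    return k * SLOT_TIME_US

# Each retry widens the random window, so repeated collisions add
# unpredictable delay -- the "sloppiness" described above.
for attempt in range(1, 6):
    print(f"collision #{attempt}: wait {backoff_delay_us(attempt):.1f} us")
```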

Performance degradation hits throughput-sensitive operations such as networked storage, while added latency affects time-sensitive functions such as messaging between financial systems.

One solution to the problem is to add buffers in the system, allowing data to be held for a very brief period until the receiving port is free of the previous communication. This is Converged Enhanced Ethernet (CEE). Typically, the buffers are added to the output side of switches, with each port able to stack several messages, and a throttling mechanism momentarily stops transfers when the allocated buffer fills up. The added complexity, and the lack of intense competition, add to the price of CEE gear.
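The buffer-plus-throttle idea can be sketched in a few lines of Python. The `PortQueue` class below and its watermark values are illustrative assumptions, not a real switch implementation, but the pattern mirrors the pause-style flow control CEE relies on.

```python
from collections import deque

class PortQueue:
    """Toy model of a CEE-style output port: buffer frames, and ask
    upstream senders to pause when the buffer nears capacity."""

    def __init__(self, capacity=8, pause_at=6, resume_at=2):
        self.frames = deque()
        self.capacity = capacity
        self.pause_at = pause_at    # high watermark: throttle senders
        self.resume_at = resume_at  # low watermark: let them resume
        self.paused = False

    def enqueue(self, frame) -> bool:
        if len(self.frames) >= self.capacity:
            return False  # buffer full: frame would be dropped
        self.frames.append(frame)
        if len(self.frames) >= self.pause_at and not self.paused:
            self.paused = True
            print("send PAUSE upstream")  # throttle instead of dropping
        return True

    def transmit(self):
        if not self.frames:
            return None
        frame = self.frames.popleft()
        if self.paused and len(self.frames) <= self.resume_at:
            self.paused = False
            print("send RESUME upstream")
        return frame
```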

Another method to resolve the collision issue is seen in expensive ATCA blade systems, where each server is directly connected to all the other servers in the blade chassis. Since there are only two end-node devices on each link, collisions can't occur, performance is high, and latency is very low. Unfortunately, such systems don't scale: a full mesh needs a dedicated link for every pair of nodes, so the link count grows quadratically.
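A quick calculation shows why. The node counts below are illustrative (14 slots is a typical ATCA chassis):

```python
def full_mesh_links(n: int) -> int:
    """A full mesh of n nodes needs a dedicated link per node pair:
    n * (n - 1) / 2 links in total."""
    return n * (n - 1) // 2

# Link count grows quadratically, which is why direct-connect
# designs stop making sense beyond a single chassis.
for n in (4, 14, 48, 100):
    print(f"{n:3d} nodes -> {full_mesh_links(n):5d} links")
# 4 -> 6, 14 -> 91, 48 -> 1128, 100 -> 4950
```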

Rapidly falling prices for 10 Gigabit Ethernet (GbE) gear present a new solution: performance can be obtained by adding more links to each end node (server or storage unit). This manifests either as discrete 10 GbE links or as ganged channels that aggregate to 2x or 4x the performance. There are 40 GbE switches and NICs available that use four 10 GbE links, with each datagram spread over the four channels.
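As a rough model of the ganged-channel idea, this Python sketch stripes a frame's bytes round-robin across four lanes. Real 40 GbE distributes encoded blocks across its lanes rather than raw bytes, so treat this purely as an illustration of the concept.

```python
def stripe(frame: bytes, lanes: int = 4) -> list[bytes]:
    """Spread a frame round-robin across `lanes` channels, so each
    channel carries roughly 1/lanes of the data and the frame
    completes in roughly 1/lanes of the single-link time."""
    return [frame[i::lanes] for i in range(lanes)]

frame = bytes(range(16))
for i, lane in enumerate(stripe(frame)):
    print(f"lane {i}: {lane.hex()}")
```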

The 40 GbE approach certainly improves performance, but the effective throughput of a 40 GbE system as a percentage of theoretical maximum is similar to that for a 10 GbE system. "Go-around" latency is reduced, since completion time for a datagram is one quarter of that on a 10 GbE system.
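The latency arithmetic is straightforward. This sketch computes the wire (serialization) time for a standard 1,500-byte frame at each line rate, ignoring headers, switch delay, and propagation, so the figures are approximate:

```python
FRAME_BITS = 1500 * 8  # standard Ethernet payload MTU

def wire_time_us(gbps: float) -> float:
    """Time to clock one frame onto the wire at a given line rate."""
    return FRAME_BITS / (gbps * 1e3)  # Gb/s -> bits per microsecond

for rate in (10, 25, 40, 50, 100):
    print(f"{rate:3d} GbE: {wire_time_us(rate):.2f} us per 1500-byte frame")
# 10 GbE -> 1.20 us, 40 GbE -> 0.30 us: one quarter, as noted above.
```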

Now there is silicon to support a 25 GbE link, giving us 25/50/100 GbE connections. This provides a 2.5x boost in a single link over the 10 GbE lanes used in current 10 GbE and 40 GbE systems. The new links can use previously installed cabling, so upgrades are relatively inexpensive, and, again, both performance and latency improve. However, because 25 GbE switches are much more complex, it will be a while before they're available.
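The per-lane arithmetic behind the 25/50/100 GbE family, as a sketch:

```python
LANE_OLD, LANE_NEW = 10, 25  # Gb/s per physical lane

# Each rate family reuses the same lane counts over the same cabling
# layouts; only the per-lane signaling rate changes.
print(f"per-lane boost: {LANE_NEW / LANE_OLD:.1f}x")       # 2.5x
print(f"1 lane : {LANE_OLD} GbE -> {LANE_NEW} GbE")        # 10 -> 25
print(f"2 lanes:        -> {2 * LANE_NEW} GbE")            # 50 GbE
print(f"4 lanes: {4 * LANE_OLD} GbE -> {4 * LANE_NEW} GbE")  # 40 -> 100
```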

We have a situation, then, where the fastest performance is likely to come from a higher channel count of "standard" Ethernet, especially with 25 GbE links. In fact, latency may improve to the point that the standard solution with fast links closes in on the converged solution. Of course, results will vary by use case and customer.

The debate then shifts to total cost of ownership (TCO), with a converged environment costing much more than a standard configuration. The cost difference might justify more links of standard Ethernet, or an upgrade to 25 GbE, over a move to converged gear.
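One back-of-the-envelope way to frame that decision is sketched below. All per-port prices and the server count are placeholders for illustration, not market quotes, and a real TCO analysis would also count cabling, optics, power, and support.

```python
def fabric_cost(ports_needed: int, port_price: float,
                links_per_node: int = 1) -> float:
    """Toy TCO comparison: total port cost for a given fan-out."""
    return ports_needed * links_per_node * port_price

# Hypothetical per-port prices, purely for illustration.
cee_port, std_10g_port = 900.0, 300.0

servers = 100
converged = fabric_cost(servers, cee_port)
standard = fabric_cost(servers, std_10g_port, links_per_node=2)  # ganged pair
print(f"converged: ${converged:,.0f}  vs  2x standard links: ${standard:,.0f}")
```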

Overall, it looks likely that the availability of faster Ethernet links and ganged multi-link connections will slow deployment of Converged Ethernet.