As we close in on 2006, LAN switching has been a part of enterprise networks for years. Typically, as technology matures the focus moves away from “speeds and feeds” to other areas. Although many switches boast impressive features in areas such as QoS, virtual LANs and availability, raw throughput never leaves the scene.
With sophisticated stackables now on the market, vendors are eager to prove their stackable switching prowess. Therein lies the problem: how does one measure this?
Historically, high-performance, high-density switches have been delivered as chassis-based platforms populated with multiple blades of switch ports. Traffic moving from ports on one blade to a port (or ports) on any other blade passes across the fabric, or backplane, of the switch. Not only is this processing invisible to the end user, it is also invisible to standard test tools such as those offered by Spirent, Ixia, Shenick and others.
Industry throughput metrics have evolved around measuring the traffic that goes into and out of user ports. Any measurement of the backplane capacity has to be extrapolated from those results — and often requires fairly in-depth knowledge of the chassis vendor’s backplane architecture.
Now, stackable switches can be deployed in high-density configurations of 300 ports or more of Gigabit Ethernet. With such configurations the stacking technology becomes an integral part of the system and an area of great concern. If the architecture or implementation can’t deliver high throughput, the entire stack could suffer.
In essence, the stacking mechanism serves as an external backplane. In a configuration consisting of several hundred switch ports in a stack, it is likely that significant amounts of traffic will traverse the stacking links. While the individual switches are probably wire-speed, the prudent network architect will want to understand the performance levels offered across the stacking mechanism.
Ideally, that external backplane should offer the same level of performance as a top-notch backplane on a chassis-based switch: wire speed. Put another way, if each individual switch is capable of, say, 24Gbit/s, you would want a stack of ten to push as close to 240Gbit/s as possible for traffic directed across the stacking ports.
Because most switches are built with Gigabit Ethernet port counts as multiples of 12, and most stacking approaches use multiples of 10G Ethernet for the stacking links, you are not likely to get a perfect match — but the closer the better.
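The arithmetic behind that mismatch can be sketched in a few lines. This is a back-of-envelope check only, using the article's example of 24Gbit/s of user ports per switch and an assumed configuration of two 10G Ethernet stacking links per unit (the stacking-link count is a hypothetical figure, not a vendor specification):

```python
def stack_oversubscription(user_port_gbps, stack_link_gbps):
    """Ratio of one switch's user-port capacity to its stacking bandwidth.

    A ratio of 1.0 means traffic can leave the unit at wire speed;
    anything higher means the stacking links are a potential bottleneck.
    """
    return user_port_gbps / stack_link_gbps

# Article's example: 24 Gbit/s of Gigabit Ethernet user ports per switch,
# paired here with an assumed 2 x 10G (20 Gbit/s) of stacking bandwidth.
ratio = stack_oversubscription(24, 20)
print(ratio)  # 1.2 -- not a perfect match, as the paragraph above notes
```

A ratio modestly above 1.0, as here, may be acceptable in practice; the point is that the buyer should know the number rather than assume wire speed.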
Which leads to the challenge: how do we characterise and report performance across the stacking mechanism?
The easy way would be to add this number to the traditional port-to-port measurement and call the whole thing throughput. But this blurs what is being reported and confuses the issue. Done this way, a switch stack with 200 ports could be advertised as having "400Gbit/s throughput."
Aside from the obvious question of how 400 Gigabit Ethernet devices can be hooked up to 200 physical ports, such a characterisation offers nothing.
At The Tolly Group, we've adopted the term "stackable switching capacity" to describe the aggregate throughput when we measure not only what flows in and out of the user ports, but also the traffic flowing across the inter-switch stacking ports.
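As a rough illustration of the arithmetic, the metric simply sums the two measured quantities and keeps them clearly labelled rather than folding the stacking traffic into a single "throughput" figure. The numbers below are hypothetical examples, not Tolly Group measurements:

```python
def stackable_switching_capacity(user_port_gbps, stacking_gbps):
    """Aggregate of measured user-port throughput and the traffic
    measured crossing the inter-switch stacking ports, reported
    as one labelled figure rather than an undifferentiated total."""
    return user_port_gbps + stacking_gbps

# Hypothetical stack: 200 Gigabit Ethernet user ports running at wire
# speed, plus 90 Gbit/s observed across the stacking links.
capacity = stackable_switching_capacity(200, 90)
print(capacity)  # 290 -- reported alongside, not instead of, port throughput
```

Reporting both components separately lets the reader see how much of the total actually crossed the stacking mechanism.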
So pay attention to both sets of numbers when evaluating stackable switch solutions.