FRAMINGHAM (10/17/2003) - We asked vendors to supply two switch chassis, up to four 10G Ethernet interfaces, and a total of 24 Gigabit Ethernet interfaces. As in an earlier review, we assessed device performance in terms of pure 10G bit/sec throughput, delay and jitter; 1G bit/sec throughput, delay and jitter across a 10 Gigabit backbone; failover times; and quality-of-service enforcement. For this review, we also added enhanced failover tests and new events for IPv6 forwarding and routing.
Our primary test instrument was the SmartBits performance analysis system from Spirent Communications, equipped with XLW-3720A TeraMetrics 10G Ethernet cards and LAN-3311 TeraMetrics Gigabit Ethernet cards. We used Spirent's SAI, SmartFlow and TeraRouting applications to generate traffic.
For the 10G Ethernet and backbone tests, the test traffic consisted of 64-, 256-, and 1,518-byte Ethernet frames for IPv4 traffic; we used 76-byte frames in IPv6 testing because this is the minimum allowed by the test equipment. The duration for all tests was 60 seconds, and the timestamp resolution of the SmartBits was plus or minus 100 nanosec.
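Throughput results at these frame sizes are typically judged against the theoretical line rate of the link. As a quick back-of-the-envelope check (not part of the test procedure itself), the maximum frame rate on 10G Ethernet can be computed from the frame size plus the fixed 20 bytes of per-frame overhead on the wire:

```python
# Theoretical maximum frame rate on 10G Ethernet for the tested frame sizes.
# Every frame on the wire carries 20 extra bytes of overhead:
# an 8-byte preamble plus the 12-byte minimum inter-frame gap.
LINK_BPS = 10_000_000_000  # 10G bit/sec
OVERHEAD = 20              # preamble + inter-frame gap, in bytes

for size in (64, 76, 256, 1518):
    fps = LINK_BPS // ((size + OVERHEAD) * 8)
    print(f"{size:>5}-byte frames: {fps:>10,} frames/sec")
```

At the smallest IPv4 frame size of 64 bytes, this works out to roughly 14.88 million frames per second per 10G interface, which is why small-frame tests are the most demanding of a switch's forwarding engine.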
In the 10G Ethernet tests, we asked vendors to assign a different IP subnet to each of four 10G interfaces in one chassis. We configured the SmartBits to offer traffic from 510 virtual hosts per interface in a fully meshed pattern (meaning traffic was destined for all other interfaces). We measured throughput, average delay at 10 percent load and jitter.
In the backbone tests, we asked vendors to set up two chassis, each equipped with one 10G Ethernet interface and 10 edge interfaces using Gigabit Ethernet. Here again, we asked vendors to assign a different IP subnet to each edge interface and we configured the SmartBits to offer traffic from 510 virtual hosts per interface. This time, we offered traffic in a partially meshed multiple-device pattern; as defined in RFC 2889, that means the traffic we offered to one chassis was destined to all interfaces on the other chassis and vice versa. Once again, the metrics were throughput, average delay at 10 percent load and jitter.
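The difference between the two traffic patterns comes down to which interface pairs exchange traffic. A minimal sketch of the RFC 2889 definitions (illustration only, not Spirent configuration code) makes the distinction concrete:

```python
# Directed source->destination interface pairs in the two traffic patterns,
# per the RFC 2889 definitions. Illustrative sketch only.

def fully_meshed(ports):
    """Every interface sends to every other interface on the device."""
    return [(s, d) for s in ports for d in ports if s != d]

def partially_meshed(ports_a, ports_b):
    """Interfaces on one chassis send only to interfaces on the other,
    and vice versa -- no traffic stays local to a chassis."""
    return ([(s, d) for s in ports_a for d in ports_b] +
            [(s, d) for s in ports_b for d in ports_a])

# 10G Ethernet test: four 10G interfaces in one chassis
print(len(fully_meshed(range(4))))                      # 12 directed pairs
# Backbone test: 10 Gigabit edge interfaces on each of two chassis
print(len(partially_meshed(range(10), range(10, 20))))  # 200 directed pairs
```

In the backbone tests, the partially meshed pattern forces every flow across the single 10G link between the chassis, which is exactly what a backbone stress test requires.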
In the failover tests, we set up two chassis, each equipped with one Gigabit Ethernet and two 10G Ethernet interfaces. We asked vendors to configure Open Shortest Path First metrics so that one 10G Ethernet interface would act as the primary route and the other would function as the secondary.
We offered a single flow of 64-byte frames to one Gigabit Ethernet interface at a rate of 100,000 frames per second; thus, we transmitted one frame every 10 microsec. Approximately 10 seconds into the test, we physically disconnected the primary link, forcing the switch to reroute traffic onto the secondary path. We derived failover time from frame loss. We then repeated the same test with 2 million flows, forcing 1 million to be failed over, and again derived failover time from frame loss.
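Because the offered load is constant, every lost frame corresponds to a known slice of time, so frame loss converts directly into failover time. A minimal sketch of the arithmetic, with a hypothetical loss count for illustration:

```python
# Failover time derived from frame loss, assuming a constant offered load.
# At 100,000 frames/sec, one frame is sent every 10 microsec, so each
# lost frame represents 10 microsec of outage.
OFFERED_RATE = 100_000  # frames per second

def failover_time_ms(frames_lost: int) -> float:
    """Outage duration in milliseconds implied by the frame-loss count."""
    return frames_lost / OFFERED_RATE * 1000

# Hypothetical loss count, for illustration only:
print(failover_time_ms(5_000))  # 5,000 lost frames -> 50.0 ms outage
```

The same calculation applies unchanged to the 2 million-flow rerun; only the loss counts differ.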
In the QoS enforcement tests, we set up two chassis, each equipped with 12 Gigabit Ethernet interfaces and one 10G Ethernet backbone interface. Because we offered 128-byte frames at line rate to all 24 edge interfaces in a partially meshed pattern, we oversubscribed each 10G backbone link by a 12-to-10 ratio. For this test we offered three classes of traffic in a 1-to-7-to-4 ratio.
We asked vendors to enforce four conditions. First, they would have to mark incoming frames using specified Differentiated Services code points, something we verified by capturing and decoding traffic. Second, of the three traffic classes we offered, the switches should have delivered all high-priority traffic without loss. Third, the switches should have limited the rate of low-priority traffic so that it would not consume more than 2G bit/sec of backbone capacity. Finally, the switches should have allocated any remaining bandwidth to medium-priority traffic.
As a check against allocating a fixed amount of bandwidth to high-priority traffic, we reran the tests with only medium- and low-priority traffic present in a 9-to-3 ratio.
Vendors were not allowed to reconfigure devices between the first and second tests, and we expected the switches to allocate bandwidth previously used by high-priority traffic to the other classes.
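Under these rules, the per-class results across the 10G backbone can be predicted from the offered loads alone. A sketch of the expected arithmetic (assuming the 1-to-7-to-4 and 9-to-3 ratios map onto the 12G bit/sec of offered edge traffic):

```python
# Expected per-class delivery across the 10G backbone under the test's rules:
# high-priority traffic must arrive without loss, low-priority traffic is
# rate-limited to 2G bit/sec, and medium-priority traffic receives whatever
# backbone capacity remains.
BACKBONE = 10.0  # backbone capacity, Gbit/sec
LOW_CAP = 2.0    # rate limit on low-priority traffic, Gbit/sec

def expected_delivery(high, medium, low):
    """Offered loads in Gbit/sec -> expected delivered loads in Gbit/sec."""
    d_high = high                          # delivered lossless
    d_low = min(low, LOW_CAP)              # capped by the rate limiter
    d_medium = min(medium, BACKBONE - d_high - d_low)  # gets the remainder
    return d_high, d_medium, d_low

# First run: 12G bit/sec offered in a 1-to-7-to-4 ratio
print(expected_delivery(1.0, 7.0, 4.0))  # -> (1.0, 7.0, 2.0)
# Rerun with no high-priority traffic, 9-to-3 ratio
print(expected_delivery(0.0, 9.0, 3.0))  # -> (0.0, 8.0, 2.0)
```

The rerun is the interesting case: with high-priority traffic absent, a correctly configured switch should hand its former 1G bit/sec share to the medium class, raising medium-priority delivery from 7G to 8G bit/sec, rather than leaving that bandwidth reserved and idle.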
In the IPv6 routing tests, we used the same topology as the backbone tests: two chassis connected by a single 10G Ethernet link, with 10 edge interfaces using Gigabit Ethernet on each chassis. Using TeraRouting software, we advertised 100,000 networks using OSPFv3 and verified that the system correctly propagated all networks. Then TeraRouting offered traffic from 250 virtual hosts on each network to all other networks across the backbone.