Sound Transit in Seattle has unclogged the WAN links that were hindering its railway-improvement projects, deploying WAN-optimisation gear that speeds up traffic and costs far less over time than the alternatives it considered.
Applications on the network now perform well, with transaction times for a test set of traffic cut by as much as 69%. And, as an unexpected bonus, the transit authority has also dramatically cut the bandwidth unnecessarily dedicated to VoIP, thanks to data gleaned from the Silver Peak WAN-optimisation gear it uses, says Garv Nayyar, senior integration engineer for the transit authority.
The Sound Transit WAN is arranged in a hub-and-spoke configuration, with a headquarters datacentre that feeds seven other sites via point-to-point T-1 connections.
Workers at the remote sites access a variety of applications, including Microsoft SharePoint for file sharing, Open Text Livelink ECM for document management, Primavera Expedition for project management, and Microsoft Exchange for email. Part of the project calls for workers to send construction photos and CAD drawings over the WAN, transfers that slowed down everything else on the links, Nayyar says.
“The problem was if somebody was pulling a 5MB file it just took the site down or it would slow them down for a few seconds, making it so other folks couldn’t work effectively,” he says.
End users complained about the delays and demanded that something be done.
“They wanted us to put either servers at all the local sites or add another T-1 line or possibly up to three T-1 lines to each site,” he says.
In addition to those possibilities, he looked into WAN-acceleration gear from Cisco, Riverbed and Silver Peak. Cisco kept pushing back the deadline for when its gear would be ready to test, and Sound Transit couldn’t wait, Nayyar says. He could not get the Riverbed gear to work and was told he would need an upgrade to his Cisco switch infrastructure, so he rejected that option as well.
He says he got the Silver Peak gear up and running relatively quickly without any network upgrades.
Price was also a factor in his decision. Setting up separate servers in each branch so client-server traffic could be kept local would have cost about US$15,000 (NZ$21,550) a site, he says.
Adding T-1s would have cost US$350 a month per line; with one to three extra lines at each of the seven remote sites, that works out to between US$2,450 and US$7,350 a month extra. The added bandwidth would also have been overkill for the average amount of traffic sent over the lines.
The Silver Peak gear cost about US$90,000, he says, making it a cheaper option than placing local servers in all the branches, and less expensive than extra T-1s after three years.
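A back-of-the-envelope calculation, using only the figures quoted in the article (and assuming one-off costs for the servers and the Silver Peak gear, with the seven remote sites mentioned earlier), shows how the three options compare:

```python
# Rough cost comparison of the three options, from the article's figures.
# Assumes one-off purchase costs and seven remote sites.

SITES = 7
SERVER_COST_PER_SITE = 15_000      # US$, one-off, per branch server
T1_COST_PER_MONTH = 350            # US$, per extra T-1 line
SILVER_PEAK_COST = 90_000          # US$, one-off

servers_total = SITES * SERVER_COST_PER_SITE           # branch-server option
t1_monthly_low = SITES * 1 * T1_COST_PER_MONTH         # one extra line per site
t1_monthly_high = SITES * 3 * T1_COST_PER_MONTH        # three extra lines per site

# Months until recurring T-1 charges overtake the Silver Peak outlay
breakeven_slow = SILVER_PEAK_COST / t1_monthly_low     # ~37 months, about 3 years
breakeven_fast = SILVER_PEAK_COST / t1_monthly_high    # ~12 months

print(f"Local servers:      US${servers_total:,}")
print(f"Extra T-1s:         US${t1_monthly_low:,}-{t1_monthly_high:,}/month")
print(f"Break-even vs T-1s: {breakeven_fast:.0f}-{breakeven_slow:.0f} months")
```

At US$90,000, the Silver Peak option undercuts the roughly US$105,000 server option outright, and the recurring T-1 charges catch up with it within one to three years, consistent with the comparison Nayyar describes.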
Nayyar says he installed test Silver Peak boxes in about two hours, one at headquarters and one at the remote site that had the loudest complainers. He ran it as a test for two weeks and it improved application response times so much that end users complained when the test was over.
“When we took it down the remote site started to send complaints to their senior management saying we need that thing back on because we were getting a lot more work done,” he says.
He ran a baseline test of sample traffic over each WAN connection twice before turning up the Silver Peak equipment, and ran the same tests again after it was working. The smallest reduction in time to complete the test was 32% and the largest reduction was 69%, according to the numbers he gathered.
Sound Transit had set aside 1Mbit/s of the 1.5Mbit/s available on each T-1 for data, but found that VoIP traffic on the network suffered. The VoIP vendor had already dedicated 50% of the bandwidth to voice, so the two allocations together exceeded the line's capacity and voice and data were contending for the same bandwidth.
Once data from the Silver Peak gear helped sort the problem out, the bandwidth dedicated to VoIP was cut to 110Kbit/s, and voice quality held up fine, Nayyar says.
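The bandwidth arithmetic behind the contention can be sketched from the article's figures. This assumes the vendor's 50% voice reservation was taken against the full 1.5Mbit/s line; the exact QoS settings are not given in the article:

```python
# Sketch of the VoIP/data bandwidth contention on one T-1 link.
# Figures from the article; the 50% voice reservation is assumed
# to apply to the full 1.5Mbit/s line capacity.

LINE_KBPS = 1_500                  # usable T-1 capacity, per the article
data_reservation = 1_000           # 1Mbit/s set aside for data
voice_before = LINE_KBPS // 2      # vendor dedicated 50% to VoIP
voice_after = 110                  # Kbit/s after tuning with Silver Peak data

# Before: the two reservations add up to more than the line can carry
oversubscription = data_reservation + voice_before - LINE_KBPS

# After: both fit, with room to spare
headroom = LINE_KBPS - data_reservation - voice_after

print(f"Before: reservations exceed the line by {oversubscription} Kbit/s")
print(f"After:  {headroom} Kbit/s of headroom remains")
```

With 1Mbit/s reserved for data and 750Kbit/s for voice, the allocations oversubscribed the 1.5Mbit/s line by 250Kbit/s, which is why voice and data fought over bandwidth; trimming voice to 110Kbit/s leaves both comfortably inside the line's capacity.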
Using Silver Peak statistical data, Sound Transit can now monitor the network more closely than before and determine whether performance meets the service levels IT has promised workers, he says.