WAN-optimisation gear has evolved from one-trick point products – compression boxes, QoS appliances, TCP optimisers, caching devices – into equipment that performs all these functions, saving big money by moving traffic more quickly across wide-area links.
Here is a review of how these WAN optimisation controllers evolved, with an eye toward understanding the various functions the appliances perform, so businesses can make better-informed decisions about which is best for them.
WAN optimisation technology makes more efficient use of wide area links, to the point that businesses can actually reduce bandwidth or at least put off the need to buy more. Gartner's rule of thumb is that the devices should pay for themselves within three years; if they won't, don't buy them.
The goal is to make applications work better across the long links connecting branch offices with datacentres – links that became critical after server consolidation pulled servers out of branch offices and centralised them. In many cases WAN optimisation controllers were actually required to make server consolidation work; without them, client-server interactions were intolerably slow.
While these devices can produce fantastic payback, they should always be tested first on each customer's network running each customer's typical traffic, because performance can vary greatly from vendor to vendor depending on network conditions and application mix.
A key set of features makes this acceleration happen. The most basic element of WAN optimisation is compression, which cuts the number of bits needed to transfer data and so reduces the time the data spends crossing the wire.
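The effect on repetitive application traffic can be sketched in a few lines of Python; here zlib stands in for whatever proprietary algorithm a given appliance actually uses, and the payload is an invented example:

```python
import zlib

def compress_payload(data: bytes) -> bytes:
    """Shrink a payload before it crosses the WAN link."""
    return zlib.compress(data, level=6)

def decompress_payload(data: bytes) -> bytes:
    """Restore the payload on the far side of the link."""
    return zlib.decompress(data)

# Typical application traffic is highly repetitive, so it compresses well.
payload = b"GET /reports?quarter=Q1 HTTP/1.1\r\nHost: erp.example.com\r\n" * 200
on_the_wire = compress_payload(payload)
assert decompress_payload(on_the_wire) == payload
print(f"{len(payload)} bytes reduced to {len(on_the_wire)} bytes on the wire")
```

Fewer bits on the wire means less serialisation time per transaction, which is the whole point.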
Another feature, TCP acceleration, improves response times by overriding TCP when it mistakes high latency for congestion and throttles back the sending rate. These devices, located at both ends of WAN connections, maximise sending rates and also bring TCP back to full speed more quickly after it has dropped off.
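Why latency alone throttles TCP follows from simple bandwidth-delay arithmetic: a sender can have at most one window of unacknowledged data in flight per round trip. A rough back-of-envelope model, not vendor code:

```python
def max_tcp_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """TCP throughput is capped at window / RTT, whatever the link speed."""
    return window_bytes * 8 / rtt_seconds

# A classic 64 KB window over an 80 ms transcontinental round trip:
wan = max_tcp_throughput_bps(64 * 1024, 0.080)   # roughly 6.6 Mbit/s
# The same window over a 2 ms LAN round trip:
lan = max_tcp_throughput_bps(64 * 1024, 0.002)   # roughly 262 Mbit/s
```

The controllers attack both terms of that ratio: larger effective windows on the long link, and faster recovery so the window spends less time collapsed after a loss.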
Still another element, file caching, stores frequently requested files on disks within the WAN optimisation controllers themselves. When those files are called for, they are delivered locally rather than over the WAN.
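In outline, the controller's file cache is just a local store consulted before the WAN. A minimal sketch, with the injected fetch function standing in for a real remote file protocol:

```python
class FileCache:
    """Serve frequently requested files locally instead of over the WAN."""

    def __init__(self, fetch_over_wan):
        self._fetch = fetch_over_wan   # callable: path -> bytes
        self._store = {}
        self.wan_fetches = 0

    def get(self, path: str) -> bytes:
        if path not in self._store:        # cache miss: one trip over the WAN
            self._store[path] = self._fetch(path)
            self.wan_fetches += 1
        return self._store[path]           # subsequent requests served locally
```

A real appliance adds invalidation and coherence checks so a stale copy is never served after the original changes, but the hit path is this simple.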
These all either reduce the number of bits that have to cross the connection or load the line with as much traffic as it can handle.
A final key feature is traffic shaping and prioritisation, the goal of which is to make sure certain classes of traffic respond faster relative to others that are deemed less important. Traffic shaping sets limits on how much bandwidth certain traffic types can consume and determines which are allowed to go first when there is contention, enforcing the quality of service assigned to each class of traffic.
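Shaping of this kind is commonly implemented with a token bucket per traffic class. The sketch below passes the clock in explicitly for clarity; a real shaper would read the system clock and queue rather than simply refuse packets:

```python
class TokenBucket:
    """Cap a traffic class at a configured rate: a packet may only be
    sent when enough tokens have accumulated to cover its size."""

    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, packet_bytes: int, now: float) -> bool:
        # Refill tokens for the time elapsed, up to the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False   # over the limit: queue or drop the packet
```

One bucket per class, with a scheduler choosing among the classes, gives both the bandwidth cap and the who-goes-first behaviour described above.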
About five years ago, WAN acceleration vendors started acquiring one another to assemble these core technologies. For example, Cisco bought Actona for its file caching capabilities in 2004. Citrix bought Orbital Data in 2006 for its TCP acceleration. Juniper bought Peribit for its compression technology in 2005.
These technologies are so good at their jobs that for a while they brushed aside gear that focused on enforcing QoS, says Joe Skorupa, a research vice president at Gartner. "If reducing traffic improved response time by 90 percent, QoS was no longer a critical tool," Skorupa says. "There was good enough performance for all applications without setting priorities. QoS became devalued as an asset."
As a result, Packeteer, which specialised in traffic shaping, bought Mentat in 2004 and Tacit Networks in 2006 for their TCP acceleration and caching technologies, respectively, rounding out its offerings.
Vendors have been revisiting the basics and climbing the protocol stack in their efforts to shave every possible fraction of a second off transaction times, with application acceleration the most powerful of these techniques.
This technology examines the back-and-forth communications between application servers and their clients and tries to streamline them. If a client initiates what looks to be a Microsoft Common Internet File System (CIFS) request, for instance, the intervening WAN optimisation controllers anticipate the next several requests. The server-side controller spoofs those requests, gathers the responses and ships them over all at once, eliminating the delay each round trip would otherwise have added.
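The read-ahead idea can be sketched as a speculative batch fetch. The `read_block` callable and the fixed lookahead here are illustrative; real CIFS optimisation is considerably more stateful:

```python
def prefetch_batch(first_offset: int, block_size: int, read_block, lookahead: int = 4):
    """On seeing a read at first_offset, the server-side controller fetches
    the next few sequential blocks as well, so the client-side controller
    can answer the follow-up requests without further WAN round trips."""
    return {
        first_offset + i * block_size: read_block(first_offset + i * block_size)
        for i in range(lookahead)
    }
```

Collapsing four round trips into one matters most on exactly the long, high-latency links these appliances target.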
Vendors differ in how many individual applications they can accelerate this way.
In addition to adding new acceleration methods, vendors have revisited and improved on old ones. File caching has been upgraded to caching repetitive data streams rather than just files. So if a pattern of bytes is sent over and over – say, a graphic – the WAN optimisation controllers may store it and then call it up locally when required. A graphic used in a Word document and again in a separate PowerPoint file, for example, can be cached and reused without being retrieved across the WAN, Skorupa says.
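Stream caching amounts to deduplication: the controllers at both ends keep dictionaries of chunks they have already exchanged, and repeats travel as short references. A toy version with fixed-size chunks (production gear typically uses variable, content-defined chunking):

```python
import hashlib

CHUNK = 4096  # illustrative fixed chunk size

def dedupe(data: bytes, sender_store: dict) -> list:
    """Replace chunks the far side has already seen with 32-byte hash refs."""
    wire = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).digest()
        if digest in sender_store:
            wire.append(("ref", digest))      # 32 bytes instead of up to 4 KB
        else:
            sender_store[digest] = chunk
            wire.append(("raw", chunk))
    return wire

def rebuild(wire: list, receiver_store: dict) -> bytes:
    """Reassemble the stream, learning new chunks as they arrive."""
    out = []
    for kind, value in wire:
        if kind == "raw":
            receiver_store[hashlib.sha256(value).digest()] = value
            out.append(value)
        else:
            out.append(receiver_store[value])
    return b"".join(out)
```

Because the dictionaries key on content rather than file names, the same graphic is recognised whether it arrives inside a Word document or a PowerPoint file.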
Caching in WAN optimisation gear can also override badly coded web applications that tag objects as non-cacheable because the objects are dynamic and supposed to be served fresh. Some applications, though, mislabel static content that could legitimately be cached, such as company logos appearing on forms, Skorupa says. Some WAN optimisation gear can identify this static content and cache it anyway.
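The override is essentially a local allow-list consulted before the application's own cache headers. The paths below are hypothetical examples, not part of any product:

```python
# Objects the operator has identified as static despite the app's headers.
STATIC_OVERRIDES = {"/images/logo.png", "/forms/banner.gif"}  # hypothetical paths

def is_cacheable(path: str, headers: dict) -> bool:
    """Honour Cache-Control normally, but cache known-static content even
    when a badly coded application marks it no-cache."""
    if path in STATIC_OVERRIDES:
        return True
    return "no-cache" not in headers.get("Cache-Control", "").lower()
```

The risk, of course, is an operator mislabelling genuinely dynamic content, which is why such overrides are configured per object rather than applied wholesale.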
In addition to caching, QoS functionality is being refined as well. Whereas QoS classification was once limited to traffic type, classes of service can now be defined by more parameters: the individual user, the application in use, the time of day and so forth. The result is that each user transaction gets a more appropriate priority and share of available WAN bandwidth, further improving the efficiency of the wide area network.
QoS can also be tuned to limit or block certain applications' access to the WAN. In the past five years the number of allowable but non-critical WAN applications has blossomed, Skorupa says. QoS features can distinguish inappropriate applications such as Kazaa, non-strategic but perhaps useful traffic such as Twitter, and the truly strategic such as order-entry applications, so each gets the bandwidth it deserves or is blocked altogether.
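Taken together, these refined QoS rules amount to a classification function over user, application and time of day. The class names and rules below are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Flow:
    user: str
    application: str
    hour: int          # local time of day, 0-23

def classify(flow: Flow) -> str:
    """Map a flow to a forwarding class (names and rules are invented)."""
    if flow.application == "kazaa":
        return "block"                       # inappropriate: dropped outright
    if flow.application == "order-entry":
        return "priority"                    # strategic: first claim on bandwidth
    if flow.application == "twitter":
        return "best-effort-capped"          # useful but non-strategic: rate-limited
    if 9 <= flow.hour < 17:
        return "business-hours-default"      # time of day refines the class
    return "best-effort"
```

Each class then maps onto its own shaping policy, so order entry never waits behind file sharing.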
Because WAN optimisation controllers sit at both ends of WAN connections, they have the ability to see all the traffic using the links. Some vendors have developed this monitoring potential to measure and report on the performance of individual applications to find trouble spots and manage them better.
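A minimal sketch of such per-application reporting, keeping a running record of response times observed on the link:

```python
from collections import defaultdict

class AppMonitor:
    """Track response times per application so trouble spots stand out."""

    def __init__(self):
        self._samples = defaultdict(list)

    def record(self, app: str, response_ms: float) -> None:
        self._samples[app].append(response_ms)

    def averages(self) -> dict:
        """Mean response time in milliseconds per observed application."""
        return {app: sum(v) / len(v) for app, v in self._samples.items()}
```

Real products keep percentiles and per-site breakdowns as well, but even simple averages show which applications are suffering across the WAN.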
Many vendors have developed WAN optimisation controller clients for remote access users to reap the benefits of optimisation, as they connect to corporate networks from home or on the road. These clients may also integrate with VPN clients in order to secure the connections.
No single vendor has all the features, and some have developed dominance in certain aspects of WAN optimisation controller technology, Skorupa says. For instance, Citrix and Expand do well speeding up the use of virtual desktops. Silver Peak does well handling ultra-high-speed connections. Blue Coat does a good job with security. Cisco is good in some accounts, not so good in others, depending on how complex the network is, he says. Riverbed has a broad range of application-specific optimisations.
No network has a single set of optimisation needs, so it's a good idea to test the equipment with a customer's specific mix of applications and network conditions, if not on a live network then at least in a lab simulation, Skorupa says.
He suggests considering optimisation gear when businesses are planning the deployment of new applications. Depending on the application and the length of WAN connections, rolling the price of WAN optimisation devices into the cost of the application rollout may be cost-effective, he says.