DDoS attacks returned to the front page in 2012 for a very simple reason: they continue to strike the largest and most secure networks in the world, from government web properties to Wall Street. Is this simply a function of ever-larger attacks overwhelming these websites? Yes and no.
Arbor’s ATLAS internet monitoring system shows that without question, DDoS attacks are getting bigger, much bigger.
The average attack in September 2012 was 1.67Gbps, a 72 per cent increase over September 2011. The number of mid-range attacks (2 to 10Gbps) is also up 14.35 per cent so far this year.
Furthermore, very large attacks (10Gbps and over) are up by 90 per cent this year over 2011 and the largest attack in 2012 was 100.84Gbps.
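As a quick sanity check on these figures (an illustration only, using the values quoted above), the reported 72 per cent growth implies a September 2011 average of just under 1Gbps:

```python
# Figures quoted above; the calculation simply inverts the growth rate.
avg_2012_gbps = 1.67        # average attack size, September 2012
growth = 0.72               # reported year-on-year growth (72 per cent)

# Implied September 2011 average: 1.67 / 1.72
avg_2011_gbps = avg_2012_gbps / (1 + growth)
print(f"Implied September 2011 average: {avg_2011_gbps:.2f} Gbps")
# Implied September 2011 average: 0.97 Gbps
```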
This increase in attack size has significant implications not only for service providers, but also for enterprises that continue to rely on firewalls and IPS devices to protect them from DDoS attacks.
Because these devices must keep state information on every session, they are easily overwhelmed by botnet-based attacks, which often makes them among the first points of failure during a DDoS attack. The larger attacks become, the more likely these devices are to fail.
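To make that failure mode concrete, here is a minimal Python sketch (with a hypothetical table size; real devices differ) of a fixed-capacity session table being exhausted by a flood of connections from distinct sources:

```python
# Minimal sketch of state-table exhaustion. A stateful device tracks every
# session; once its table fills with bogus entries from a botnet, legitimate
# connections are refused too. TABLE_CAPACITY is an assumed figure.
TABLE_CAPACITY = 10_000
session_table = set()

def new_session(src_ip):
    """Admit a session only if the state table still has room."""
    if len(session_table) >= TABLE_CAPACITY:
        return False            # table exhausted: connection dropped
    session_table.add(src_ip)
    return True

# A flood from many distinct (often spoofed) sources fills the table...
for i in range(TABLE_CAPACITY):
    new_session(f"10.{(i >> 16) & 255}.{(i >> 8) & 255}.{i & 255}")

# ...so a legitimate client is now refused, regardless of its bandwidth.
print(new_session("192.0.2.1"))  # False
```

The point of the sketch is that the device fails on session count, not raw bits per second, which is why even modest-bandwidth botnet attacks can topple stateful infrastructure.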
All of that said, when it comes to DDoS, size isn't everything, which is why a layered defence strategy is best practice for all enterprises.
The most robust defence is achieved by combining a cloud-based DDoS managed service that protects the network from larger attacks, together with an on-premise DDoS solution.
This keeps services available and protects existing security infrastructure, such as the firewall and IPS, by detecting and mitigating application-layer attacks at the perimeter of the network.
Recent attacks prove it’s not all about size
Recent attacks on banks in the United States show that DDoS is not all about size. These attacks are becoming increasingly complex, often combining multiple techniques and targets.
These attacks used a combination of attack tools with vectors mixing application layer attacks on HTTP, HTTPS and DNS with volumetric attack traffic on a variety of protocols including TCP, UDP, ICMP and others.
Another obvious and uncommon characteristic of this series of attacks was the simultaneous targeting, at high bandwidth, of multiple companies in the same vertical.
Many of the compromised hosts used in these attacks were servers with significant upstream bandwidth at their disposal. The majority of these bots resulted from exploited PHP web applications.
Many WordPress sites, often running the out-of-date TimThumb plugin, were compromised around the same time. Joomla and other PHP-based applications were also used.
Often these were unmaintained servers to which attackers uploaded PHP web shells, then used those shells to deploy further attack tools.
Attackers connected to the tools either directly or through intermediate servers, proxies or scripts, so the concept of command and control did not apply in the usual manner.
Without question, DDoS attacks are growing larger. More significantly, they are becoming increasingly complex, blending multiple attack tools, techniques and targets. One reason DDoS remains such an effective weapon is that too many enterprise networks still rely on solutions designed for other problems to combat it.
This complex, rapidly evolving threat requires purpose-built on-premise tools as well as cloud-based security. Together, these provide comprehensive protection against both large attacks and those that target the application layer.
Until we see pervasive deployment of best practices defences, we can expect to see DDoS in the headlines for many years to come.
Gary Sockrider is a network solutions architect at Arbor Networks.