In four of my eight books on Windows security, I've preached host hardening, explaining to readers how to fine-tune their computers beyond the defaults to decrease the attack surface. Probably a quarter of the hundreds of articles I wrote in the last decade were about host hardening. And in the two weeks since the latest IPv6 exploit was published, I've received more questions than ever about the practice. Who could argue with the "least privilege" dogma that underpins most of computer security?

Well, after 20-plus years of giving hardening advice, I realized I was wrong. A few factors have changed -- for example, nearly every OS vendor now ships very reasonable and relatively secure defaults. But there's more, including a startling personal realization: In general, there is very little evidence that a company tightening Windows beyond the recommendations of Microsoft (my full-time employer) experiences any significant benefit. Yes, leaving unneeded services turned on may increase the possible attack surface, but good security is all about risk management and cost/benefit trade-offs. Why disable a service or tighten a permission if it isn't being attacked? Why expend the energy and increase your operational risk?

As it stands, there are greater risks to worry about. I'm in the field 90 percent of the time helping clients fight off hackers, and all the attacks I see stem from client-side, socially engineered Trojans or application data malformation. I've never seen (in real life) an attack made possible because an organization did not harden its defenses beyond the vendor's defaults or recommendations. It's always because the organization accidentally weakened some default setting it shouldn't have, ran socially engineered Trojans, or didn't follow advice that everyone has been promoting for 10 years, such as good patching and strong passwords.
Many security practitioners want to disable unneeded services to decrease the risk of remote buffer overflows and the like. But since March 2003, there have been only a handful of truly remote buffer overflows in default Microsoft services. Most of the buffer overflows you read about are "remotely" exploitable only in the sense that gaining access to an inside resource from outside the network requires tricking an end-user into clicking on something. Most remote buffer overflows, especially the biggest ones, affected services that everyone was either required to run (such as RPC) or had to run for needed functionality -- a Web server, SQL, and so on. The three most successful attacks in the history of Microsoft Windows -- Blaster on RPC, Code Red on IIS, and the SQL Slammer worm -- demonstrate this. Note that these major exploits happened a long time ago, and in all three cases, vendor patches were available, sometimes for months, before the remote exploit hit.

Some may argue that Microsoft used to enable IIS on computers that didn't need it, along with SQL via SQL Desktop Edition. That's no longer the case and hasn't been for nearly a decade. The IIS of today is significantly hardened: It isn't installed anywhere by default, and when it is installed, it runs in a hard-to-exploit default state. The latest versions of SQL haven't been exploited in years. When people ask me how to harden IIS or SQL, I usually reply, "Don't mess things up! The defaults are pretty darn good."

There's another risk in hardening: Most people making changes don't know what they are doing, so disabling a seemingly unneeded service can have unexpected outcomes. One of my favorite examples is users disabling the Print Spooler service on Windows domain controllers -- unbeknownst to them, doing so also disables Active Directory's printer pruning capability. Even worse, most hardening guides contain horrible advice.
The recommended practices are likely to cause problems and may, in fact, weaken security. I see it all the time, and very popular hardening guides are no exception.

As I mentioned, the latest IPv6 exploit has prompted IT admins to question whether they should disable the protocol. First, they argue, it's hard to stop, and second, most of the world isn't using it. I tell them no: IPv6 is significantly more secure than IPv4. Companies should be using and enabling it, not disabling it. Unless you're spending significant effort trying to stop Layer 2 DHCP spoofing attacks, which affect nearly every computer in any company, you shouldn't expend your energies worrying about IPv6 lower-layer attacks. Yes, IPv6 attacks will happen, just as they do with DHCP, but they aren't widespread.

I don't want to overgeneralize. Disabling an unneeded service often makes sense in terms of cost/benefit. Maybe disabling IPv6 just plain works for your company. But shouldn't you wait to see whether any IPv6 attacks are forming in the wild before you start dedicating time and resources to it? If you're trying to prevent IPv6 hacks, why not direct that time and energy toward more likely invasions, such as client-side attacks?

I'm also all for anyone who wants to harden their own data security settings, as well as the applications and code they create. That's a no-brainer. What I'm talking about is the effort spent hardening the base OS or popular vendor applications that have already been security-reviewed and bolstered by their creators. If you're going to spend time hardening your defenses, focus on the applications and areas that vendors haven't already reviewed or whose makers are clueless about security.

I'm certainly far from the first person to reach this conclusion. My coworker Aaron Margosis has been writing about it for years. A few years ago, Jesper Johansson and Steve Riley argued the same in their book "Protect Your Windows Network: From Perimeter to Data."
I chided them and wrote articles rejecting their "bad" advice. I'm pretty sure I was even a little personally angry about what they were saying. I was wrong. Sometimes it takes the passage of time to see that the other guys saw something more clearly than I could, and earlier than I could. Here's hoping that one of my future articles passes along the favor.