Google has a big advantage over competitors when it comes to pushing out patches for Chrome and other software products: The company can, by default, automatically update users' systems on Windows and Apple platforms. That's good for Google and for users in that it ensures people are running the newest, most secure version of the company's wares, which in turn helps keep Google off top 10 lists of vendors with the most exploitable software.

But Google seems to be the exception to the rule, and dealing with unpatched software remains a huge issue for the industry. According to Kaspersky Lab, for example, Adobe and Java software now account for all 10 of the most popular successful exploits. Yet most of the holes discovered in those offerings are patched relatively quickly after public disclosure; it's just that people aren't downloading the patches. According to Zscaler's latest "State of the Web" security report, for example, more than 56 percent of enterprise Adobe Reader users are running an outdated version.

This trend is not overly different for many of the world's most popular applications. For example, according to Microsoft (my full-time employer), only 3 percent of Microsoft Office exploits targeted vulnerabilities that had been patched in the preceding year; put another way, 97 percent of exploits targeted vulnerabilities for which patches had been available for a year or more. Fifty-six percent of successful exploits were against systems that had not patched Office 2003 since the day it was installed -- more than five years without a single patch.

When I go over to friends' houses to help clean up malware, I almost always see hundreds of megabytes of patches begging to be installed, with apps sending pop-up messages asking if it's OK to install, only to have the user delay over and over again. My friends always ask, "Should I update this thing?" Uh, yes.
These kinds of statistics and experiences probably make you wonder why all the major vendors can't automatically update their software without end-user approval, as Google does with its Chrome browser and other products. (For clarification, Google Chrome automatically updates by default only on Windows and Apple platforms, and auto-updating can be managed or disabled. On Linux, updates flow through the distribution's normal update mechanisms.)

The main answer is that any update from any vendor can potentially cause operational issues, and operational issues carry the potential for a lawsuit. Microsoft was lambasted years ago for updating its automatic update mechanism, even though the change caused no operational problems, was configurable, and warned the user performing the installation.

It's true, to a degree, that if vendors tested their patches better, users wouldn't be scared to accept updates automatically. But in a world with millions of customized applications and hundreds of thousands of different hardware components, no vendor can perform 100 percent comprehensive compatibility testing. Once, after Microsoft discovered a critical internal bug affecting services, I suggested a way to close the hole on an internal forum. Someone did the research and agreed that my suggested fix would close the hole -- but it would cause operational problems with 1 percent of customized applications. I said, "Great, let's do it!" My colleagues replied, "We would fire you first," because 1 percent of Windows applications adds up to a lot of pissed-off customers. Until that moment, I didn't realize how strict backward-compatibility testing was.

Crazy though it may sound, a company can even face a backlash for rolling out patches that are incompatible with popular malware. Microsoft has shipped more than a few application and software updates that crashed a moderate number of computers -- because those computers were infected with malware.
The blogosphere went wild, and trade publications featured article after article discussing Microsoft's update and how it crashed computers around the world, along with quotes from disgruntled customers. It got so bad that Microsoft now checks for popular malware before applying some of its updates and patches.

Vendors may make patches better over time, but patches will never be perfect. For that reason, many admins and end-users choose not to apply them in a timely manner. Most vendors recommend thoroughly testing patches before deploying them. Some organizations do this -- and some do it too well, taking weeks or months to apply the latest patches. Many other users simply wait a few days to a few weeks to see whether earlier adopters report any major problems. And a significant portion of the population never applies patches -- ever.

Many people think the SaaS cloud paradigm will change all of this. The idea is that updating will be frequent and invisible: The vendor updates its centralized software, and every end-user is immediately updated, too. Not so fast. Numerous cloud vendors are telling customers they can run a version or two behind and choose when to start using the latest iteration -- again, for operational and, I assume, training reasons. Updating in the cloud should certainly make patching easier to accomplish, but I sadly suspect some of the old patching habits will carry over into the new world.

To be honest, I'm jealous of Google's default, automatic, silent updating. How does the company get away with it when nearly every other major vendor defaults to end-user or admin approval first? What is its secret? Higher risk tolerance? A stronger-written EULA? And if Google Chrome secures huge market share and is relied upon in production environments, will it maintain its install-first, ask-questions-later update policy?
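The pre-update malware check described above can be thought of as a simple gate: Scan the system for known-bad indicators and defer the patch if any turn up, since patching an infected machine is what triggered the crashes. Here's a minimal, purely illustrative Python sketch of that logic -- the file names, signature list, and function names are all invented for the example and are not Microsoft's actual mechanism.

```python
# Hypothetical pre-update "malware gate": refuse to apply a patch if any
# known-bad indicator is present on the system. All names below are
# invented placeholders, not real malware signatures or update IDs.

KNOWN_BAD_FILES = {
    "infected_driver.sys",  # invented placeholder for a compromised driver
    "evil_rootkit.dll",     # invented placeholder
}

def safe_to_patch(installed_files):
    """Return True only if none of the known-bad files are present."""
    return KNOWN_BAD_FILES.isdisjoint(installed_files)

def apply_patch(installed_files, patch_name):
    """Apply the patch only when the malware check passes."""
    if not safe_to_patch(installed_files):
        return f"{patch_name}: deferred (possible infection detected)"
    return f"{patch_name}: applied"

clean_system = {"kernel32.dll", "user32.dll"}
infected_system = {"kernel32.dll", "evil_rootkit.dll"}

print(apply_patch(clean_system, "KB123456"))     # KB123456: applied
print(apply_patch(infected_system, "KB123456"))  # KB123456: deferred (possible infection detected)
```

The point of the gate is that deferring is cheaper than crashing: A skipped update can be retried after cleanup, while a blue screen generates a support call and a headline.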
These are good questions to ask, because our current patching policies aren't working. We need to do something else. I'm sure all vendors would love to force customers to update more quickly: It's more secure, less frustrating in the end, and would lower support costs, because fewer versions would need to be supported.