Stories by Roger A. Grimes

Opinion: Get real about your security risks

There was a time when the IT security department had the only say in approving or denying operational requests. It made for an easier, more secure place to work - but there was no real communication with business development and operations to determine which actions were, in fact, worth risking in the name of achieving business goals.
These days, senior management and the risk management department are increasingly in charge of the final decision as to how much risk is acceptable for a given operation, from requiring near-perfect safety to accepting wide-open operations. IT security's task is now to analyze a particular pursuit for threats and risks, list mitigations (and risk acceptances), and perhaps offer recommendations. IT security shouldn't be making the ultimate call on risks. To be honest, I like it that way. Why be responsible for more than you have to be?
The challenge has been for management to understand and accept the reality that there's almost always some risk. After all, it's tough to predict unknown unknowns. I doubt, for example, that decision makers at organizations such as Sony, RSA, and the U.S. Army understood that leaving computers unpatched or allowing end-users to click anything they wanted would likely lead to reputational compromises costing hundreds of millions of dollars.
Yet some CEOs and boards of directors aren't prepared to hear about the potential costs of an attack, or about what implementing near-perfect security would cost. The best organizations, by contrast, understand that reputational cyber attacks are likely to happen in the future - thus, they don't shoot the messenger. IT security departments need to feel confident and secure in being able to deliver the potentially bad news as accurately as possible.
Figuring out the probability of a particular risk occurring requires first acknowledging whether it could happen at all. If the likelihood is truly zero, then it's an easy probability to plug into your equation: 0.0 percent. If it is likely to happen in the future but you don't have historical measurements on which to base future estimations, start with a long time range and work backward.
For instance, would a particular security event be likely to happen in the next 30 years, even given everything the company is doing to prevent such an occurrence? If the answer is yes, you have a baseline to work with: 1 out of 30 years, or 3.33 percent. For large security incidents, such as reputational events, I'd at least go with this baseline. If you're a big company or a larger target or you lack the commitment and resources for such a big fight, maybe one incident every 5 to 10 years is more realistic.
Smaller events, such as malware infections, exploited servers, and availability issues, should be easier to base on historical evidence. When in doubt, bear in mind that these milestones are often measured in years versus decades.
Also note that events, large and small, aren't mutually exclusive. A small incident might lead to a reputational event, but since you don't know when that is likely to happen, you have to account for both. I get the distinct feeling that organizations handle the small items fairly well, but they don't account for the reputational-level happenings.
Beyond underestimating the probability of a security event, there's a tendency to underestimate the likely resulting damage. These costs can be just as difficult to calculate. Again, I'd start with broad ranges to help develop boundaries. For instance, how likely is a security event to result in $100 million worth of damage? At billion-dollar companies, that's a likely outcome over a 30-year period - and should be accounted for as such.
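For readers who want the arithmetic in one place, here's a minimal sketch of the baseline math described above, expressed as the standard annualized-loss calculation. The 30-year interval and $100 million figure are this article's illustrative numbers, not real actuarial data.

```python
# A minimal sketch of the back-of-the-envelope math above: derive an
# annualized rate of occurrence (ARO) from a "once every N years" baseline,
# then multiply by an estimated cost per event. The figures are illustrative.

def annualized_loss(years_between_events: float, cost_per_event: float) -> float:
    """Expected yearly loss: (1 / N years) * cost per event."""
    aro = 1.0 / years_between_events   # e.g., 1 in 30 years = ~3.33% per year
    return aro * cost_per_event

print(f"ARO:  {1 / 30:.2%} per year")                             # ~3.33%
print(f"ALE: ${annualized_loss(30, 100_000_000):,.0f} per year")  # ~$3.3 million
```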
IT security departments are no longer the gatekeepers, but perhaps we haven't done a good job of sharing realistic likelihoods and probabilities. Heck, the last few years have been a wake-up call for us all, reminding us we need to evolve.

Typosquatting hacks: Finger slips sink ships

For nearly as long as DNS has been around, aggressive advertisers and malicious actors have used a technique called typosquatting to take advantage of the fact that most of us aren't perfect typists: They buy up domains and set up realistic-looking yet malicious websites such as www.livve.com, www.live.cm, and www.liv.ecom to exploit users who incorrectly type live.com.
I've considered typosquatting more of a nuisance than anything. The risk it poses isn't nearly as high as that of other pressing threats, such as unpatched vulnerabilities and fake antivirus scams. However, a new typosquatting vector has emerged that warrants warning: Researchers at security think tank Godai Group found that through typosquatting tactics, they were able to dupe people into sending them legitimate, private emails intended for Fortune 500 email servers.
The researchers set up their own email servers using various typosquatted domains, also known as doppelganger domains. Unwitting users then sent legitimate email to these domains, most likely unaware of their mistake. According to the final report, "During a six-month span, over 120,000 individual emails (or 20GB of data) were collected, which included trade secrets, business invoices, employee PII, network diagrams, usernames and passwords, etc."
If -- or rather, when -- an employee, partner, or customer types your email domain name incorrectly when sending a message, it is possible for the owner of a doppelganger domain to intercept it. The sender won't even receive a rejection message. A savvy squatter could send a plausible, convincing-looking response to further allay the sender's suspicions. I'm not sure that even I would be suspicious, and I've been in the IT security business for 20 years.
The authors even detailed how to perform a man-in-the-middle email attack, such that the sender and the intended recipient are essentially unaware of the plot. In a nutshell, the typosquatter sets up two bogus domains: one for the sender's domain and one for the receiver's. When the sender emails a message to the receiver, the squatter can intercept it (assuming it isn't protected using S/MIME or some other protection method), read it, then forward it on to the intended recipient's domain using the bogus version of the sender's domain. The receiver might not notice the misspelled domain name in the sender's address. Again, I'm not sure I would.
This form of typosquatting attack hasn't been widely adopted yet, but the researchers have convincingly demonstrated it's a viable tactic. More than likely, it is already being used in the real world. In fact, I'd bet that corporate-espionage types have been using email typosquatting for a long time. Why wouldn't they? The researchers hit a gold mine of confidential information in a few months of testing. The authors also noted that several doppelganger domains are already registered in China, a hotbed of APT (advanced persistent threat) activity.
As is generally the case, organisations can take steps to defend themselves. One common tactic is for a company to register as many domains as possible that are potential typosquatting targets - the best defense is a good offense (see the sketch after this list for one way to enumerate candidates).
Second, it can't hurt to include information about this threat in the email security section of your end-user education documentation.
Third (I got this idea from a client), consider using an outbound/inbound internet proxy that automatically blocks or quarantines network traffic sent to or from sources that are unrecognised or unranked by the proxy's content-subscription ranking services. I was skeptical of this client's approach at first, but he reports a very high success rate with very few false positives.
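To put the register-them-first tactic into practice, here's a rough sketch of how you might enumerate common typo variants of your own domain so you can check which are still unregistered. The variant rules below are a simplified assumption, not an exhaustive typosquatting model.

```python
# A rough sketch that enumerates common typo variants of a domain --
# doubled letters, dropped letters, misplaced dots -- for registration checks.

def typo_variants(domain: str) -> set[str]:
    name, tld = domain.rsplit(".", 1)
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i] * 2 + name[i + 1:] + "." + tld)  # livve.com
        variants.add(name[:i] + name[i + 1:] + "." + tld)                # lve.com
    for i in range(len(tld)):
        variants.add(name + "." + tld[:i] + tld[i + 1:])                 # live.cm
    if len(name) > 1:
        variants.add(name[:-1] + "." + name[-1] + tld)                   # liv.ecom
    variants.discard(domain)
    return variants

print(sorted(typo_variants("live.com")))
```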
Readers, what other ideas do you have to combat typosquatting?

Opinion: Why hackers don't need to be smart

Online, in print, on TV, and on the radio, report after report claims that malicious hacking is "more sophisticated than ever before." The media seemingly wants the world to believe it's beset by impossible-to-stop uberhackers with supersophisticated tools and skills.
The reality is far different: Malicious hackers are using pretty much the same old tools and exploiting the same old weaknesses. However, companies and end-users aren't doing what they need to defend themselves. Anyone who promotes today's attackers and their tools as near-invincible is doing a serious public disservice.
Attackers' strategies and techniques have not changed since computers were invented: malware, buffer overflows, social engineering, password-cracking, and so on. With very few exceptions (such as dynamic botnets), nothing has changed - except for the fact that the intruders are doing more with the access they get.
For example, there's a new rootkit called Mebromi that modifies the computer motherboard's BIOS to make detection and removal more difficult. That's slightly interesting - but not new: The CIH virus did this quite successfully in 1998. Malware that encrypts data and holds it hostage for payment always makes headlines. The AIDS Trojan horse program did this in 1989.
The most common ways of compromising servers - application exploits and SQL injection - are more than 10 years old. Even the most popular end-user attacks - fake antivirus programs and exploits of unpatched programs - have been around forever. The first fake antivirus program appeared in 1989 and masqueraded as McAfee software. John McAfee started using digitally signed programs shortly after, and the rest of the online software industry followed suit.
It's not too surprising that the bad guys are reusing the same ol' tactics and technologies: Why come up with new ways to hack when the old ways work just fine? Organizations that want to make their environment significantly more secure should be doing the following better: patching systems regularly; creating and enforcing password policies; embracing configuration management; adopting a least-privilege strategy; and training end-users.
You don't need ultrasophisticated defences. Defending against malicious intruders is not impossible, but you must concentrate on doing the basics better.
Improving some defenses requires global coordination, such as making it harder to carry out malicious deeds across the Internet. But even those issues haven't changed in 20 years. The only difference is that we now have the expertise and protocols to implement what we've needed all along to keep our systems safe - but we don't. One day we will; unfortunately, it will happen only after we've allowed the cyber crime problem to harm far more people than necessary.
Until we make it globally harder for the bad people to do bad things across the internet, your organisation needs to better embrace the basics to keep its own systems safe. In the meantime, don't get caught up in the hype.

The Morto worm threat: Use it to improve your security

The recent discovery of Morto, the RDP password-guessing worm, provides a great opportunity to revisit the importance of fine-tuning your organisation's defensive strategies. Morto, after all, doesn't simply exploit an unpatched software vulnerability; it employs multivector attacks, tricking users into downloading it, then using authentication guessing to break into accounts. IT admins need to be prepared to identify and defend against these sorts of multipronged threats.
For example, readers who've focused on Morto's interesting RDP usage and password guessing might be missing the bigger lessons. The worm is getting around because users are being tricked (yet again) into running something they shouldn't. That certainly exemplifies the need to improve user education at your own company - and opens a host of other questions about your security.
My challenge to all admins is to look beyond the acute problem (in this case, computers exploited by Morto) and look at the strategic reasons why computers under your control became infected. When you find causative agents, are you responding most effectively? If you don't address the specific threat with a specific, best defense, you can't expect improvement.
For example, what if your network became infected, not by one of your own users, but via a third party's connected network? Further, are your firewall rules set correctly, or do you allow RDP connections from any computer to any computer, even where they're unneeded? Are admin-level accounts left with their default logon names? Are there poorly protected passwords? Are users with admin-equivalent rights opening internet links? Morto writes to system-protected areas and would not succeed if the infected user were not running an elevated account when they opened the link.
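As a starting point for the firewall question above, here's a quick-and-dirty sketch that reports which hosts on a subnet answer on the RDP port. The subnet is a placeholder assumption, and you should scan only networks you're authorized to test.

```python
# Attempt TCP connections to port 3389 across a subnet and report listeners.
# Anything unexpected that answers deserves a firewall-rule review.

import socket

def rdp_listeners(subnet: str = "192.168.1", timeout: float = 0.5) -> list[str]:
    open_hosts = []
    for host in range(1, 255):
        addr = f"{subnet}.{host}"
        try:
            with socket.create_connection((addr, 3389), timeout=timeout):
                open_hosts.append(addr)   # something is listening on the RDP port
        except OSError:
            pass                          # closed, filtered, or no host there
    return open_hosts

print("Hosts answering on 3389:", rdp_listeners())
```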
All IT departments should be consciously aware of how their environments are being exploited. They shouldn't care about malware family names, countries of origin, or the users involved. But they should know the top 10 threats they face and their plan to address them. Everyone should know how the environment is most often exploited and work cohesively as a team to fight the biggest risks first.
Consider the Conficker worm: It had multiple means of attacking computers. Early on, most observers thought Conficker's biggest threat was against unpatched systems. But in the field, I saw many of my clients affected by the worm, though their systems were appropriately patched. I determined this probably meant Conficker was successfully propagating via infected USB keys, a conclusion that Microsoft (my full-time employer) reached as well.
In response, Microsoft issued a security patch that disabled the autorun functionality, which led to millions fewer malware infections from Conficker and other autorun malware. Some antivirus software vendors have questioned just how successful the fix was, but regardless of the specific numbers (estimates of the decrease range from 15 to 75 percent), one strategic decision led to a significant dip in malware risk.
Identifying and responding to multivector threats means being aware of them early on. Is your IT security infrastructure strategically designed to perform root-cause analysis and generate the data needed to respond with better, fine-tuned defenses? Or does it rely upon a few humans noticing a trend and hoping their personal speculations will filter up to decision makers who might notice the significance and respond accordingly?
Instead of hoping, design proactive early-warning telemetry into your systems. When the next major malware or hacking trend occurs, whether a boot virus, macro virus, email scripting worm, fake AV program, or autorun malware, be better prepared to notice it and, better yet, to respond quickly.
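What might such telemetry look like at its simplest? Here's a minimal sketch that tallies incident root causes from a log so the most common exploitation paths surface on their own. The CSV layout (date, root_cause) is a hypothetical example, not a real feed.

```python
# Count root causes recorded in an incident log and show the biggest hitters.

import csv
from collections import Counter

def top_root_causes(path: str, n: int = 10) -> list[tuple[str, int]]:
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):      # expects columns: date, root_cause
            counts[row["root_cause"]] += 1
    return counts.most_common(n)

for cause, hits in top_root_causes("incidents.csv"):
    print(f"{hits:5d}  {cause}")
```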
We don't do a good job at that in IT security. Imagine if a warring military unit noticed where it was taking on the most casualties and didn't respond to close the hole. That unit would lose the battle. That's exactly what we're doing over and over -- it's time to fight a better war.
Grimes is contributing editor at the InfoWorld test centre and a security architect for Microsoft’s InfoSec ACE Team

Certificate hacks: PKI didn't fail us, humans did

GlobalSign has very likely been hacked, which brings to at least three the number of popular public PKI certification authorities (CAs) attacked in recent months by a single hacker. The other CAs are Comodo and DigiNotar.
The computer security world is aflutter because hundreds of bogus digital certificates have been issued. "It's a massive failure of PKI," some say. "It proves that there's too much trust spread around," say others.
But it's hard for me to get worked up about any public CA or PKI compromise. Here's why: Almost nobody pays serious attention to digital certificate warning messages in the first place.
I've yet to see the person who, when presented with a certificate error, didn't continue on and visit the website they were trying to access. Most users are simply annoyed by digital certificate warning messages. How dare they get in the way of a quick-loading Web page!
It's not just mom and granddad who are ignoring digital certificate warnings. A few years ago, a survey revealed that the more users knew about digital certificates and PKI, the more likely they were to ignore the warnings.
Part of the problem is that for as long as public PKI has been in existence -- nearly two decades -- it has tended to be implemented poorly. Websites with SSL certificates are notorious for having mistakes in their certificates. Mostly they have incorrect host names, where the subject name does not match the host name being contacted -- but certificates are often expired or have other X.509 mistakes. I attended a session at the Black Hat Las Vegas 2010 conference on the subject, where Ivan Ristic, director of engineering at Qualys, revealed that the majority of websites using SSL certificates had errors.
Qualys found 22.65 million SSL-enabled websites and hosts on the Internet (out of hundreds of millions of websites). Only 720,000 had SSL certificates with a valid name match. Only 28 percent of the most popular SSL websites had a proper name, although 70 percent had digital certificates that were linked to a trusted CA. That's good. But 28 percent were untrusted, and 4 percent had trust chains that could not be verified.
Moreover, Qualys said more than 2 percent of the 22.65 million sites were suspicious. More than 137,000 certs were expired, 96,000 were self-signed, and more than 1,000 were revoked (but still being used). Twenty-one thousand had invalid digital signatures, and more than 57,000 had unknown CAs. Ninety-nine digital certificates had known bad keys left over from the Debian random number generator vulnerability, which was found and fixed more than a year before.
I'm sure that these statistics have improved over the last year, but if only 3 percent of SSL-enabled sites (720,000 divided by 22.65 million) had a correct and valid SSL certificate (including only 28 percent of popular websites), can we really ask end-users to rely on public PKI?
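For the curious, here's a small sketch of the kind of certificate check those statistics are about: connect to a site and let a verifying TLS context flag an expired certificate, name mismatch, or untrusted chain. The host name below is just an example.

```python
# Connect to a host, verify its certificate chain and host name using
# Python's default TLS context, and print the expiry date.

import socket
import ssl

def check_certificate(host: str, port: int = 443) -> None:
    context = ssl.create_default_context()    # verifies chain and host name
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print(f"{host}: chain and name OK, expires {cert['notAfter']}")

try:
    check_certificate("www.example.com")
except ssl.SSLError as err:   # expired cert, name mismatch, untrusted chain
    print("Certificate problem:", err)
```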
Don't get me wrong: I'm sad anytime I hear that a CA is hacked. CAs have heavy, tight security around the digital certificates that can issue other certificates. Most are protected by hardware security modules (HSMs), which usually require smart cards, USB tokens, or some other physical security device. In fact, it usually takes multiple physical tokens (each held by a different person) to access the important digital certificates. HSMs should be used by any company with a PKI, but especially by CAs.
The Comodo hacker referenced above talks about being thwarted by an HSM. My guess is that the other compromised CAs were either not using HSMs or were not using them appropriately.
The bottom line is that PKI didn't fail us. Its mathematical beauty and potential assurance is something rare in the computer security world. If run correctly, it would greatly benefit our online world. But as with most ongoing security risks, human nature ruins the promise.

Opinion: You're only as secure as your business partners

The successful hack attacks on RSA and Sony have served as wake-up calls to the world's CEOs. Both attacks, aptly dubbed "reputational events," have resulted in hundreds of millions -- potentially billions -- of dollars in lost revenue. Restoring a company's good reputation after these types of incidents is not easy; sometimes it's impossible.
Almost every company could be owned just as RSA and Sony were, even firms that embrace the security best practices I've advocated for the past 20 years, including better end-user education, faster and more inclusive patching, stronger authentication, improved monitoring, and quicker response to incidents. Of course, my regular readers have been taking all these important measures for a long time -- but how about your partners? If they haven't, they might well be putting your organization at risk.
Most companies have anywhere from a few to dozens of interconnected partners and vendors that have access, sometimes at the admin level, to their networks and computers. In practice, any such vendor's network should be considered an extension of your own. Thus, if I'm a dedicated hacker and I know you have lots of vendors and partners, I'll attack the weakest link in the chain.
The dedicated RSA attackers compromised the company to ultimately hack its customers. Many of us have had our networks attacked by malware arriving on a visiting vendor's infected laptop or USB key. Much of the data lost over the past decade can be traced back to partners who were entrusted to safeguard it.
My first word of advice: Ask your partners and vendors whether they maintain the same level of security as you do, if not better. More important, make them prove it. Don't simply ask them to read your security policies and agree to abide by them, especially not just as a paperwork formality that everyone must undergo in order to work together.
A good starting point is to interview the vendor or partner and ask about the company's security policies, computers, and networks. An interview is no substitute for auditing, but as long as the partner is being honest, you can ascertain the company's security maturity.
However, nothing beats a physical audit, where you are allowed to scrutinise the potential vendor's or partner's computers and networks to verify its security practices. When I've conducted an audit, I've always discovered security risks that the company was either unaware of or did not share. If possible, secure the right to conduct security-policy reviews and to do some limited auditing to assure the third party is following expected policy before you allow it access to your network. At the most security-minded organisations, security policies state that network access will be denied if the third party does not meet a minimum level of security.
How does your company's security policy treat third parties? The answer offers quick insight into how the company treats its own security.

Opinion: Google's stealth updates - why no one else gets away with it

Google has a big advantage over competitors when it comes to pushing out patches for Chrome and other software products: The company can, by default, automatically update users' systems on Windows and Apple platforms. That's good for Google and for users in that it ensures people are running the newest, most secure version of the company's wares, which in turn helps to keep Google off top 10 lists of vendors with the most exploitable software. But Google seems to be the exception to the rule, and dealing with unpatched software remains a huge issue for the industry.
According to Kaspersky Lab, for example, Adobe and Java software now account for all 10 of the most popular successful exploits. Yet most of the holes discovered in those offerings are patched relatively quickly after public disclosure; it's just that people aren't downloading the patches. According to Zscaler's latest "State of the Web" security report, for example, more than 56 percent of enterprise Adobe Reader users are running an outdated version. This trend is not much different for many of the world's most popular applications.
For example, according to Microsoft (my full-time employer), only 3 percent of Microsoft Office exploits targeted vulnerabilities that had been patched in the preceding year; put another way, 97 percent of exploits targeted vulnerabilities for which patches had been available for a year or more. Fifty-six percent of successful exploits were against systems that had not patched Office 2003 since the day it was installed; more than five years had gone by without a single patch.
When I go over to friends' houses to help clean up malware, I almost always see hundreds of megabytes of patches begging to be installed, with apps sending pop-up messages asking if it's OK to install, only to have the user delay over and over again. My friends always ask, "Should I update this thing?" Uh, yes.
These types of statistics and experiences probably make you wonder why all the major vendors can't automatically update their software without end-user approval, like Google does with its Chrome browser and other products. (For clarification, Google Chrome automatically updates by default only on Windows and Apple platforms. Auto-updating can be managed or disabled. On Linux, updates are handled by the distribution's normal update mechanisms.)
The major answer is that any update from any vendor can potentially cause operational issues. If an update causes operational issues, there's a potential for a lawsuit. Microsoft was lambasted years ago for updating its automatic update mechanism, even though it caused no operational problems, was configurable, and warned the user performing the installation.
It's true, to a degree, that if vendors better tested their patches, users wouldn't be scared to automatically accept updates. But in a world where there are millions of customized applications and hundreds of thousands of different hardware components, no vendor can perform 100 percent comprehensive compatibility testing.
Once, after Microsoft discovered a critical internal bug affecting services, I mentioned a way to fix the hole on an internal forum. Someone did the research and agreed that my suggested fix would close the hole, but it would cause operational problems with 1 percent of customized applications. I said, "Great, let's do it!" My colleagues replied, "We would fire you first" -- because 1 percent of Windows applications accounts for a lot of pissed-off customers. Until that moment, I didn't realize how strict backward-compatibility testing was.
Crazy though it may sound, a company can face a backlash for rolling out patches that are incompatible with popular malware. Microsoft has had more than a few application and software updates that crashed a moderate number of computers because they were infected with malware. The blogosphere went wild, and trade publications featured article after article discussing Microsoft's update and how it crashed computers around the world, along with quotes from disgruntled customers. It's so bad that Microsoft now checks for popular malware prior to applying some of its updates and patches.
Vendors may make software patches better, but they'll never be perfect. For that reason, many admins and end-users choose not to apply patches in a timely manner. Most vendors recommend thoroughly testing patches before applying them. Some organizations do this, though some do it too well, taking weeks or months before the latest patches are applied. Many other users simply wait a few days to a few weeks to see if any major problems are reported by earlier adopters. And a significant portion of the population simply never applies patches - ever.
Many people think the SaaS cloud paradigm will change all of this. The traditional idea is that updating will be frequent and invisible, because the vendor can update their centralized software and every end-user will be immediately updated, too. Not so fast.
Numerous cloud vendors are telling customers they can run a version or two behind and select when to start using the latest iteration. Again, this is for operational and, I assume, training reasons. Updating in the cloud should certainly make patching easier to accomplish, but I sadly suspect some of the old patching habits will carry over into the new world.
To be honest, I'm jealous of Google's default, automatic, silent updating. How does the company get away with it when nearly every other major vendor defaults to end-user or admin approval first? What is its secret? Higher risk tolerance? A more strongly worded EULA? And if Google Chrome secures huge market share and is relied upon in production environments, will it maintain its install-first, ask-questions-later update policy?
These are good questions to ask, because our current patching policies aren't working. We need to do something else. I'm sure all vendors would love to be able to force customers to update quicker. It's more secure, less frustrating in the end, and would lead to lower support costs (because fewer versions would need to be supported).

Analysis: no contest - Mac vs Windows security

For nearly two decades now, security experts have debated whether Microsoft or Apple offers superior security. The battle heated up again in the wake of news out of Black Hat about a newfound weakness in the Mac platform. However, the question of whether Microsoft or Apple is more secure is no longer even relevant: Security threats of today and tomorrow aren't as tied to specific desktop platforms as they once were.
Macs have far more theoretical vulnerabilities than Windows machines, as I wrote last week. (I am a full-time principal security analyst at Microsoft.) It's been that way for a long time. However, Macs are attacked far less because they are used less than machines running Windows. Call it security through obscurity. Now that Macs are increasing in popularity in the enterprise and beyond, though, they're no doubt on the cusp of being targeted by hackers. However, I predict that Apple will rise to the occasion and fill the vulnerability gaps. It has to, or growth will slow.
Still, the question of whether Mac or Windows is more secure is no longer relevant. The computer security paradigm is shifting at this very moment. Cloud computing, Web 2.0, and mobile technologies are exploding, and with those changes, traditional attacks are making way for a new crop that ignore platforms. Think ANSI bombs, boot sector infectors, macro viruses -- seen any of those lately?
I worry about the risks associated with cloud compromises more and more. For example, if someone compromises a public cloud product and takes over one customer's instance, how easy would it be for that person to get to all the cloud's data? I know hackers have a far easier time taking over multiple websites hosted on a single web server than they would taking over sites hosted on separate machines. Whether you're a Mac or a Windows shop doesn't factor into the equation.
Default data syncing, too, is becoming a fact of life, and it opens new potential security holes, regardless of platform. The mere act of opening a document on any computer or device could automatically send a copy of that document into the cloud, regardless of your intention. Is it well protected in the cloud? If you then open a document on your least secure device, can that machine access all your synced cloud documents? Who else in the cloud can see your documents?
How does IT manage security when it can manage only a few of the devices connecting to the most valuable data? How long until we have our first XML-based virus or worm? If someone compromises my worldwide biometric ID, how do I repudiate it everywhere it might be used, and how do I switch to something else? For example, if my logon is my fingerprint or face, and attackers steal my authentication token and fake being me, how can I get it back? What will I use instead?
Users, too, remain a huge security threat, regardless of what OS they're running. People remain susceptible to sophisticated phishing and social engineering attacks that dupe them into giving up their credentials, for example. They continue to install programs they shouldn't on their machines, allowing hackers an opportunity to pounce.
Heck, my own kids have a verifiable computer security expert in their house, yet they couldn't care less about computer security in their daily lives. They haven't changed their Facebook or online banking passwords since they set them -- again, they're leaving themselves susceptible to attacks regardless of what platform they might be using.
So when I'm asked if Microsoft or Apple's security is better than the other, it's not a question even worth answering. Overall, computer security is pretty bad. Nearly any company can be hacked, with just a little research and know-how. Fake malicious programs still abound. Antivirus software is struggling like never before. Most people have had their identity and credit card information compromised several times over the last few years. Most people have had their computers infected over the same period.
Our computer security paradigm is shifting in a huge way before our eyes, and we're not using our best defenses while we argue over the minutiae of the competing platforms' relative security. Meanwhile, we're taking casualties with more to come -- all the while wondering why our current strategy doesn't work.
It reminds me of the English redcoat soldiers sent to the United States to take it back under control from the treasonous terrorists (we now call them the founding fathers and patriots). The redcoats kept lining up in the same parallel lines that had been successful for a millennium, and they kept that strategy until the bitter end. The war changed around them and they didn't notice in time. Will we?

Analysis: CSA helps clear up cloud security questions

Uncertainty about cloud-service security is among the biggest barriers to adoption in the business world. Verifying a cloud service's security is tough, especially because cloud providers are hesitant to reveal details - and understandably so.
Fortunately, a group called the Cloud Security Alliance (CSA) has emerged to help alleviate would-be customers' concerns, and it's becoming the de facto standard for cloud security guidance for service providers, users, and auditors.
Trust us, we're secure

Analysis: Spammers exploit the Google cloud to dupe victims

Spammers have been exploiting cloudlike products for years to send spam - think Hotmail or Gmail. But now they're taking greater advantage of cloud computing, employing techniques and traversing avenues we haven't seen before.
Among the many cloud services being abused are Google's popular offerings, including Google Docs and Google+. Users and organizations alike need to be aware of these threats and prepare accordingly.
Phishers are using Google Docs to trick users into revealing confidential information. This attack method works as follows: Phishers create forms to collect and summarize data in Google Spreadsheets and Docs. These forms, which phishers design to look as though they come from a legitimate third-party domain, such as a bank, provide places for victims to enter personal identification and logon information.
Using built-in form functionality, phishers send email messages to a list of prospective targets. Each message contains a simple URL linking to the form. One giveaway that you're looking at a potential phishing form and not a trusted site is a URL that takes you to a spreadsheets.google.com address containing the command word "formkey" at the end, followed by an equal sign and the form's randomly generated identifier. Often the forms are protected by HTTPS, so it's difficult for organisations to intercept or inspect them.
Once a user fills out a form, his or her information is saved for the originator to view and share easily, a convenience that spammers especially enjoy.
You can find tons of phishing samples by doing an Internet search on the terms "inurl:formkey password site:spreadsheets.google.com," where the term "password" can be replaced by any term you think the phisher may include in the phishing form.
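Until vendor tools catch up, a bit of homegrown protection can help. Here's a simple sketch that flags the telltale spreadsheets.google.com "formkey" URLs described above in a message body; treat hits as candidates for human review, not proof of phishing.

```python
# Flag Google Docs form URLs matching the "formkey" pattern in inbound mail.

import re

FORMKEY_URL = re.compile(
    r"https?://spreadsheets\.google\.com/\S*formkey=\S+", re.IGNORECASE
)

def flag_form_links(message_body: str) -> list[str]:
    return FORMKEY_URL.findall(message_body)

sample = ("Dear customer, please verify your account details here: "
          "https://spreadsheets.google.com/viewform?formkey=dEhGb2tW...")
print(flag_form_links(sample))
```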
Many schools and universities use Google Docs, so these sorts of phishing attacks have disproportionately targeted the educational sector. Even if administrators wanted to block Google Docs spreadsheet forms, they can't. Their schools and businesses are often running on Google Docs, and right now it's difficult to separate the good from the bad.
Google includes a Report Abuse link on every displayed form, but it takes time for the company to respond, verify, and deny future access to the form. In that interlude, thousands more victims may be tricked into providing their confidential information.
The new Google+ service is already being used by spammers. In this case, the criminals aren't using Google's service at all; they are simply crafting very realistic Google+ invitations that, if clicked, will take the unsuspecting victim elsewhere. Part of what makes Google+ frauds easier to pull off is that both the real and the fraudulent emails come from no-reply sender addresses. This means that spammers don't even have to take the additional step of sending from a valid email address.
Many readers are probably already aware of these new spamming and phishing attacks, but I bet many others aren't. Consider this your wake-up call that a new attack paradigm is out there, and vendor defenses either aren't in place yet or aren't very sophisticated. Right now, until our traditional antispam and antiphishing tools come up to date on these avenues of attack, we defenders are left with our own homegrown custom protection and end-user education.
The phishing war moves on. Are you prepared?

Opinion: Sorry, but the TDL botnet is not 'indestructible'

The sophistication of the TDL rootkit and the global expanse of its botnet have many observers worried about the antimalware industry's ability to respond. Clearly, the TDL malware family is designed to be difficult to detect and remove. Several respected security researchers have gone so far as to say that the TDL botnet, composed of millions of TDL-infected PCs, is "practically indestructible."
As a 24-year veteran of the malware wars, I can safely tell you that no threat has appeared that the antimalware industry and OS vendors did not successfully respond to. It may take months or years to kill off something, but eventually the good guys get it right.
This isn't the first time we're supposed to be scared of MBR (master boot record)-infecting malware. In 1987, well before the days of the Internet, the Stoned boot virus infected millions of PCs around the world. Subsequent "improvements" in hacking allowed malware authors to create DOS viruses that could manipulate the operating system to hide themselves from prying eyes. (Actually, the first IBM PC virus, Pakistani Brain, did this in 1986, too.) Computer viruses became encrypted and polymorphic, and they started taking data hostage.
With each ratcheting iteration of new malware offense, you had analysts and doomsayers predicting this or that particular malware program would be difficult to impossible to defend against. But each time the antimalware industry and other software vendors responded to defang the latest threat. Yesterday's indestructible virus became tomorrow's historical footnote.
Even today's malware masterpiece, Stuxnet — as perfect as it is for its intended military job — could be neutralised if it became superpopular. Luckily, military-grade worms are few and far between, so most users don't have to suffer while waiting for defenses to be developed.
The truth is, like every other malware family variant, TDL and its botnet will probably be around for years and exploit millions of additional PCs. But it doesn't take an advanced superbot to do that. Take a look at any monthly WildList tally. It always contains malware programs written years ago.
Today, almost every malware program lives in perpetuity, dying off only when the exploited program or process dies with it. Boot viruses from the 1980s and 1990s didn't stop being a threat until floppy disks and disk drives went away. Macro viruses didn't die until people stopped writing macros and Microsoft Office disabled automacros by default.
No, what really bothers me are the malware programs that do something completely new, because it takes so much longer for antimalware programs, software vendors, and users to adapt to the tactic. For instance, it took us years to teach folks not to open every file attachment in order to defeat email viruses and worms -- but it takes the bad guys only a few minutes to change strategies. Today, we need to tell folks not to click on the Internet link emailed to them by a trusted friend and not to install random applications sent to them on Facebook or through their mobile phones.
But our biggest threat is an MBR PC-infector? Been there, done that.

Opinion: In the IT security world, policies and controls are king

Over a decade ago, Stephen Northcutt, one of the original founders of the SANS Institute, recruited me to help plan a course devoted purely to security policies and procedures. At the time, I was all about hands-on hacking and defending, and I saw little value in a course focused on "paperwork."
It took me a long time to realise that without the paperwork, you don't get any real security.
Almost all security professionals can secure their own computers by tightening down the right settings, applying all the needed patches, properly configuring the firewall, and making sure their antivirus definitions are up to date. The challenge is doing that for hundreds or thousands of machines — PCs, laptops, servers, mobile devices — running different applications or platforms. Documenting and enforcing policies and controls is necessary for us to apply all the good advice in our heads to all the machines that we control.
You could even implement the best security possible across a large number of computers, to the point of perfection at a particular moment in time. But without policies and controls, that perfection won't last long. It took me years of real-life experiences to learn that policies and controls are king. The technical pros are the fiefs and knights.
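To illustrate the point, here's a minimal sketch of turning a written policy into an enforced control: compare each machine's reported settings against a baseline and report the drift. The baseline settings and host data are hypothetical placeholders.

```python
# Check reported host settings against a policy baseline and list violations.

BASELINE = {
    "firewall_enabled": True,
    "autorun_disabled": True,
    "av_definitions_max_age_days": 3,
}

def drift(host: str, reported: dict) -> list[str]:
    problems = []
    for setting, required in BASELINE.items():
        actual = reported.get(setting)
        if isinstance(required, bool):             # on/off policy setting
            if actual is not required:
                problems.append(f"{host}: {setting} should be {required}, is {actual}")
        elif actual is None or actual > required:  # numeric ceiling, e.g. max age
            problems.append(f"{host}: {setting} exceeds {required} (is {actual})")
    return problems

print(drift("pc-042", {"firewall_enabled": True,
                       "autorun_disabled": False,
                       "av_definitions_max_age_days": 9}))
```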
If your organization is behind on written policies, look to SANS: It continues to be one of my favorite resources for all manner of security information, including guidance and resources on the paperwork side of things. For instance, SANS recently released its top 20 Critical Security Controls for review.
As expected, it's excellent, mostly because of how comprehensive it is: Both knights and kings were clearly involved. Each control has many specific "quick win" recommendations. Some are more detailed than others, but they all should be part of any computer security defense. I encourage defenders to take a look to see what they can learn from it.
Here's the summary list:
Inventory of Authorized and Unauthorized Devices
Inventory of Authorized and Unauthorized Software
Secure Configurations for Hardware and Software on Laptops, Workstations, and Servers
Secure Configurations for Network Devices such as Firewalls, Routers, and Switches
Boundary Defense
Maintenance, Monitoring, and Analysis of Security Audit Logs
Application Software Security
Controlled Use of Administrative Privileges
Controlled Access Based on the Need to Know
Continuous Vulnerability Assessment and Remediation
Account Monitoring and Control
Malware Defenses
Limitation and Control of Network Ports, Protocols, and Services
Wireless Device Control
Data Loss Prevention
Secure Network Engineering
Penetration Tests and Red Team Exercises
Incident Response Capability
Data Recovery Capability
Security Skills Assessment and Appropriate Training to Fill Gaps
I encourage those interested to read the large PDF version of the document.
Also, I recommend that anyone running the security defenses at an IT shop take a look at the control recommendations and note where his or her organisation's policies, procedures, and implementations have gaps.
The list is not ranked by priority. You would first have to determine what your organization's risks are, decide which are not being optimally addressed, and then go about fixing the gaps. For instance, in most companies the biggest risk leading to the most compromises is end-users installing things that they shouldn't, such as malware. Controls under the umbrellas of Malware Defenses and Controlled Use of Administrative Privileges are the ones most likely to address those problems appropriately. When your big problem is end-users installing fake antivirus programs, controls such as Boundary Defense and Secure Network Engineering aren't going to get you a lot of bang for your buck.
I especially like that the controls include inventories. I'm surprised by how many IT shops have no idea what software and hardware is used within their environment, especially the unmanaged components. The only other inventory item I would add is a data inventory. All the controls we are mentioning exist to protect the data, and you can't implement the Data Loss Prevention control if you don't know where the data is.
Again, I encourage computer security defenders to download and review the bigger document. You will improve your ideas — you won't be able to help it.

Opinion: Make your mark by stopping hackers

I remember being excited when I was asked to use a sledgehammer to tear down a covered garage that wasn't approved by the city. It had been standing beside my girlfriend's house for years. You could tell it was built intelligently and with love. The supporting beams were twice as thick as required by code, and every nail and screw was driven straight. The lumber itself was top shelf, not a knot or bend in it.
I have a hard time driving a nail straight — yet it took me less than an hour to turn the structure into a crumpled pile of lumber. In the security world, something similar happens every day when hackers tear down whole networks and systems.
In reality, hacking is easy once you know what you're doing. Defending is hard. If you want to truly impress the world, develop systems and applications that will be used by a lot of people while being resistant to easy hacking. Anyone can knock down a garage. But build one that can't be taken down by a blockhead swinging a heavy sledgehammer, and you've done something.
Hacking is all too easy

Opinion: We're doomed to insecurity in the cloud

Working in the IT security field, you spend every waking hour striving to improve protection and lower risk. Then another computing technology emerges - the internet, wireless networking, mobile computing, social networking, and so on - and you have to learn every security lesson all over again, as if something new and surprising had come along.
In the past few weeks, we've seen authentication token leaks from Facebook; a rise in mobile malware; major networks running without a firewall and with unpatched major software; and an array of security appliance vulnerabilities. Secunia, which doesn't track every software product, is still publishing 250 to 350 vulnerability announcements per week. Some of the exploited technologies may be relatively new, but in terms of security, it's really more of the same.
Now some people think cloud computing and thin clients will decrease security risks and usher in an age of fewer exploits. I'm not so hopeful.
Thin clients have the potential to be less exploitable, simply because they have fewer lines of code, which should in turn mean fewer bugs, fewer security vulnerabilities, and less attack surface. However, thin clients rely on browsers to do the heavy lifting -- and browsers are the most exploitable pieces of software ever created.
Many readers might still think that Microsoft (my full-time employer) has the most vulnerable browser on the market in Internet Explorer. Surprise, surprise -- every major vendor that has tried to make a significantly less vulnerable browser has failed. Chrome, Firefox, and Safari have vulnerabilities numbering in the hundreds -- far more than Internet Explorer in the same time periods. It turns out making a truly secure browser is harder than it looks.
Further, the forthcoming thin client OSes use these same browsers to do most of the end-user work. How can we expect an entire OS platform to be more secure if the single application it relies on most has hundreds of bugs?
One good argument could be that these forthcoming client computers will have less functionality. They won't allow users to save files (or even states) locally. If the end-user can't save to their machines, it's going to be a lot tougher for malware writers and hackers to manipulate those computers, right? Probably not.
First, just as users aren't supposed to care where their data or profiles are located, malware writers won't care either. Wherever you are allowed to write data, the bad guy will follow. It's merely a change in locale, and as bank robbers break into banks because that's where the money is, the same principle applies here.
Second, I'm already hearing hedges. For example, end-users are asking how they will be able to work on their data files when they aren't connected to the Internet or the vendor cloud. The thin client vendors are replying that the users can work with a locally cached copy while offline. Get that? Users can't save files to their computer, but their computer will save cached copies locally. What's the difference between that computing model and the current PC model? Not much.
It gets worse. Users are going to object to binary security models with thin clients just as they do on PCs and the Internet. As different platforms become more popular, the vendors will be forced to offer more functionality and more granular security. All of that means these devices will likely become as insecure as the platforms they are replacing.
One of my favorite examples is Adobe Acrobat Reader. When all it did was display a document, it was fairly hard to hack. But it became popular and Adobe added features, such as the ability to automatically launch links and executable code from within a PDF document. That ended Reader's free security ride. Today, the software is involved in a sizable percentage of end-user exploits. Adobe releases monthly patches closing dozens of newly discovered and exploited vulnerabilities every year.
I don't blame Adobe. Stay static and your competitor will eat you for lunch. End-users don't buy security; they buy features and coolness. If end-users truly cared about security, OpenBSD would rule the planet. It's free and has a demonstrated 15-year history as the most secure (popular) operating system on the planet. It's the OS of choice for hundreds of thousands of users, but in my two-decade career, I've personally met maybe a dozen people who run it.
Meanwhile, as the cloud gains popularity, some cloud vendors are thinking seriously about security. However, clouds are inherently riskier than traditional platforms, all other factors being equal. First, all clouds rely heavily on virtualization, but virtualization platforms carry every security risk known to physical computers, as well as guest-to-guest and guest-to-host risks.
On top of that, clouds have unique risks that aren't found elsewhere, including multitenancy (multiple customers sharing the same database), broad authentication and authorization schemes (not just your private directory service), and lack of location specificity. With the last issue, how can you protect your data when even the vendor probably doesn't know where it is specifically?
This is not to say that clouds can't be more secure than traditional networks. Most traditional networks I've assessed could only be improved by moving some of their data into the vendor's tremendously more secure datacentre. But I don't think clouds or thin clients will significantly change the amount of vulnerabilities we face each day.
I used to think Internet crime would one day cause a catastrophic tipping point event, where the Internet, as a whole, went down for a day or so. I figured that the tipping point event, similar to the 9/11 attacks, would wake up the world to the Internet insecurities, and we'd eventually fix them.
What I didn't expect is that we'd live with thefts of our money and identities, as bad as they are, as a normal part of life. I especially didn't think that as each new paradigm comes out - social networking, smartphones, thin clients, cloud computing, and so on - we'd relive the same problems over and over. You'd think that along the way we'd heed the lessons learned and be proactive in preventing the same problems on the new platforms. But we're not there yet.

Opinion: Privacy matters again, so you'd better prepare

After two decades of lingering in near obscurity, privacy issues are finally returning to the computer security big table. This shift comes thanks to high-profile cases concerning mobile devices tracking users, massive data breaches, and countless other instances of data being repurposed in ways users never intended. Companies need to be careful now of how they handle user privacy, lest they come under attack not just from hackers but also the media, the law, and the public.
To recap some of the recent news concerning user privacy: Users, politicians, pundits, and the like were aghast to learn that mobile phones running iOS [1], Android [2], and Windows Mobile [3] have been tracking users or storing user location information. If your smartphone vendor doesn't do it, your app vendor could.
Additionally, there are the recent instances of massive data heists from Epsilon [7] and Sony [8], which likely resulted in tens of millions of individual records being stolen. That's just the tip of the iceberg. Who among us hasn't received multiple "Your records may have been stolen" letters each year for the past few years? Not long ago, I calculated that one of every four Americans had their personal identity information stolen in a single year alone. Exactly how much worse does it have to get before we, as a society, expect better safeguarding of our personal and financial data?
What's more, barely a week goes by that Facebook or Google isn't in the headlines (and being questioned by Congress) for some possible privacy invasion. Give your email address to your favorite newsletter and it'll probably result in a flood of spam from sources you'd rather didn't have your contact info.
Even away from your computer or mobile device, your privacy is in jeopardy. For example, cameras are everywhere. My hometown has many red light cameras, which I'm OK with because they make those intersections safer (usually). Further, they help me maintain my own safe driving habits.
But it turns out that many of those cameras store the images of every car that enters the intersection, not just law breakers'. Law enforcement can request records based on license plate numbers and often ends up with a pretty good idea of the path traveled by a suspect. If you have a wireless toll pass device, you already know your car's every move around a toll highway is being tracked and stored.
But consider this: A GPS manufacturer was found to be selling its customers' location and speed data to law enforcement [9] so that police could set up better speed traps.
More alarming still, in the United States, thousands of government and private data sources -- including those I've mentioned -- can end up in a fusion center, set up by the feds in the name of fighting terrorism. Although this data collection is purportedly all legal and details are kept secret, it doesn't appear to be very American, falling under the area of unwarranted search and seizure (see epic.org [10] for more information).
In short, people are feeling increasingly touchy about how their data and their very privacy is used and abused -- and companies are being taken to task to defend and improve practices that put users' data and privacy at risk. Microsoft (my full-time employer) is even careful to ask if it's all right to identify your Windows Media Player instance to media content providers you contact online before doing so, even if it is only to help people access the content they legally bought. Assume too much, and you could end up in a front page headline, testifying in front of Congress, or being sued.
If your company collects or stores other people's personal data, make sure your company has all its privacy components figured out. The best way to protect someone's privacy is not to collect his or her private information in the first place. The second best approach is to collect it when needed, while it's needed, and then erase it. The third best way is to store it, protect it well, then aggressively get rid of it as soon as possible.
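As one illustration of the erase-it-when-done approach, here's a minimal sketch that purges personal records older than a retention window. The table and column names are hypothetical, and the right retention period is a legal question, not a technical one.

```python
# Delete personal records that have aged past the retention window.

import sqlite3

RETENTION_DAYS = 90   # placeholder policy, not legal advice

def purge_expired(db_path: str) -> int:
    conn = sqlite3.connect(db_path)
    with conn:   # commits on success, rolls back on error
        cur = conn.execute(
            "DELETE FROM customer_pii WHERE collected_at < datetime('now', ?)",
            (f"-{RETENTION_DAYS} days",),
        )
    conn.close()
    return cur.rowcount   # number of records erased

print(purge_expired("customers.db"), "expired records purged")
```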
Unfortunately, personal customer information is the lifeblood of many, if not most, companies today. Collecting large amounts of personal information is their primary business model, and it's not going away. If your company does this, has it awakened to the new reality? Does it have a CPO (chief privacy officer)? Is your company's privacy policy readily available and linked from every page of its public website? Does your company consider privacy as strongly as it does the rest of its security policies? Privacy needs to be a big, intentional part of any company's security design.
If you're in charge of your company's computer security, you need to ensure that privacy is a big part of that program. If not, tell the leaders a new wind is blowing. It takes only one minor miscommunication, one minor hack, to end up in the headlines, investigated by Congress, and in court.
