Security professionals who can attend just one conference a year should make it Black Hat in Las Vegas. It attracts extremely talented information security professionals, and its informative, relevant and timely classes and briefings are mostly conducted by industry experts.
An extra benefit is that the annual DefCon computer hacker conference is always held right afterwards. If you don’t get enough security training at Black Hat, stick around and DefCon will be sure to fill your brain to the brim.
Black Hat is fun. When I go, as I did earlier this month, I usually bump into folks I know, whether I worked with them in the past or have just been exposed to them in some fashion. It’s always great to share war stories and reminisce about how things were done in the old days.
You can also find yourself having informative discussions with complete strangers. At this year’s Black Hat, I spent most of my time attending the Microsoft Vista sessions, as I’m interested in learning about the security features of Microsoft’s new operating system. The discussion on Microsoft’s kernel-hardening was interesting, but at one point, I got lost in the technology-speak, not being a kernel-level programmer by trade.
So when my coffee cup went empty, I left the room and ended up in a very interesting hallway conversation with a couple of information security professionals who were talking about Asterisk. That’s an open-source private branch exchange (PBX) telephone system that can be easily installed on Linux.
The PBX talk led to another interesting discovery for me. One of the guys was talking about CallerID on a PBX and mentioned that if you spoof a phone’s CallerID so that it presents a cellphone’s own number, and then call that cellphone, you will probably be dropped straight into its voicemail, because many carriers use CallerID to decide whether it’s the owner calling in. Of course, this won’t work if the cellphone’s voicemail is protected by a password, but how many people configure passwords for their cellphones? Until then, I hadn’t (shame on me). Needless to say, the next time I sent out a standard “Don’t open attachments from strangers” message to our employees, I included that perhaps trivial piece of information (Ed — sounds a lot like the Telecom mobile phone voicemail incident that happened here in New Zealand in May last year).
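For the curious, spoofing CallerID of this kind takes only a few lines of Asterisk dialplan. What follows is a hedged sketch, not a working attack: the context name, trunk name and numbers are placeholders, it assumes a SIP provider that passes outbound CallerID through unverified, and the only sensible use is testing whether your own carrier’s voicemail trusts CallerID.

```ini
; extensions.conf -- hypothetical context; all names and numbers below
; are placeholders. Assumes a SIP trunk ("myprovider") that does not
; validate the CallerID you present.
[callerid-test]
; Present the target cellphone's own number as the calling number
exten => s,1,Set(CALLERID(num)=64211234567)
exten => s,2,Set(CALLERID(name)=Owner Name)
; Dial the cellphone; an unprotected voicemail box may treat the
; call as the owner checking messages
exten => s,3,Dial(SIP/myprovider/64211234567)
```

The defensive takeaway is the one above: put a password on your voicemail, because CallerID on its own is not authentication.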
Back to the briefings. I found a session on incident response presented by the CEO of Mandiant to be a validation of my own incident-response procedures. Mandiant is a professional services company that has a lot of experience with incident response and forensics, and I gained some knowledge about a few tools in its First Response programme. Ironically, it was incident response that had my attention the very day I returned to work from Black Hat.
A succession of people had called our helpdesk claiming that their machines had arbitrarily rebooted, and we suspected that malicious code was propagating through our network. We needed to do a forensic examination on one of the victimised laptops, but many of the callers were in remote offices or even in another country. Luckily, the laptop of one of our project managers, whose office is just a few doors down from mine, was affected, so I started with her machine.
Our normal procedure is to take an image of a compromised laptop so that we can conduct the forensic exam on the mirror image and let the owner of the laptop continue to work. But the calls to the helpdesk were rising and we didn’t have time to take an image. We had to get right to work tracking down the source of the problem.
We used some tools to view things like running processes, event logs, open ports, services and scheduled tasks. Nothing showed up that looked malicious. We installed and ran a couple of different virus-detection tools. Again, we saw no signs of anything malicious. We even connected the laptop to a hub and used Ethereal, a free network-protocol analyser (sniffer), to monitor the network traffic generated by the laptop. It all looked legitimate.
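For readers building their own live-triage checklist, the checks we ran map to a handful of commands. This is a minimal sketch on a Unix-like host (the affected laptops ran Windows, where tools such as tasklist, netstat, sc, schtasks and the Event Viewer cover the same ground); the scratch directory and file names are arbitrary.

```shell
# Snapshot volatile state before deeper forensics; write everything
# to a scratch directory so the evidence can be reviewed later.
mkdir -p /tmp/triage

# Running processes
ps aux > /tmp/triage/processes.txt

# Open ports and active connections (raw kernel tables on Linux)
cat /proc/net/tcp /proc/net/udp > /tmp/triage/sockets.txt

# Scheduled tasks for the current user (may be absent on minimal hosts)
crontab -l > /tmp/triage/cron.txt 2>/dev/null || true

# Recent log entries (log locations vary by distribution)
tail -n 100 /var/log/syslog /var/log/messages 2>/dev/null \
    > /tmp/triage/logs.txt || true
```

Whatever the platform, the point is the same: capture the volatile state first, then compare what you see against what the machine is supposed to be running.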
Then, as we were getting ready to roll up our sleeves and conduct some advanced forensics, one of the desktop engineers discovered that a Microsoft Systems Management Server (SMS) push had been executed the night before. We use SMS to push patches and selected software to our 10,000 desktops. Usually, we schedule such activities in advance and tell users and the helpdesk about the pending push. Even emergency pushes still involve a fair amount of communication. This time, unfortunately, we’d had a communications breakdown.
The systems engineer responsible for pushing the security patch was under the impression that the proper coordination and communication had occurred, and that he had permission to execute the SMS push. But a check of our Remedy ticketing system showed no planned or emergency change control for the SMS push, and no communications had been sent out.
Fortunately, since most of our users are configured for automatic updates, only those users who didn’t have the required patch (which necessitated a reboot) were affected.
But while this was a bit of a false alarm, we ended up with a good exercise in incident response that validated some of our procedures.
One thing I’ve learned over the years is that every security incident is different. But whether an incident is a real security breach, malicious code propagation or some quirky network issue, a solid incident-response process should let you sort it all out efficiently.
This time, I’ve learned to add a check for recent SMS pushes to our response protocol, so that next time we won’t be running around like chickens with our heads cut off for no reason.