The University of California at Berkeley has made a name for itself in networking, with innovations such as Unix, the Berkeley Internet Name Domain (BIND), Smart Dust and SETI@home. But the university has also made headlines over the past few years for things of which it is less proud, namely, a couple of security breaches (a stolen laptop containing personal information on graduates and a compromised database of Californian residents).
At the start of the year, the university published a scathing self-study of its Information Systems and Technology department. It acknowledged the school’s advanced IT network and talented professionals but recommended radical changes to the IT department’s governance and structure.
Clifford Frost, director of Berkeley’s Communications and Network Services (CNS), spoke about ensuring that when people think of the school, they think “innovation”, not “infiltration”.
How has IT evolved at the university?
It’s been haphazard. In the case of the network, it’s been pretty organised. Back in the 1980s, there were campus-wide committees that said networking is going to be important so let’s start building it up now. The campus financial and administrative systems are pretty advanced. But the campus student systems [such as online registration and course catalogues] are less well-funded and organised because there has not been a single high-level sponsor. This is one of the key things the campus is open to addressing in the re-organisation.
What is your security plan?
Every networked device has to have its operating system kept up to date with security patches — Windows 95 is not allowed unless you buy a separate firewall device and stick it in front [of Windows 95]. There are microscopes controlled by old operating systems — [the owners] have to put a firewall in front of them. We have software that people can use for free — they don’t have to buy their own firewall or anti-virus software.
Having a policy only goes so far. McAfee’s Foundstone scanner allows us to scan the network continuously for vulnerabilities. [If something is found] we tell [the device owners] to fix it or we turn off their access. Departments can log in and scan their own nets.
How else do you secure the network?
We do intrusion detection at the border of the campus network, and more and more inside the network. We monitor to detect when systems have been broken into, or are being broken into or are about to launch an attack, and we can turn them off. We use McAfee IntruShield, Snort, Nessus and the Bro Intrusion Detection System. [Intrusion detection] is a big issue because we’ve had some pretty big security breaches on campus. There is a big thrust in getting people to encrypt data on their desktop or laptop.
How do you get ahead of the security challenges?
The latest thing we’re doing is getting people on campus to audit their systems, and the recommendation is to remove [sensitive information]. If they have to have it, we will help them secure their system with tools for encryption. This isn’t just a Berkeley problem, it’s a problem for every single university in the country.
Were these measures enacted after the breaches?
Encryption was accelerated by the breaches. We were working on scanning but, boy, did we start doing it a lot more. The whole campus culture was affected. Everyone on campus is in the business of doing research and sharing information. Everyone on campus knows we don’t want to be sharing social security numbers, but it does create tension between sharing and security. It means educating users. So far, I haven’t experienced resistance to education, but the amount we have to do is pretty staggering.
How are they still able to share information?
The best way is not to have any sensitive information on your system. Why do you need to have people’s social security numbers in records? If it’s only because they’re a unique ID, you can take that data, transform it and keep the uniqueness.
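Frost doesn’t say which transformation Berkeley uses, but one common way to keep a value’s uniqueness without storing the value itself is a keyed hash. The sketch below is purely illustrative (the key name and function are assumptions, not anything described in the interview):

```python
import hmac
import hashlib

# Hypothetical secret key for illustration only; in practice this would
# live in a key-management system, never in source code.
SECRET_KEY = b"campus-pseudonymisation-key"

def pseudonymise(ssn: str) -> str:
    """Map a social security number to a stable token.

    The same input always yields the same token, so the token still works
    as a unique record ID, but it cannot be reversed without the key.
    """
    return hmac.new(SECRET_KEY, ssn.encode(), hashlib.sha256).hexdigest()

# Uniqueness is preserved: equal inputs give equal tokens,
# different inputs give different tokens.
assert pseudonymise("123-45-6789") == pseudonymise("123-45-6789")
assert pseudonymise("123-45-6789") != pseudonymise("987-65-4321")
```

Using a keyed hash rather than a plain one matters here: an unkeyed hash of a nine-digit number could be reversed by brute force.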
What’s your relationship with the Department of Electrical Engineering and Computer Science? Why does it manage its own network?
There was a time when we managed what was called a production network and they managed their research network. For a faculty member this wasn’t a meaningful distinction. If he wanted something done with the network, who should he call — CNS or Fred in research? It didn’t work out. Everything needed the Fred-level of attention. We help them with design and specifications. When we do RFPs for networking gear, we do them together, so they purchase gear on contracts we work out. When they have special research projects that go beyond the bounds of their two buildings, we work with them on that. There’s the Millennium Project [EECS’s managed clustered computing service] for which we set up the network around campus.
What do you learn from the EECS department?
I look to them for long-range developments. There’s a very high-profile project there called PlanetLab [described as “an open platform for developing, deploying and accessing planetary-scale services”; UC Berkeley is a consortium member along with Princeton University and the University of Washington]. It’s not just about networking — one of the aspects is: what if we have to address not just computers but cellphones? [Also] how does networking work if we scale the size of things up several orders of magnitude? For example, the department has Smart Dust — tiny sensors that run TinyOS and TinyDB. They scatter this stuff out there — put it in trees, on animals — they’re all networked together and people monitor them. That’s different from [managing] a connection in every office.
The best example [of what we’re learning from EECS] is wireless, which on campus was started as a research project. When we started in 2000, wi-fi wasn’t that common. [The EECS] started building wireless networking into their buildings and they wanted to research how people used it in a wider area. They came to us and said ‘We have a research partner who will sponsor us and help pay for putting a network out there’. The research partner funded 100 wireless cards for students to use. In the first years, students had to agree they were part of the project and that their activity on campus would be tracked anonymously. That went on for a year, a year and a half. At the time, we named the wireless network AirBears and we started installing it as a service outside of the research project.
Now we’re experimenting with mesh networking. In the last year or so, we’ve been using alpha and beta products from Cisco [released in November as Aironet 1500 Series Access Point]. The mesh is a swath through the middle of campus. We’re looking to expand it to where students want it, which turns out mostly to be the restaurants on the south side of campus.
Where is the school headed in high-speed networks?
One view is that the researchers on the UC campuses will need a whole lot more capacity because they will have a lot more data they want to share. One school of thought is that we have high capacity between the campuses and we make it higher so everyone can share. Another school of thought is that these researchers need their own fibre path between, for example, their lab at UCLA and the lab at Berkeley.
Maybe a piece of the lambda light that goes across. Right now, the network between us and UCLA — all the higher education institutes in California — is CalREN [California Research and Education Network], provided by CENIC [Corporation for Education Network Initiatives in California]. The capacity between us and UCLA is 2Gbit/s; we already plan to make that 10Gbit/s within two years. Depending on demand it could happen in six months. The EECS has expressed an interest in higher capacity and because of that it might happen in the summer. We’re ready to do it. It’s just a matter of expense.
What are the key priorities for your department?
There are still a couple of dozen buildings where the physical infrastructure for networking is ancient. If somebody wants a 100Mbit/s connection, we can’t give it to them. There aren’t that many disciplines left where networking isn’t critical, so that’s a big issue for us. We’re making steady progress — we’ve got another few years to go.
[Then there are] distributed storage networks, and ... clustered computing and high-performance clusters.
The distributed storage networking would be between the main campus datacentre, the College of Chemistry and the business school. There are a few computational clusters already around campus for traditional maths, physics, astronomy, computer science, computational biology and, probably, chemistry. A central IT organisation should be able to build a very powerful computational cluster more efficiently.