Early and extensive deployment of firewalls gave internet users “a false sense of security” and compromised the ideal end-to-end transparency of the internet, says former Internet Engineering Task Force head Brian Carpenter.
The UK-born Carpenter gave the Institution of Engineering and Technology’s annual Prestige lecture this month.
Carpenter stepped down as IETF chair last year and now lectures at the University of Auckland.
“The firewalls and address translation [used to give private domains their own address space] have taken away that property,” he says. The loss of transparency limits what can be done with the internet and puts innovation at risk.
In the lecture, on the theme “The internet, where did it come from and where is it going?”, Carpenter took his audience through a potted history of the internet and its past, present and future challenges.
“One of the results of firewalls that’s paradoxical is that [their use] slows down the deployment of complete security in the end systems, in your own computer,” Carpenter says.
“If we didn’t have firewalls there’d be no choice, everyone would have to be completely secure in their personal computers, their servers and so on. Because firewalls have been put in place, people believe they’re secure. And they may not be, as it’s well known that email viruses, for example, can get through firewalls pretty easily.
“A lot of corporate network people in particular have not paid enough attention to security in the end system, because of this false sense of security [conveyed by] firewalls,” he says.
With the arrival of the extended addressing scheme, IPv6, the need for address translation will virtually disappear. Cultivating the habit of end-to-end security, so that internet users can dispense with conventional firewalls, will prove a great deal harder.
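The scale difference behind that point can be illustrated with Python's standard `ipaddress` module. This is a sketch of my own; the specific ranges are the RFC 1918 private blocks that address translation typically hides, not figures from the lecture:

```python
import ipaddress

# RFC 1918 private IPv4 space: the ranges NAT maps behind a
# shared public address (illustrative, not from the lecture).
rfc1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]
private_v4 = sum(net.num_addresses for net in rfc1918)

# A single IPv6 /64 — the conventional size of one subnet —
# already dwarfs all of that (2001:db8::/32 is the documentation prefix).
one_subnet = ipaddress.ip_network("2001:db8::/64")

print(private_v4)                # 17891328 addresses in all of RFC 1918
print(one_subnet.num_addresses)  # 2**64 addresses in one IPv6 subnet
```

With every host able to hold a globally unique address, the original motivation for translation boxes largely evaporates, which is Carpenter's point.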
The principle of uncomplicated end-to-end transfer of data, Carpenter says, has something in common with David Isenberg’s advocacy of a “stupid” network, a network concerned with simply passing bits and with all intelligence at the edge.
But the purity of that concept has to be sacrificed on occasion, he says. For example, “if you have two different ways of encoding voice to send it over the internet and you want to build a phone call between the two, you need a magic box somewhere in the middle to convert one to the other, so you need some intelligence in the network for that sort of thing.”
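The “magic box” Carpenter describes can be sketched in a few lines of Python. The two codecs here are toy stand-ins of my own (base64 and hex, not real voice encodings); the point is only the shape of the middlebox, which decodes one party's format and re-encodes for the other:

```python
import base64

# Toy stand-ins for two incompatible voice codecs (not real audio formats).
def codec_a_encode(pcm: bytes) -> bytes:
    return base64.b64encode(pcm)

def codec_a_decode(data: bytes) -> bytes:
    return base64.b64decode(data)

def codec_b_encode(pcm: bytes) -> bytes:
    return pcm.hex().encode()

def codec_b_decode(data: bytes) -> bytes:
    return bytes.fromhex(data.decode())

def magic_box(packet: bytes) -> bytes:
    """The in-network converter: decode caller A's format, re-encode for B."""
    return codec_b_encode(codec_a_decode(packet))

samples = b"\x01\x02\x03"          # raw payload from caller A
received = codec_b_decode(magic_box(codec_a_encode(samples)))
assert received == samples          # payload survives the translation
```

The intelligence lives in `magic_box`, inside the network rather than at either edge, which is exactly the compromise of the “stupid network” ideal Carpenter is pointing to.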
Carpenter agrees with the suggestion of Victoria University’s John Hine that it can be difficult to define the edge of today’s complex networks.
“These principles are not black and white,” he says. They are engineering principles and engineers are allowed to employ approximate definitions.
“The basic principle is still valid. It’s not obvious that you will make money out of putting very complex services very deep in the network.”
The scaling up of wide-area routing mechanisms is another important challenge in the future evolution of the internet, he says: those mechanisms were designed 10 years or more ago for a very much smaller network.