How the Internet was made future-proof

The basic design of the Internet is 30 years old. It's a very successful design that now handles more traffic at faster rates than anyone in the 1970s expected. One of the main architects of the Internet was Vinton G. Cerf. He is now a director of ICANN and a senior vice president of technology at MCI Inc. In this interview with Computer Sweden's Anders Lotsson, he explains the design choices that made the Internet future-proof.

Q: Are you surprised that the Internet has worked so well for so long?

A: No, it was designed to be "future-proof" in the sense that new telecommunications technologies were expected to be able to carry IP packets, and IP is the primary protocol on which the rest of the superstructure is based.

Q: What were the fundamental design decisions that made the Internet work so well?

A: Layering of the protocols with clear interfaces and protocol specifications. Open architecture design: Anyone can implement; easy to add new protocols; reference implementations; public specifications of protocols; no proprietary limitations ...
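
As an illustration of that layering principle (a sketch of mine, not part of the interview), the following Python fragment wraps application data in a hypothetical transport header and a hypothetical network header. Each layer reads only its own header and passes the rest along, which is what lets layers be replaced without touching the others.

    # Illustrative sketch: protocol layering by encapsulation. Each layer
    # adds its own header on the way down; the receiver peels the headers
    # off in reverse order on the way up.
    import struct

    def transport_wrap(payload: bytes, src_port: int, dst_port: int) -> bytes:
        # Hypothetical transport header: source port, destination port, length.
        return struct.pack("!HHH", src_port, dst_port, len(payload)) + payload

    def network_wrap(segment: bytes, src: int, dst: int) -> bytes:
        # Hypothetical network header: 32-bit source and destination addresses.
        return struct.pack("!II", src, dst) + segment

    # Sending side: application data is wrapped layer by layer.
    packet = network_wrap(transport_wrap(b"hello", 5000, 80),
                          0x0A000001, 0x0A000002)

    # Receiving side: each layer only inspects its own header, then hands
    # the remainder up to the layer above.
    src, dst = struct.unpack("!II", packet[:8])
    src_port, dst_port, length = struct.unpack("!HHH", packet[8:14])
    print(packet[14:14 + length])   # -> b'hello'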

Q: Are there features in the design that were made to accommodate the hardware and the bandwidth of the 1970s and that could be changed now?

A: The 32-bit address space was partly a function of reasonable header size at the data rates and packet sizes available in that period; we need to go to 128-bit IPv6. We did not insist on IPsec, but now I think we should. We did not include a lot of packet header examination on ingress to the network; now I think we should -- for security reasons. We did not make heavy use of PKI, but now I think we should -- although quantum cryptography is starting to become available and I haven't fully absorbed all the implications yet.
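
To make the address-space arithmetic concrete (an illustration of mine, not part of Cerf's answer), the sketch below uses Python's standard ipaddress module to compare the 32-bit IPv4 space with the 128-bit IPv6 space. The addresses shown are reserved documentation examples.

    # Size of the two address spaces.
    print(2 ** 32)    # 4,294,967,296 possible IPv4 addresses
    print(2 ** 128)   # roughly 3.4e38 possible IPv6 addresses

    import ipaddress

    # The on-the-wire representation: IPv6 headers carry 128-bit source
    # and destination addresses instead of 32-bit ones.
    v4 = ipaddress.ip_address("192.0.2.1")
    v6 = ipaddress.ip_address("2001:db8::1")
    print(v4.packed.hex(), len(v4.packed) * 8)   # 4 bytes  -> 32 bits
    print(v6.packed.hex(), len(v6.packed) * 8)   # 16 bytes -> 128 bits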

At higher data rates and longer delays (optical fiber, high-speed satellite), we need larger windows for flow control. Some variants of TCP use flow-based congestion and flow control rather than window/buffer sizes; this is worth looking into.
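
The window-size point follows from the bandwidth-delay product: a sender must keep that many bytes in flight, unacknowledged, to keep the path full. A small worked example (mine, not Cerf's, with illustrative link speeds and round-trip times):

    def bandwidth_delay_product(bits_per_second: float, rtt_seconds: float) -> float:
        """Bytes that must be in flight to fill the path."""
        return bits_per_second * rtt_seconds / 8

    # A 1 Gbit/s optical path with a 100 ms round-trip time:
    print(bandwidth_delay_product(1e9, 0.100))    # 12,500,000 bytes

    # A 10 Mbit/s geostationary satellite link with a ~550 ms round trip:
    print(bandwidth_delay_product(10e6, 0.550))   # 687,500 bytes

    # Classic TCP's 16-bit window field caps the window at 65,535 bytes,
    # far below either figure; window scaling (RFC 1323) and larger
    # buffers are needed to use such paths efficiently.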

Q: The Internet was designed to let the servers make all the decisions and keep the network itself stupid. Is that still the best approach?

A: Generally, yes -- although we should consider ways to allow applications to express class of service requirements and to allow networks to convey performance information back to the higher layers of protocol. A lot of the flexibility of the Internet is a consequence of not allowing application-level information to get too deeply embedded in the network.
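
One existing hook for expressing class-of-service requirements is the DSCP/TOS byte in the IP header. The sketch below (an illustration of mine, assuming a Linux host; not something described in the interview) marks a UDP socket's traffic as Expedited Forwarding. Whether routers act on the marking remains an operator decision, which keeps the application-level detail out of the network core.

    import socket

    EF = 0x2E << 2   # DSCP "Expedited Forwarding" (46), shifted into the TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF)   # express the preference
    sock.sendto(b"latency-sensitive payload", ("192.0.2.10", 5004))  # example address
    sock.close()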

Q: Can there be any privacy on the Internet?

A: Yes, one can achieve a fair degree of privacy both during transmission of data and in storage. However, operating systems are a weak spot; they need to be much more carefully designed and constructed to resist penetration.
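
For privacy during transmission, TLS is the standard mechanism today. A minimal sketch using Python's standard-library ssl module (my illustration; example.com is just a placeholder host):

    import socket
    import ssl

    context = ssl.create_default_context()            # verifies the server certificate
    with socket.create_connection(("example.com", 443)) as raw:
        with context.wrap_socket(raw, server_hostname="example.com") as tls:
            print(tls.version())                       # e.g. 'TLSv1.3'
            tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
            print(tls.recv(200))                       # encrypted on the wire, plaintext here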

Q: Until the 1990s, the Internet was run by scientists and engineers, and it was based on trust and best effort. That's probably why it succeeded, but now it seems like a vulnerability. Could this be changed?

A: I think there is still an enormous dependence on trust at least among the ISP operators, the DNS operators, etc. and this is still necessary. At the same time, the system is exposed to more malicious behavior than before and has to be re-tooled to resist these incursions. There are more vendors of Internet-related equipment and software and we need to remind these players of the need to establish cooperative relations to assure Internet stability. The ICANN organization has a similar challenge with regard to the Domain Name System, for example.

Q: Do you follow projects for the further development of the Internet, such as PlanetLab and Evergrow? What do you think of them?

A: I am tracking the former and endorse its objectives. I am not familiar with the latter.
