You don’t need a wizard to guess that in 50 years’ time, if you’re still around, computing will be vastly different from the way it is today. Everything, including your appliances, will be connected to the Internet, you may have a brain implant to enhance your thinking, you may even be cloned electronically via the power of the microchip, and one of your accessories is likely to be a pair of eyeglasses that bring additional information via the — you guessed it — World Wide Web. Hello, brave new world and, once again, hello Dolly.
If all this sounds a bit fanciful, don’t blame me — blame the brainiacs of US computing. Many of them got together in San Jose a little over a week ago for a look at how the world will be 50 years hence. Among them were professors galore and a smattering of Nobel Prize winners — and that’s not counting the speakers, who were introduced by British science broadcaster James Burke.
As they unravelled their visions of the future, a few obvious points came to the fore.
First, no one can confidently predict the unpredictable — those revolutionary technologies that have given rise to an entirely new way of relating to the world. Number among those the motor car, the light bulb, television and, of course, the World Wide Web. And ask yourself how many of these technologies existed a little over 100 years ago. Now ask yourself who did the best job of predicting their rise — the scientists or the science fiction writers. Full marks to H.G. Wells and company.
That conclusion raises a second question: can these feeble machines, with less perceptual ability than a fly, live up to the expectations of computer boffins? The answer, if one accepts the thinking of the educators and of science fiction novelist Bruce Sterling, is no. Sterling made fun of suggestions from Microsoft’s Nathan Myhrvold and others that computers can replace humans — the problem, he says, is that thinking is too often confused with logic. Thinking and creativity are quite different from the calculating ability of computers.
Within the two camps — the computer scientists versus the creative types such as educators, film-makers and writers — are encompassed the contradictions of forecasting.
Murray Gell-Mann, winner of the Nobel Prize for his work on the theory of elementary particles and his proposal of the quark, says the world needs a reward system to enable the rise of competing information processors to interpret and organise the explosion of information and misinformation arising from the Information Age. The ability to see across specialised fields will be essential for the sifting of knowledge and for the development of global unity, he says.
“We human beings seem to be moving gradually, and with many disheartening setbacks, toward supplementing our local and national feelings with a planetary consciousness that embraces the whole of humanity and also, in some measure, the other organisms with which we humans share the biosphere,” he says.
His words are complemented by those of actor and researcher Brenda Laurel, who describes computers as a “brain in a box, without body, soul, intuition, passion or morality”.
“Yes, we made the computer, but in its role as a cultural symbol the computer also defines us,” she says.
Both Laurel and Bran Ferren, executive vice-president for creative technology and research and development at Walt Disney Imagineering, echo the words of Gell-Mann. Ferren talks about the need for storytelling in computing, the need to pass on knowledge.
Laurel conveys part of her message through her own storytelling. She tells a tale from her childhood, when her mother entered her in an Idaho costume competition dressed as an ear of corn.
She arrived at the judging on time, only to find that the other competitors had arrived early, prompting the country-store judge to hand out his awards ahead of schedule. To avoid a fuss, the judge gave her a “prize” off his shelves. This, he told the young Laurel, was the Eniac, the first electronic computer: you insert a card with the question at this end and out pops the answer at the other end. That, says Laurel, raises the question: what about the question beyond the bounds of the computer’s knowledge?
Laurel says multimedia artists are using computers to enhance personal storytelling. “Stories are content; storytelling is relationship.” Laurel urges her audience to consider whether the computer should define the question, as in the costume contest, or whether computers should be created to align more closely with the way people think.
Elliot Soloway, an engineering professor at the University of Michigan, urges that all children be given laptop computers and that their schoolwork be more closely identified with the communities where they live. In the past, he says, children were programmed at school to enter the industrial age as factory workers. In today’s schools children even sit in rows resembling a factory production line. For the information age, students need a new model to prepare them for the future.
Meanwhile, Myhrvold, Microsoft’s chief technology officer, says he hopes to return in the year 2047 not to talk about software, but as software.
Myhrvold says he expects computers to be at least as smart as people within the next 20 to 30 years, and possibly to contain scanned images of the brain within the next 50 years.
“I have recently been looking at how long it takes a computer to boot up,” Myhrvold says. “Well, to the best of my knowledge, it takes about 20 years to boot up a human. So why not scan a human into a computer?”
“Humans differ from each other by about 0.25% of genetic data,” Myhrvold jokes. “So all the genetic information that makes you could be stored in 1.2Mb of data. That means you could fit on a floppy disk.”
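Myhrvold’s figure is a back-of-the-envelope joke, but it can be sanity-checked with a few assumed numbers — a genome of roughly 3 billion base pairs, 2 bits per base, and the quoted 0.25% variation (the genome size and encoding are illustrative assumptions, not figures from the talk):

```python
# Rough check of Myhrvold's floppy-disk joke, under assumed figures:
# ~3 billion base pairs in the human genome, 2 bits per base (A/C/G/T),
# and the quoted 0.25% person-to-person variation.
GENOME_BASES = 3_000_000_000
BITS_PER_BASE = 2
VARIATION = 0.0025

differing_bases = GENOME_BASES * VARIATION          # 7.5 million bases
size_bytes = differing_bases * BITS_PER_BASE / 8    # bits -> bytes
size_mb = size_bytes / 1_000_000

print(f"{differing_bases:,.0f} differing bases, about {size_mb:.2f} MB")
```

Under these assumptions the answer comes out just under 2MB — not Myhrvold’s exact 1.2Mb, but the same order of magnitude as a 1.44MB floppy disk, which is presumably the spirit of the joke.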
Myhrvold also humorously set out his own laws of computing:
• Software is like a gas and it will expand to fit its container.
• Software will never reach industrial maturity.
• The software industry is now and always will be in a state of crisis.
“Nathan’s law holds that software grows until it becomes limited by Moore’s Law.”
And, talking of Moore’s Law, several speakers at the conference, organised by the Association for Computing Machinery, predict that silicon chips will have reached their limits in the first couple of decades of the next century. Alternatives, they say, could include synthetic DNA or neuron- and photon-based memory. Keep in mind, too, that your wristwatch PC will contain a petabyte of memory.
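Moore’s Law, as commonly stated, has transistor counts doubling roughly every two years, which is why the speakers expected silicon to hit physical limits within a few decades. A small sketch of that exponential, using an illustrative 1997-era starting point of 7.5 million transistors and an assumed two-year doubling period (neither figure comes from the conference):

```python
# Illustrative Moore's Law projection: transistor counts doubling
# every `doubling_period` years. The rate is an assumption; Moore's
# original 1965 observation was later revised to roughly two years.
def projected_transistors(start_count: int, years: float,
                          doubling_period: float = 2.0) -> float:
    """Projected transistor count after `years` of exponential growth."""
    return start_count * 2 ** (years / doubling_period)

# From ~7.5 million transistors in 1997, 20 years of doubling gives
# ten doublings, i.e. a 1024-fold increase: about 7.7 billion.
print(f"{projected_transistors(7_500_000, 20):,.0f}")
```

Fifty years of the same growth would mean 25 doublings — a factor of over 30 million — which makes clear why physical limits, rather than engineering effort, were expected to end the trend.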
If that sounds fanciful, you may not wish to heed the words of the so-called father of the Internet, Vinton Cerf, who believes that even the walls of your house will be connected to the Internet in 50 years. You’ll be wearing an Internet earpiece, finger mouse, display glasses and wrist communicator, he suggests. If that thought frightens you, MIT’s Pattie Maes has an even more extreme prognosis of the future. She says some of her students are already wandering around campus with special headpieces that supply reminders and other information to supplement the abilities where humans are weakest. Take memory — Maes admits her own is not the best. So bring on the computer, except that by 2047 it may not just be wearable technology — you’ll probably have an implant.
If all this techno stuff frightens you, Cerf has one suggestion you may well like: by 2047 your VCR will be programmable via the Web. And the clock in the VCR will reset itself automatically after a power failure.
Carver Mead, a professor at the California Institute of Technology, believes it should be possible to mimic the function of the brain’s electrical goo. And Microsoft senior researcher Gordon Bell believes that by 2047 you will no longer need to travel in person to a conference — you’ll participate via cyberspace.
But keep in mind the disruptive technologies, as Hewlett-Packard scientist Joel Birnbaum calls them — those technologies you simply can’t predict. That’s the realm of science fiction writers.