FRAMINGHAM (10/06/2003) - Technology has always been a double-edged sword, empowering both our creative and our destructive natures. It has brought us longer and healthier lives, freedom from physical and mental drudgery, and many new creative possibilities. Yet it has also introduced new and salient dangers.
Stalin's tanks and Hitler's trains used technology. And we still live today with sufficient nuclear weapons--not all of which appear to be well accounted for--to end all mammalian life on the planet.
Bioengineering is set to make enormous strides in reversing disease and aging processes. However, the means and knowledge to create unfriendly pathogens more dangerous than nuclear weapons will soon exist in most college bioengineering labs. As technology accelerates toward the full realization of genetic engineering, nanotechnology and, ultimately, robotics (collectively known as GNR), we will see the same intertwined potentials: a feast of creativity resulting from human intelligence expanded manyfold, combined with grave new dangers. We need to devise our strategies now to reap the promise while we manage the peril.
Consider unrestrained nanobot replication. Nanobot technology requires the coordinated operation of billions or trillions of intelligent microscopic devices to be useful. The most cost-effective way to scale up to such levels is through self-replication, essentially the same approach used in the biological world. But in the same way that biological self-replication gone awry results in biological destruction (cancer, for example), a defect in the mechanism that safely controls nanobot self-replication would endanger all physical entities, biological or otherwise.
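The economics of self-replication follow from exponential growth. A back-of-the-envelope sketch (the trillion-device scale is the figure above; the assumption that each device copies itself once per cycle is a deliberate simplification):

```python
import math

def generations_to_reach(target: int, offspring_per_cycle: int = 2) -> int:
    """Replication cycles needed for one self-replicating device to grow
    into a population of at least `target`, assuming the population
    multiplies by `offspring_per_cycle` each cycle (doubling by default)."""
    return math.ceil(math.log(target, offspring_per_cycle))

# One trillion nanobots from a single seed device:
print(generations_to_reach(10**12))  # -> 40
```

Forty doublings from a single seed yields a trillion devices, which is why self-replication is the only plausible manufacturing path to such numbers, and also why a defect in the mechanism that halts replication is so dangerous: the same arithmetic that builds the swarm keeps running.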
The threats of nanotechnology don't stop there. We must also worry about control and access. Organizations, governments, extremist groups or just a clever individual could create havoc with this technology. For example, one could put trillions of undetectable nanobots in the water or food supply of an individual or an entire population. These "spy" nanobots could then monitor, influence and even control our thoughts and actions. Existing "good" nanobots could be influenced through software viruses and other hacking techniques. When there is software running in our brains, issues of privacy and security will take on a new urgency.
People often go through three stages in examining the impact of future technology: awe and wonderment at its potential to overcome age-old problems; then a sense of dread at a new set of dangers that accompany the new technology; followed, finally and hopefully, by the realization that the only viable and responsible path is to set a careful course that can realize the benefits while managing the risks.
The diverse GNR technologies are progressing on many fronts and comprise hundreds of small steps forward, each benign in itself. An examination of the underlying trends, which I have studied for the past quarter century, shows that full-blown GNR is inevitable.
Sun Microsystems' Bill Joy has eloquently described the plagues of centuries past and how new self-replicating technologies--such as mutant bioengineered pathogens and nanobots run amok--may bring back long-forgotten pestilence. It is also the case, as Joy acknowledges, that technological advances such as antibiotics and improved sanitation have freed us from the prevalence of such plagues. Suffering in the world continues and demands our steadfast attention. Should we tell the millions of people afflicted with cancer and other devastating conditions that we are canceling the development of all bioengineered treatments because there is a risk that those same technologies may someday be used for malevolent purposes? Having asked the rhetorical question, I realize that there is a movement to do exactly that, but I think most people would agree that such broad-based relinquishment is not the answer.
I do think that relinquishment at the right level needs to be part of our ethical response to the dangers of 21st-century technologies. One constructive example is the ethical guideline proposed by the Foresight Institute, founded by K. Eric Drexler (creator in the 1980s of the conceptual foundations of molecular manufacturing) and Christine Peterson. It states that nanotechnologists agree to forgo the development of physical entities that can self-replicate in a natural environment. Another proposal would create what nanotechnologist Ralph Merkle calls a "broadcast architecture," in which physical entities would have to obtain the codes for self-replication from a centralized secure server, which would guard against undesirable replication.
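Merkle's broadcast architecture is described here only conceptually; it concerns molecular hardware, not software. Purely as an illustrative software analogy (the HMAC-token scheme, the single-use rule, and all names below are my assumptions, not part of the actual proposal), the core idea of a replicator that cannot copy itself without a fresh, server-issued authorization might look like:

```python
import hmac, hashlib, secrets

class ReplicationServer:
    """Toy model of a 'broadcast architecture': replication codes are
    never stored on the device; each copy operation requires a fresh,
    single-use token from the central server."""
    def __init__(self):
        self._key = secrets.token_bytes(32)  # server-only signing key
        self._issued = set()                 # outstanding single-use tokens

    def issue_token(self, device_id: str) -> bytes:
        nonce = secrets.token_bytes(16)
        tag = hmac.new(self._key, device_id.encode() + nonce,
                       hashlib.sha256).digest()
        token = nonce + tag
        self._issued.add(token)
        return token

    def authorize(self, device_id: str, token: bytes) -> bool:
        # A token is valid exactly once: replaying it (the signature of
        # runaway replication) is refused.
        if token not in self._issued:
            return False
        nonce, tag = token[:16], token[16:]
        expected = hmac.new(self._key, device_id.encode() + nonce,
                            hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False
        self._issued.discard(token)
        return True

server = ReplicationServer()
t = server.issue_token("nanobot-7")
print(server.authorize("nanobot-7", t))   # first use: True
print(server.authorize("nanobot-7", t))   # replay attempt: False
```

The design choice that matters is the asymmetry: a compromised device gains nothing, because the authority to replicate never leaves the server.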
As responsible technologists, our ethical and professional guidelines should include such fine-grained relinquishment. Other protections will need to include oversight by regulatory bodies, the development of technology-specific "immune" responses, as well as computer-assisted surveillance by law enforcement organizations.
As a test case, we can take a small measure of comfort from how we have dealt with one recent technological challenge. There exists today a new fully nonbiological self-replicating entity that didn't exist just a few decades ago: the computer virus. When this form of destructive intruder first appeared, concerns were voiced that, as software pathogens grew more sophisticated, they could destroy the computer network medium they live in. Yet the immune system that has evolved in response to this challenge has been largely effective. Although destructive self-replicating software entities do cause damage occasionally, the injury is but a small fraction of the benefit we receive from the computers and communications links that harbor them. No one would suggest we do away with computers, local area networks and the Internet because of software viruses.
I would describe our response to software pathogens as effective and successful. Although they remain a concern, the danger persists at only a nuisance level. Keep in mind that this success is in an industry in which there is no regulation and no certification for practitioners. And this industry is enormously productive. One could argue that it has contributed more to our technological and economic progress than any other enterprise in human history.
I hasten to point out that the battle concerning software viruses (and the panoply of other software pathogens) is not over and never will be. We are becoming increasingly reliant on mission-critical software systems (for example, the software running our 911 system, transportation, nuclear power plants, hospitals and many others), and the sophistication and potential destructiveness of self-replicating software weapons will continue to escalate. Nonetheless, we have already managed a number of significant challenges without major damage.
Further dangers from new technologies may appear alarming when considered in the context of today's unprepared world. The reality is that the sophistication and power of our defensive technologies and knowledge will grow along with the dangers. When we have "gray goo" (unrestrained nanobot replication), we will also have "blue goo" ("police" nanobots that combat the "bad" nanobots). The story of the 21st century has not yet been written, so we cannot say with assurance that we will successfully avoid all misuse. But the surest way to prevent the development of the defensive technologies would be to relinquish the pursuit of knowledge in broad areas. We have been able to largely control harmful software virus replication because the requisite knowledge is widely available to responsible practitioners. Attempts to restrict this knowledge would have created a far less stable situation. Responses to new challenges would have been far slower, and it is likely that the balance would have shifted toward the more destructive applications.
Similarly, GNR technologies cannot be stopped, and broad pursuit of relinquishment will only distract us from the vital task in front of us: to rapidly develop ethical and legal standards and defensive technologies that will be essential to our security. This is a race, and there is no alternative.
As we compare the success we have had in controlling engineered software viruses with the coming challenge of controlling engineered biological viruses, we are struck with one noticeable difference. As I noted, the software industry is almost completely unregulated. The same is obviously not true for biotechnology. A bioterrorist does not need to put his "innovations" through the FDA. However, we do require that the scientists developing the defensive technologies follow the existing regulations, which slows down the innovation process at every step. Moreover, under existing regulations and ethical standards, it is impossible to test defenses against bioterrorist agents. There is already extensive discussion of modifying these regulations to allow animal models and simulations in lieu of unfeasible human trials. This will be necessary, but we need to go beyond these steps. We must greatly increase our explicit investment in the defensive technologies. Of all the ways we can combat the abuse of technology, this recommendation is by far the most important. In the biotechnology field, this means the rapid development of generalized antiviral medications. We will not have time to develop specific countermeasures for each new challenge that comes along.
We are at the threshold of the biotechnology challenge. As nanotechnology comes closer, we will need to invest in the development of defensive technologies for that as well, including the creation of a nanotechnology-based immune system. Bill Joy has pointed out that such an immune system would itself be a danger because of the potential of "autoimmune" reactions (that is, the immune system using its powers to attack the world it is supposed to be defending).
However, this is not a compelling reason to avoid its creation. No one would argue that humans would be better off without an immune system because of the possibility of autoimmune diseases. Although the immune system can itself be a danger, humans would not last more than a few weeks (barring extraordinary efforts at isolation) without one. The development of an immune system for nanotechnology will happen even without explicit efforts to create one. We have effectively done this with regard to software viruses. This came about not through a formal grand design project but rather through our incremental responses to each new challenge and by developing heuristic algorithms for early detection. We can expect the same thing will happen as challenges from nanotechnology-based dangers emerge. The point for public policy will be to specifically invest in these defensive technologies.
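The "heuristic algorithms for early detection" that grew up around software viruses can be sketched in miniature. This is a deliberately toy example: the signatures, behavior names, and thresholds below are invented for illustration, and real antivirus engines are vastly more elaborate.

```python
# Two complementary defenses, in the spirit of real virus scanners:
# known-pattern (signature) matching, plus a behavioral heuristic that
# escalates when several risky actions co-occur.

SIGNATURES = [b"\xde\xad\xbe\xef", b"SELF_COPY_LOOP"]          # invented
SUSPICIOUS_CALLS = {"write_to_boot_sector",                     # invented
                    "patch_own_binary", "mass_mail"}

def scan(payload: bytes, observed_calls: set) -> str:
    """Return a verdict for a program given its bytes and observed behavior."""
    if any(sig in payload for sig in SIGNATURES):
        return "quarantine"        # known pattern: block immediately
    risky = observed_calls & SUSPICIOUS_CALLS
    if len(risky) >= 2:
        return "quarantine"        # heuristic: multiple risky behaviors
    if risky:
        return "flag_for_review"   # single risky behavior: escalate to a human
    return "allow"

print(scan(b"hello world", set()))                          # allow
print(scan(b"xx SELF_COPY_LOOP xx", set()))                 # quarantine
print(scan(b"ok", {"patch_own_binary", "mass_mail"}))       # quarantine
```

The structure mirrors the article's point: the defense was not designed in one grand project but accreted incrementally, one signature and one heuristic at a time, as each new challenge appeared.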
One of the arguments made for restricting development of GNR technologies is that offense is much easier and less expensive than defense, giving an edge to groups and individuals with bad intentions. A powerful illustration of this point can be seen in the thousands of lives lost and tens of billions of dollars of damage caused by a handful of terrorists armed with box cutters on Sept. 11, 2001. I would agree with that point if the people and resources on the sides of promise and peril were equal. But, clearly, they are not. The resources we devote to combating terrorism vastly exceed those available on the side of destruction, which is why we are not witnessing regular repetitions of 9/11. Consider that there are tens of thousands of researchers advancing the "G" technology, whereas the numbers on the destructive side are lower by a factor of thousands. As an example in the nuclear arena, who would have guessed in 1945 that the next half-century would not see a single nuclear weapon (beyond the two dropped on Japan) used in anger? The offsetting factor to the inherent advantage of destructive over defensive technologies is the overwhelming balance of resources devoted to constructive and protective applications compared with malevolent ones.
Although the argument is subtle, I believe that maintaining an open system for incremental scientific and technological progress, in which each step is subject to market acceptance, will provide the most constructive environment for technology to embody widespread human values. Attempts to control these technologies through highly restrictive regulation or in dark government programs, along with inevitable underground development, would create an unstable environment in which the dangerous applications would likely become dominant.
One profound trend already well under way that will provide greater stability is the movement from centralized technologies to distributed ones and from the real world to the virtual world. Centralized technologies involve an aggregation of resources such as people (cities, buildings), energy (nuclear power plants, liquid natural gas and oil tankers, energy pipelines), transportation (airplanes, trains) and other resources subject to disruption and disaster. They also tend to be inefficient, wasteful and harmful to the environment.
Distributed technologies, on the other hand, tend to be flexible, efficient and relatively benign in their environmental effects. The quintessential distributed technology is the Internet. Despite concerns about viruses, these information-based pathogens are, as I have noted, mere nuisances. The Internet is relatively indestructible. If any hub or channel goes down, the information simply routes around it. The Internet is remarkably resilient, a quality that continues to grow with its exponential growth.
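The "routes around it" behavior is just redundant paths plus adaptive path-finding. A minimal sketch (the four-node mesh and node names are invented; real Internet routing uses protocols such as BGP and OSPF, not this toy search):

```python
from collections import deque

def find_path(graph, src, dst, down=frozenset()):
    """Breadth-first search that simply ignores failed nodes,
    mimicking how traffic reroutes around a dead hub."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen and nxt not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable

# A small mesh with two independent routes from A to D.
net = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(find_path(net, "A", "D"))              # ['A', 'B', 'D']
print(find_path(net, "A", "D", down={"B"}))  # reroutes: ['A', 'C', 'D']
```

Resilience here is a property of the topology, not of any single node: as long as one path survives, the search finds it, which is the structural reason a distributed network degrades gracefully where a centralized hub fails catastrophically.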
As I mentioned, there are voices arguing for broad-based relinquishment of technology. Bill McKibben, the environmentalist who was one of the first to warn against global warming, takes the position that "environmentalists must now grapple squarely with the idea of a world that has enough wealth and enough technological capability, and should not pursue more." In my view, that position ignores the extensive suffering that remains in the human world, which we will be in a position to alleviate through continued technological progress. Most important, we need to understand that these technologies are advancing on hundreds of fronts, rendering relinquishment completely ineffectual as a strategy. As uncomfortable as it may be, we have no choice but to prepare the defenses.
What He Thinks About: Optical character and speech recognition, music synthesis, reading technology, virtual reality, financial investment, medical simulation and cybernetic art. He's founded and developed nine businesses in these fields.
What He's Written: The Age of Spiritual Machines: When Computers Exceed Human Intelligence (1999) and The Age of Intelligent Machines (1990).
Where He Is On the Web: www.kurzweiltech.com and www.kurzweilcyberart.com.
Bio Bit: In 2002, he was inducted into the National Inventors Hall of Fame and received the 1999 National Medal of Technology. Kurzweil has made plans with Alcor, the company that currently houses Ted Williams' remains, to have his body cryogenically frozen, stored and then reanimated in the future.