I like to point to software viruses as a test case. With each successive generation we see more and more sophisticated pathogens, as well as more and more sophisticated defenses. Irresponsible engineers are continually creating more sophisticated software viruses, and yet our defensive systems, our anti-viral programs and the technological protections we’ve created, have been able to keep pace. We haven’t eliminated the problem completely. We still worry about it. But, so far, software viruses have been kept to a relative nuisance level, and no one would say, let’s get rid of the internet because software viruses are such a big problem. The benefit we get from the internet and other communication and computational resources is far greater than the damage.
I take some measure of comfort from how we’ve dealt with the SARS virus. There’s a virus that was fairly deadly; it looks like about twenty percent of those infected die. It spreads pretty easily, through the air and through contact, and the virus can live outside the human body for quite a few hours. It has a lot of the characteristics of a really bad virus: it spreads easily and is deadly. And through a combination of high-tech and low-tech means we were able to avoid an epidemic. High tech being the fact that we sequenced the virus’s genome in a matter of days, whereas the AIDS virus took years to sequence, so we could develop a test very quickly. And with the internet we were able to communicate and coordinate our activities very quickly. And we combined this with the old-fashioned, low-tech means of quarantine.
So despite what seems like a lot of chaos in the world, through this really unprecedented level of international cooperation, we’ve been able to keep the SARS virus from becoming an epidemic. There was tremendous cooperation, despite the original hesitation by the Chinese to communicate about it. I don’t think we can declare victory yet, but it certainly looks like we’re winning against the SARS virus. It’s very encouraging that we could actually organize like that, very quickly, and suppress a virus that could have spread around the whole world and become like a flu, killing millions of people, because it certainly seemed to have that potential.
I think ultimately nanotechnology will overcome the dangers of biotechnology, and that artificial intelligence ultimately will overcome the problems of nanotechnology. There’s actually no ultimate protection against malevolent artificial intelligence. I think there’s some possibility we’ll destroy ourselves. I would put the probability well under fifty percent, but I think it is the biggest issue or challenge that human civilization has in the next century. I don’t think we have any choice, though, but to face the challenge. Some people say, well, even if the risk is one in ten, that’s much too high, and we shouldn’t take that risk. So let’s just stop technology. But I don’t think that’s a viable choice. In order to do that we would have to have a Brave New World type of totalitarian government that completely banned what is now totally ubiquitous: the pervasive phenomenon of human progress in science and technology.
Some people say, okay, let’s keep the good technologies. Bill Joy actually said this in his Wired cover story. He said, I, Bill Joy, am not anti-technology. Let’s keep the good technologies. But those dangerous ones that could potentially destroy us, like nanotechnology, let’s just not do those. I pointed out that those destructive technologies are the same technologies as the helpful ones. The same technology that’s going to save millions of people from cancer and other diseases uses the exact same tools that a bio-terrorist could use to create a destructive pathogen. The same nanotechnology that’s going to help overcome poverty, clean up the environment, and improve human longevity and health is the same nanotechnology that can be destructive.
Nanotechnology doesn’t come about because one laboratory is working on it, or ten laboratories. It’s the pervasive end result of hundreds of benign steps. Each step is small and conservative. Texas Instruments creates a higher-resolution computer projector. That doesn’t sound like a particularly dangerous technology, but it’s a step towards nanotechnology, because they created millions of little mirrors that are much smaller than anything we used before. In the process of doing that we’ve learned how to make things smaller. It’s hundreds of steps like that that get us from here to these very powerful technologies. The only way to stop it would be to stop all technology, and I don’t even think that would work.
That would just drive it underground. It would actually create a much more dangerous situation, one that would be less stable, because the responsible engineers wouldn’t have access to the tools to create the defenses, while the irresponsible practitioners would be experimenting with it in secret. I think that actually would be more dangerous, and it would also prevent us from getting the benefits of these technologies. I don’t think we have any choice but to really confront the problem, and develop these technologies in a way that emphasizes the benefits while we try to control the dangers. And I do think we need to increase the priority of dealing with the dangers.
David: What do you think is the biggest threat to the human species?
Ray: We’re not yet on the threshold of these existential dangers from nanotechnology and artificial intelligence. We are on the threshold, though, of profound dangers from biotechnology. I think we should be spending hundreds of billions a year specifically to develop more powerful protective technologies, such as antiviral technologies, which are just beginning to emerge. But we really need to accelerate that, because it’s really a race between the destructive possibilities and our defensive technologies. We need to give a bit of a head start to the defensive technologies.
David: Many past predictions of the future evolution of technology have been disappointing. Why do you think that predictions have been so difficult, and do you think that people will ever be able to improve their predictions of how technology will advance in the future?
Ray: I think most people who talk about the future completely ignore the exponential aspect of it. I was at a conference recently that Time magazine held on the 50th anniversary of the discovery of the structure of DNA. All of us speakers were asked what we’ll see in the next fifty years. Virtually every speaker there, except for Bill Joy and myself, used the last fifty years as a guide to the next fifty. They just intuitively took the amount of change we’ve seen in the last fifty years and applied that to future change, and therefore came up with very timid projections.
James Watson himself said, in fifty years we’ll have drugs you can take that allow you to eat as much as you want and stay slim. And I said, Jim, they’ve actually done that already in mice. They’ve identified the fat insulin receptor gene, and by suppressing that gene these mice ate lots of food, stayed slim, and lived twenty percent longer. They didn’t get heart disease or diabetes. They’ve identified that it’s the same fat insulin receptor gene in humans as in mice, and there are several promising companies racing to develop the same drug for humans. They expect to go into human trials within two or three years, so it’s going to be closer to five years than fifty. All of the predictions, except for Bill Joy’s and mine, were like that. We’re doubling the paradigm-shift rate, according to my models, every decade. So if you do the math, you’ll see that there will be about thirty times as much change in the next fifty years as we saw in the last fifty years. And we’ll see a thousand times more change in the next hundred years than we saw in the last hundred years.
It’s the explosive nature of exponential growth, and, in general, scientists do not take that into consideration. They’ve been struggling with a problem over the past year. They’ve made a certain amount of progress; they’ve solved maybe one percent of the problem. So, based on that, they conclude it’s going to take a hundred years. This issue comes up with the dangers of technology. Bill Joy and I were at a conference, and a Nobel-prize-winning biologist said, we’re not going to see self-replicating nanotechnology entities for a hundred years. I said, that’s actually a good estimate of the amount of technical progress needed to get to that milestone at today’s rate of progress. But because we’re doubling the rate of progress every ten years, we’ll do a hundred years of progress, at today’s rate, in about twenty-five years.
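The arithmetic behind these figures can be checked with a short sketch. Assuming, as a modeling simplification, a rate of progress that doubles smoothly every decade, the total change over an interval is the integral of that rate (the constants and variable names here are illustrative, not from Kurzweil's actual models):

```python
import math

DOUBLING_PERIOD = 10.0  # years per doubling of the rate of progress (assumed)
k = math.log(2) / DOUBLING_PERIOD  # continuous growth constant

def progress(t0, t1):
    """Total change between years t0 and t1 (year 0 = today), measured in
    'years of progress at today's rate'. The rate at time t is 2**(t/10),
    i.e. e**(k*t), so we integrate it over [t0, t1]."""
    return (math.exp(k * t1) - math.exp(k * t0)) / k

# Change in the next fifty years vs. the last fifty years:
ratio_50 = progress(0, 50) / progress(-50, 0)      # exactly 32 under this model

# Change in the next hundred years vs. the last hundred:
ratio_100 = progress(0, 100) / progress(-100, 0)   # exactly 1024 under this model

# How long until we accumulate "a hundred years of progress at today's rate"?
# Solve progress(0, t) == 100 for t.
years_for_century = math.log(100 * k + 1) / k      # roughly 30 years
```

Under this continuous-doubling model the ratios come out to 32 and 1024, matching the "about thirty times" and "a thousand times" figures, and a century of today's progress takes roughly thirty years, in the same ballpark as the twenty-five quoted (the exact number depends on how the doubling is modeled).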
So failing to see the exponential nature of history, and thinking in this type of linear way, is the reason that technological projections fail to comprehend the really radical changes ahead. The projections that I’ve been making, and I’ve been making them now for at least a quarter of a century, are based on models I’ve been developing of