So we’re going to augment our own intelligence. We’ll be able to think faster, think more deeply, develop more profound skills in creating knowledge of everything from music to science, and, basically, become smarter by merging with our technology. One thing to realize is that once non-biological intelligence gets a foothold in our brains, it’ll grow exponentially, because that’s the nature of non-biological intelligence. It doubles every year. That’s what it’s been doing. It’s actually going faster than that. But there’s a second level of exponential growth: there’s the exponential growth itself, and then there’s growth in the rate of that exponential growth.
In comparison, our biological intelligence is not growing at all, although biological evolution is still operating. It’s operating on a time-scale that’s so slow as to be effectively zero. So right now, today, we have 10^26 (ten to the 26th power) calculations per second in the biological brains of our six billion humans. Fifty years from now that figure will still be 10^26 for biological intelligence. Today non-biological intelligence is a good deal less than that, by a factor of millions. But since it’s more than doubling every year, as we get to the 2020s it’ll cross over. And as we get to the 2030s, the bulk of our human-machine civilization’s intelligence will be non-biological.
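The crossover arithmetic above can be sketched numerically. The starting figures below are illustrative assumptions, not exact numbers from the interview: biological intelligence is held fixed at 10^26 calculations per second (cps), and non-biological intelligence starts “millions of times” lower (here a factor of ten million, i.e. 10^19 cps) and doubles each year from an assumed baseline year of 2005.

```python
def crossover_year(start_year=2005, bio_cps=1e26, nonbio_cps=1e19):
    """Return the first year in which non-biological cps exceeds
    the (assumed constant) biological total.

    All starting values are illustrative assumptions.
    """
    year = start_year
    while nonbio_cps < bio_cps:
        nonbio_cps *= 2   # "more than doubling every year"
        year += 1
    return year

# Closing a 10^7 gap takes 24 doublings (2^24 ≈ 1.7 * 10^7),
# which under these assumptions lands in the late 2020s.
print(crossover_year())  # → 2029
```

The point of the sketch is only that a constant doubling closes even a millions-fold gap in a couple of decades; changing the assumed gap or baseline year shifts the crossover by just a few years.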
David: Do you think that a computer, or a non-biological form of intelligence, can develop consciousness or a sense of self-awareness? If so, how do you think this might come about?
Ray: People use the terms consciousness and self-awareness in a very ambiguous way. If we’re talking about something that we can observe–as in a third party looking at another entity, and asking, “gee, does that entity have a concept of itself?”–that’s an objective observation, and certainly these machines will do that, because they’re going to do everything that humans do. They’ll actually understand themselves better than we understand ourselves. I mean, how well do we understand how our intelligence works? We actually don’t understand it very well. These machines will understand themselves very well. They’ll understand how humans work very well. So, in that sense, their self-awareness will be much greater than human self-awareness is today.
But that’s not actually dealing with what David Chalmers calls the “hard problem of consciousness”, which is this first-person sense we have of actually experiencing the world in our subjective states. It’s a first-person phenomenon, and it’s not a scientific question. There’s no scientific test you can administer to an object where some green light goes on that means, okay, this entity’s conscious, or no, this one’s not conscious. There’s no consciousness detector you can create that doesn’t have some philosophical assumptions built into it.
I’m aware of my own consciousness, but beyond that I can only make assumptions. I assume that other humans who seem to be conscious are conscious, but that’s an assumption on my part. Other human beings seem to be making a similar assumption. That shared human consensus breaks down as you go outside of human experience. Human beings don’t agree on whether animals are conscious. Some human beings say that the higher-order animals are conscious, that their pets are conscious. Other human beings say, no, animals operate by instinct, which is some kind of primitive machine-like reaction; they’re not really conscious. That’s the whole controversy underlying the animal-rights and animal-suffering debates. And we’re going to have that controversy with non-biological intelligence.
But right now, when machines claim to be conscious, or to have feelings, they’re not very convincing. There are machines that do that. If you go to your kids’ video games, there’ll be virtual creatures there that claim to be angry, happy, or whatever. They’re not very convincing today because they don’t have all the subtle features, reactions, and understanding of language that we associate with really having those feelings. But when we get out to 2030, we’re going to be encountering completely non-biological entities that do have the subtlety of humans, that really are convincing. And it’s going to be a philosophical debate as to whether that’s just a really convincing simulation of a conscious entity, or whether it’s really a conscious entity–or whether there’s even a difference between the two. Now, some people say that since it’s not a scientific question, it’s just an illusion, and we’re wasting our time talking about it.
My own view disagrees with that. The reason there’s room for philosophy is precisely because the question is not scientific. Science can’t answer every question, and this is really the ultimate ontological question. It’s also extremely important for our whole system of morality, and for our legal system. Our whole legal system is based on the idea of suffering. If you cause suffering to someone by assaulting them, that’s a serious crime. If you end their consciousness by murdering them, that’s a serious crime. It’s okay to destroy property if it belongs to you, provided society doesn’t have some interest in it–like it’s not a historical house or something. But I can’t go and destroy your property–not because of the suffering I’m causing the property, but because of the suffering I’m causing you, as the owner of that property. It’s all based on there being a conscious entity who has the potential for suffering, and whose conscious experience is affected by these actions. There’ll come a point where that question will arise with regard to machines. It’s arising already with regard to animals.
David: If computers do achieve sentience, how do you think corporations and the military might use these conscious computers?
Ray: Certainly the military is today, and will continue to be, a user of cutting-edge technology. This is because the civilization, or society, with the most advanced technology has always been dominant in terms of military and economic power. That’s been true throughout history. In my first book, The Age of Intelligent Machines–which I wrote in the 1980s–I said that the 1990s would be characterized by machines of increasing intelligence in warfare, and that ultimately the society with the most advanced software, and the most intelligent machines in its military, would dominate. And we’ve certainly seen that. In the 1991 Gulf War we saw the first use of intelligent weapons–about 5% of our munitions were intelligent. In the recent Iraq war about 95% were smart weapons, and we can certainly see that intelligent weapons completely trump non-intelligent weapons.
The military is already working on the next generation of very tiny weapons–not quite nanotechnology, but something called “smart dust”. These are weapons literally the size of insects, or even smaller. Millions of these devices can be dropped in enemy territory. They can fly around, do surveillance, and ultimately actually carry out destructive missions. And these weapons will get smaller and smaller, and more and more intelligent. Now, whether or not they’re conscious is a completely different issue, but using the most advanced technology in general is a key to military success. It always has been, and we’ve certainly seen dramatic examples of that in recent warfare.
With regard to knowing when intelligent computers are conscious: first of all, it’s not a clear-cut issue. People sometimes talk about it as if it’s something we can clearly test. It won’t be clear-cut, because, as I say, it’s not a scientific issue–there’s no scientific test you can just apply to a system to ascertain that it’s conscious. Now, I do believe, and I have predicted, that society will come to accept that non-biological intelligence is conscious. That’s not a philosophical statement, however; it’s a political prediction, and a psychological prediction.
These machines will be intelligent enough to convince us that they’re conscious, and they’ll get mad at us if we don’t believe them. And they’ll have power, because they’ll be very smart, and intelligence really is the ultimate power in the universe. So these machines will be powerful, and they’ll have agendas. One of their agendas will be for us to take them seriously. I believe they’ll win the debate, and again it’s not an us-versus-them phenomenon, because we will become them. We’re going to be incorporating non-biological intelligence within ourselves, and becoming increasingly dominated by non-biological intelligence.
Now, when people hear that, they think it’s a negative thing. They think, oh my God, I’m going to become a machine, and then they think of the machines they’ve known. And the machines they’ve known are much lesser entities than human beings, because all the machines people have experienced are, in fact, millions of times simpler than human beings. But the machines I’m talking about are quite different. People haven’t met those machines yet. They’re not lesser than human beings. They will be