David Jay Brown
Hans Moravec is the founder of the Mobile Robot Laboratory at Carnegie Mellon University, where he directs the world’s largest robotics program. He received his Ph.D. from Stanford University, and is the author of Mind Children and Robot, two of the most mind-stretching books I’ve ever encountered. He predicts that by the middle of the 21st century extremely powerful robots will be built with super-human intelligence. He has also suggested that one day we may be able to transplant our brains into powerful robot bodies, and transfer the contents of our minds into extremely sophisticated computers.
Moravec envisions robot physicians in the future that will be able to repair virtually any type of damage to the human body. These “fractal branching, ultra-dexterous bush robots” would be composed of “a branched hierarchy of articulated limbs, starting from a macroscopically large trunk through successively smaller and more numerous branches, ultimately to microscopic twigs and nanoscale fingers.” Moravec suggests that “even the most complicated procedures could be completed by a trillion-fingered robot, able, if necessary, to simultaneously work on almost every cell of a human body.”
I spoke with Dr. Moravec on March 13, 1999. The interview lasted over two hours, and was a great deal of fun for me. He spoke about the current state of robotics, the nature of consciousness, how robots might evolve in the next century, and life-after-death. Moravec possesses that rare whole-brain synergy that comes when technical expertise is coupled with boundless imagination. He seems to genuinely love speculating about consciousness and robotics, and he laughs a lot.
David: How did you get interested in robotics?
Hans: That’s life long. These days I’ve been telling people the story of when I was four years old, and my father helped me build a dancing man. I had this mechanical construction kit, made of hard wood, pegs and pulley wheels. And there was a device that especially caught my attention. You turned the crank, and a central wheel inside of a box turned another wheel at right angles. That moved up and down, and turned round and round. And a peg went up to the top of the box to a man made of blocks. There was a body and a head, with arms and legs that swung, and as you turned the crank the man danced. The vivid impression was that we had something that was alive, or almost alive, made up of totally inanimate parts. (laughter) And I have been pursuing that ever since.
David: What was your inspiration for writing Robot and Mind Children?
Hans: That related back to thinking that I had done in high school. I was arguing with a friend who liked to take contrary positions, just to get things livened up. After we’d been talking about robots for quite a long time, he suddenly said, “Well, I don’t think a robot can think. It’s just mechanical parts, and it behaves mechanically.” A good arguing position, one that many people take all the time. I thought real hard how to counter this, and came up with an idea. You could start with a human being, and replace the parts of the human being one-by-one with functionally equivalent parts, but strictly artificial parts.
I think at the time I said, you could replace neurons with transistor circuits. And if the parts were truly functionally equivalent, what you ended up with, after replacing the entire human being bit by bit that way, would be a thing that still behaved like a human being, and had whatever properties the original human being had–at least in terms of interaction, and presumably the thought behind the interaction. There’s no point at which that should have gone away. So then I said, well, do that again, but this time don’t start with the human being. Just put the parts together in that same exact order, from the ground up, and then you have strictly a robot.
So in the first place you have a human being that just has a lot of prosthetics, and in the second case you have a robot built from scratch, but with the same properties. So it was kind of an argument, just to counter that position. But I thought about that scenario, and many other robot scenarios.
When I got to Stanford in the early Seventies, I actually came in on the tail end of a discussion that had been going on there, based on a newsletter that a few people had received, which talked about the possibility of replacing the brain parts of a person with mechanical equivalents, so that you could get around a lot of the mortality of the biology, and that revived thinking for me. Pretty shortly thereafter I started writing some essays–not just about replacing brain parts with their mechanical equivalent, that was only icing.
I had to write an essay for a qualifying exam. At the time I had already started some arguments with my advisor and other people, but especially with my advisor, whose position was that the amount of computation we already had–those computers could do about a million instructions a second–was more than adequate to get full human intelligence, if only we had clever enough programs.
This is maybe a reasonable position for somebody who was worried about the reasoning part of intelligence, because over the previous decade some pretty successful programs had been written that could solve algebra and calculus problems, do integrations, and prove theorems in pure logic or geometry. They could do intelligence test problems, could play games pretty well–not super well in most cases, but just about as well as college freshmen. And it looked like a number of techniques had been found that greatly speeded up such programs.
For instance, in game playing there was this thing called the Alpha-Beta procedure, which pruned down the game tree to approximately its square root. There was a lot of hope that there would be more tricks like this that could still be found, if we were just clever enough or worked hard enough at it. But I’d been doing robotics, in particular computer vision for a robot, and my advisor didn’t work at all in that area. I had trouble with the idea that one million calculations a second would be enough for human level intelligence. Just to process a picture, you start out with basically a million numbers describing the grey levels in the scene.
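The Alpha-Beta procedure Moravec mentions can be sketched in a few lines of Python. This is an illustrative implementation, not anything from the interview: the game tree here is just nested lists whose leaves are static position values, and the cutoff test is what lets the search skip siblings that cannot change the result.

```python
def alphabeta(node, alpha, beta, maximizing):
    # Leaf node: return its static evaluation.
    if isinstance(node, int):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # cutoff: remaining siblings cannot matter
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:   # symmetric cutoff for the minimizing player
                break
        return value

# A two-ply game: the maximizer picks a branch, the minimizer picks a leaf.
tree = [[3, 5], [2, 9]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 3
```

With well-ordered moves, cutoffs like the one on the `[2, 9]` branch above (the 9 is never examined) are what reduce the searched tree to roughly the square root of its full size.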
And to do the simplest thing with that picture, for instance to find a contrasting area in it, you had to scan the entire picture. Since there are a million numbers, you have to do quite a few million calculations to do anything resembling human vision. Actually you have to do much, much more than that. So you’re talking about millions of calculations just to process a mere glimpse of the world. But on a million-instruction-per-second computer that takes many seconds or minutes; in fact, a lot of our programs were taking hours to process a single picture. Human vision, though, works at the rate of about ten frames a second, which is about the rate at which you can follow motion. So there must be vastly more computation going on inside a person than this one-million-instruction-per-second thing.
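The back-of-envelope arithmetic behind this argument is easy to make explicit. The per-pixel operation count below is an illustrative placeholder (Moravec gives no exact figure here), but even a token value shows the shortfall:

```python
pixels = 1_000_000           # grey-level samples in one image
ops_per_pixel = 10           # token estimate for even a trivial local operation
frames_per_second = 10       # roughly the rate at which humans follow motion

ops_needed = pixels * ops_per_pixel * frames_per_second
machine_ips = 1_000_000      # the era's million-instruction-per-second computer

print(ops_needed / machine_ips)  # -> 100.0, a hundredfold shortfall
```

And that is for a single crude pass over the image; anything resembling real visual interpretation multiplies the requirement by orders of magnitude more, which is the gap the essay's "million times" estimate points at.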
So I marshalled together more arguments in that line, and wrote an essay that initially was titled “The Role of Raw Power in Intelligence”, arguing that we needed about a million times as much computation as we had to do what the nervous system did. Probably a lot of the thinking we did involved visual processing. I’m a visual thinker myself, so this was a natural for me. This doesn’t involve just chasing down logical inferences, but involves visualizing the problem. A lot of the power in our thinking comes from mentally mapping problems into things like visual or perceptual metaphors.
I think Einstein actually felt he sometimes wrestled with his formulas. This means the formulas had mental arms and backs (laughter). If that processing in our heads is the equivalent of about a million times as much computation as we had, that probably would be at least a partial explanation for why some things in artificial intelligence were proving so intractable. We were just vastly underpowered.
I wrote this essay and then I extended it with some more scenarios, including the prospect of downloading, converting, or basically transferring a human consciousness into a machine–just as one of many scenarios, some of which started with a human being and some of which didn’t. I got some response to that, by handing it out, and published it in Analog Science Fiction magazine, in the late Seventies.
In the Eighties I was writing more derivations and expansions of these ideas, in essays and articles for a few chapters in books and things like that. But I was feeling unsatisfied that the totality of the ideas wasn’t really being expressed properly in these short pieces. So in the early Eighties I decided I really needed to write a book. I guess in the Seventies I’d already decided to do it, but hadn’t really got started.
Another spur to this was that in the Seventies I had read lots of books. One of them was Carl Sagan’s Dragons of Eden, which was a bestseller. There were a lot of things about it that I liked, but also a lot of things I thought were too short-sighted or too conservative, such as paths that he should have extended and didn’t. Or positions that he took that I thought were just not courageous, about the nature of intelligence beyond human intelligence.
David: Or just not that imaginative.
Hans: Yes, right. Robots, of course, were not primarily on his mind. He was thinking of extraterrestrial intelligence, but also assuming that it would be biological, which annoyed me (laughter). So I wanted to make a case for these other ideas. In 1985 I decided that if I didn’t get started soon this would just go on, I would just keep on wishing I would have done this forever.
So I started assembling all the essays I had already written, and organizing a book a little bit without having a publisher or anything. Then, coincidentally that year, a letter arrived from an editor at Harvard University Press inviting me to write a book, based on some of the essays he’d seen in various places. So I wrote him back saying, “Your timing’s excellent.” Then I started writing and seriously working on the Mind Children book.
David: One of the things that fascinated me about your books was the philosophical speculation about consciousness. Do you think that we will ever have a true scientific measurement of consciousness?
Hans: The position I take in the new book is that consciousness is not really an objective property–though ultimately I decide nothing is an objective property. Existence itself is subjective. But before you get to that stage, you can look at an entity like a robot that maybe exhibits behavior that could be interpreted as conscious. But you can look at it in a strictly mechanical way if you want, especially if you understand enough of the details of the internal mechanism. You might be able to fully explain its behavior in purely mechanical terms, such as cause A produces effect B, which produces internal cause C, which produces D, and so on. Just a chain of simple causes and effects, and that explains everything about the robot.
So some people look at machines and say that’s all they are, and therefore they can’t be conscious, because they’re just mechanical. But I answer that you could also look at a human being that way, if you understood them well enough. Neural signal A causes electrochemical events B, and so on, and just a chain that way. I even imagine that some day there will be entities able to process vastly more data than we can, and they could be intelligent enough to look at us exactly that way, as if we were just these clockwork mechanisms. They could interact with us on that basis without ever forming an interpretation about our thoughts or feelings or so on. We’re just causes and effects.
Interacting with such a thing would be interesting, because, probably, most of the time, it wouldn’t be that different from interacting with a person. If that super-smart entity wanted you to do something, it would calculate what mechanical things it should do to you to cause you to do that thing. But probably the easiest mechanical thing it could do to you is make certain sounds at you, such as “please pick up that thing.” Then, calculating the effect in the long run, it would probably also say “thank you” afterwards (laughter). In its mind that would just be a string of craftily constructed sounds, right? That wouldn’t have the psychological implications that we put on it, but yet it doesn’t really matter, does it? Its interaction with us would be effective.
The only reason it would be strange is because sometimes they would be able to figure out something to do to us that’s not the usual kind of interaction that our psychological models would suggest. So maybe there’s a certain song they could sing, having nothing to do, as far as we’re concerned, with what we ended up doing in response to the song, right? It might seem like some kind of subtle mind control, because that’s a path that isn’t contained in our psychological models of each other.
So, in the same way that you can look at a human being either psychologically, as we’re able to, or mechanically, as we’re not quite up to yet, you should also be able to look at a robot in various ways–including the mechanical, as the engineer that built the robot probably would, or in the psychological way, as probably most people interacting with the robot on a casual day-to-day basis would. If the robot says, “My energy stores are low, and I’m really feeling down today. My servos on my right side are not functioning correctly, and I’m just not feeling well,” then, ultimately, it would take a very hard heart basically not to sympathize a little bit–especially if it does this consistently, and also asks you about your feelings (laughter), and responds to the answers you give in the appropriate ways.
I think that’s completely reasonable. You can map psychological properties onto the behaviors of the robot regardless of the mechanism that causes them, because, really, who cares what the mechanism is? There is a mapping from the psychology to what it does, that makes complete sense, and is for us undoubtedly the most effective way of interacting with a machine. So I’m saying the psychological properties are not really an objective thing. They’re a way of looking at something. Once you are open to that you realize that you can actually look that way at lots of things, if you wish to. Sometimes it’s not the most effective way.
Basically we have mental mechanisms for dealing with things in the world. We have one set of mechanisms for dealing with inanimate objects. They tell us how to pick up sticks, throw stones, put things together, make houses, and so on. But we have another set of mental mechanisms for dealing with the other people in our tribe. There we worry about whether they like us or don’t like us. Or how we feel about them, and whether they’re in pain. We feel for them when things like that happen. Or maybe we’re angry at them, and we enjoy it when they get hurt, and so on.
Those are a different set of tools, and usually we keep them kind of separate. We’re actually upset when somebody inappropriately uses the mechanical interpretations on us. One of the things you can do under the mechanical interpretations, and it’s perfectly all right, is to hurt inanimate things, to break them, whereas when you do that to living things there are more serious consequences–because they might fight back, or their relatives might come and get you, or whatever.
So in day-to-day life it’s often dangerous. In fact, if, because of some mental defect, somebody tends to treat other human beings in this mechanical way, we usually call that psychosis. Those type of people can be dangerous because they have no feelings. So I think some of the natural defenses against such things get called into play when people talk about building robots, and then interpret them in human ways. As a corollary, you might be able to interpret humans in mechanical ways, and that could be a dangerous thing in society. We have instincts for that, because there are ways in which that could be done where it is indeed dangerous, things that have presumably come up regularly in our evolutionary history. So we have instincts that tend to make us defend ourselves against that kind of thing.
David: How do you think the internal experience of consciousness is created?
Hans: Oh, there’s another aspect to the interpretation of consciousness, of basically something having a mind, namely that it has beliefs and feelings. Those are also attributions. But one of the other things that you attribute to it is the ability to make such attributions. When you look at something in a psychological way, and it’s something that you interpret as complex as a human being, then it can look at other things and basically project psychological properties on to them. That’s part and parcel of the interpretation, that it’s able to make those kinds of interpretations, and it’s able to make those kind of interpretations about itself. Within the abstract interpretation of psychological properties is the ability to make abstract interpretations of psychological properties, and also of itself.
So you have this cycle where the being is itself believing itself to be conscious, and believing itself to have feelings, and feeling itself to have feelings. All right, so I think that’s what it is: it’s a way of looking at the world and a way of looking at ourselves, which includes, of course, seeing ourselves as having the ability to look at things. So it’s no more real and no less real than that, and you can have that in a program. In one way of looking at it, you can sort of prime the pump, in that you build a machine from the ground up, and it’s all mechanical at first, because that’s all you’d built.
You just build mechanisms that affect other mechanisms and act in a certain way. But you’ve built it in such a way that it’s easy to make the interpretation that certain parts of this mechanism represent beliefs. Like there’s a string of memory cells here, and you interpret them in whatever language is being used to store things as meaning something. Some of the meanings are “I believe this. I believe that A is B.” Then other memory cells represent feelings. So if some number is zero I feel good, and if some number is large I feel bad. These are most natural if you have a framework in which certain kinds of actions result from the states of the beliefs being such and such.
I describe that in chapter 4 of the new book: how to build up the right structure through a series of layers. The first generation universal robot has basic functionality. The second generation has a conditioning system, which causes certain events to reinforce behaviors it has performed, and other kinds of events to suppress, in the future, behaviors it performed in the past. So you basically have a thing that you can interpret as finding events desirable and undesirable, which shows up in a very clear way in the behavior. Then you have a third layer in which there is a simulation of the world, and you can look at the elements of the simulation as beliefs about the world. Then there’s a fourth layer in which those beliefs are made even more explicit as propositions, as things that are used to reason about.
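The second-generation conditioning layer can be sketched as a toy program. Everything here is illustrative rather than from the book: behaviors carry weights, rewarding events raise a behavior's weight, punishing events lower it, and the shifted weights show up directly in how often each behavior is chosen.

```python
import random

class ConditionedRobot:
    """Toy sketch of a second-generation conditioning layer: events that
    follow a behavior reinforce or suppress that behavior in the future.
    Behavior names and numeric values are purely illustrative."""

    def __init__(self, behaviors):
        # Every behavior starts equally likely.
        self.weights = {b: 1.0 for b in behaviors}

    def act(self):
        # Sample a behavior with probability proportional to its weight.
        total = sum(self.weights.values())
        r = random.uniform(0, total)
        for behavior, weight in self.weights.items():
            r -= weight
            if r <= 0:
                return behavior
        return behavior

    def condition(self, behavior, reward):
        # Positive events reinforce; negative events suppress (floored so
        # the behavior never vanishes entirely).
        self.weights[behavior] = max(0.1, self.weights[behavior] + reward)

robot = ConditionedRobot(["approach", "retreat"])
robot.condition("approach", +1.0)   # a good outcome followed "approach"
robot.condition("retreat", -2.0)    # a bad outcome followed "retreat"
print(robot.weights)                # -> {'approach': 2.0, 'retreat': 0.1}
```

An observer watching only the behavior would naturally describe this robot as "liking" approach and "disliking" retreat, which is exactly the point: the desirability interpretation sits on top of a plain mechanism.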
David: So you think consciousness occurs in the stage where the robot begins modelling the world?
Hans: Well, the third generation is the stage in which you can interact with the robot in such a way that it can actually describe how it feels, because it plays scenarios in its head. The scenarios produce conditioning effects from the second-generation conditioning system. If the appropriate words are attached in the obvious way to negative and positive conditioning, it can already tell you that it likes this and dislikes that. If the third generation’s world model also includes psychological descriptions of actors in the world, then I think the model that the third generation robot builds actually has three kinds of information about the world.
One is strictly physical. For instance, the robot can model that if it drops something it would fall, and if it spilled water it would spread, and so on. Then there would be cultural information, which is the meaning and the use of various things in the world, so you don’t use the fine china to empty the toilet and so on. Then there’s the psychological description of the world, for things that are actors, primarily human beings and probably other robots of its kind, which is a shorthand way of describing how they behave, because the full description at the mechanical level is just much too complex.
So the robot can no more have a neural description of a human being than you or I could. But it could have a description which says John likes tea, and likes to sleep, and does not like red furniture, and so on. Also for psychological states the robot could infer things like John is happy right now, or John is angry. And those same psychological models could be applied, and probably tuned a little bit, to other robots, and even to the robot itself. So it could examine its own behavior using these psychological models and say, I don’t like to fall down the stairs, or I like to please my owner, because that exactly summarizes the behavior that it has.
This means that you could have a conversation with it about what it likes, what it doesn’t like, what you like, and what you don’t like. It could also relate to you events in its past that illustrate these states of mind. I think it would be no trick at all to begin to empathize with it, and to say, well, this is actually an interesting person, and clearly conscious. It would take great mental effort to keep reminding yourself, well, this is not really consciousness. This is just the operation of this program behind it. In fact, it would make your interaction with the robot much less effective if you kept interrupting yourself with that kind of irrelevancy.
Now, the third generation robot has only a very literal kind of knowledge about the world. Everything that it thinks about is in terms of concrete objects, specific cups, specific tables, specific kinds of motion, and so on. So it’d be a little simple-minded when you were speaking with it. You couldn’t talk to it about large generalities.
The fourth generation robot adds real intelligence to that by having a layer in which things extracted from the simulator can be abstracted and reasoned about. There’s an interesting interaction, though, between the reasoning system and the hard simulation, which is that when you go through an abstract chain of reasoning, the main way you make the reasoning abstract is by leaving out certain details, just using other aspects of the world situation and deriving results from there on the basis of rules of inference.
But sometimes you leave out the critical things, and it’s not obvious at first that what you left out was important. So the intermediate results of chains of reasoning can be brought back and instantiated in the simulation of the world, to see if they actually make sense there. So if some chain of reasoning has led the robot to believe that you could support, let’s say, a glass by standing it on a broom, then trying that in a simulation would show that in fact that doesn’t work. The thing will always fall down. Each instance that’s tried in the simulation wouldn’t work, so the robot could then just disregard that particular chain of inference, and save itself a lot of effort coming up with more derivations from it, which would be similarly nonsensical.
There have been programs in the past, in the Sixties, that were able to do things like that in much simpler domains. One of the best is the geometry theorem-proving program of Herbert Gelernter. He wrote a program that did formal inference, going from Euclid’s propositions and proving theorems. But as it did each step in such a proof, it also in parallel drew the equivalent of a diagram, actually using analytic geometry, representing points as two numbers, with X Y coordinates, and lines as pairs of such points. It tested whether two lines intersected by doing the appropriate kind of arithmetic with the coordinates of the end points, and checked whether two lines had the same length by calculating the sum of the squares of the differences in X and Y.
It checked, within numerical accuracy, whether the things it was trying to prove, such as that line A equals line B in length, were actually true in the particular instance it had drawn. The drawings were all approximate, since the numbers were not computed to infinite precision, so if two lines were the same they had to be the same within six decimal places or whatever. But if they were, then it was still plausible to continue the proof, trying to prove that they were the same.
But if they were not the same in the diagram, then there’s obviously no point in going on with that line, and that was extremely important. That’s what made Gelernter’s program so good: it was able to prune the vast majority of logical directions, because they didn’t actually work in the specific examples, so obviously they were not true in general. So the fourth generation robot, I think, will work that way too, only reasoning about much more complex things, like maybe the physical world around it.
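The diagram-filtering idea Moravec describes can be sketched in a few lines. This is a loose illustration of the principle, not Gelernter's actual code: before trying to prove two segments equal symbolically, check them in one concrete numeric diagram, and prune the goal if the numbers disagree.

```python
import math

def length(p, q):
    # Euclidean distance from the coordinate differences.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def plausibly_equal(seg_a, seg_b, tol=1e-6):
    """Gelernter-style filter: equal within tolerance in the concrete
    diagram -> the symbolic proof attempt is worth pursuing; clearly
    unequal -> prune that whole line of reasoning."""
    return abs(length(*seg_a) - length(*seg_b)) < tol

# A concrete diagram of an isosceles triangle:
# apex at (0, 2), base corners at (-1, 0) and (1, 0).
apex, left, right = (0.0, 2.0), (-1.0, 0.0), (1.0, 0.0)

print(plausibly_equal((apex, left), (apex, right)))   # the two equal sides -> True
print(plausibly_equal((apex, left), (left, right)))   # a side vs the base -> False
```

The second check is the payoff: a false goal fails in the specific instance, so the prover never wastes effort on a theorem that could not be true in general.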
David: The idea that fascinated me the most in Mind Children, was what you said was the inspiration for writing it. You discuss the possibility that, gradually, section-by-section, we may be able to download our personality, memories, and sense of self into a superior electronic computer.
Hans: Yeah, a lot of people like that. I’m actually a little off on that myself, in that it sort of strikes me now as building a car by starting with an ox cart. And the ox cart is us, the old design, back to the stone age. Then replacing the wheels with rubber tires, the ox with a motor, and the sideboards with sheet metal panels. When you’re all done, you have something better than an ox cart, but you still don’t have a very good car. If you were to sit down instead with a fresh drafting board, or at a design screen, and, using the best engineering knowledge you had available, design a car from the ground up, then you could build a much better car than by upgrading the ox cart.
David: It wouldn’t be a better design, but the idea is that we could transfer ourselves into it.
Hans: I think of that as kind of a frivolous thing to do. I mean, we’ll do it probably, but it will be like a tourist thing. It’ll be like a love boat cruise compared to real exploration. I think we’ll do it for amusement, but it won’t have a serious impact on the future.
David: Another thought-provoking idea that you discuss in Mind Children is the possibility of completely scanning every aspect of someone’s brain and body, and nanotechnologically composing an identical copy of that person. How do you think the original person and the copy would interact?
Hans: I don’t think there’s any problem there. Exactly what would happen is what you think would happen. There would be two of you that would both think they’re you. There’s no problem there. That’s just the way that it would be, and you can imagine the same kind of scenario with other similar technologies, like the Star Trek transporter. What if you had two receivers? I see no reason why that’s not possible, and initially you’d have an exactly identical twin.
David: But we know from identical twin studies that the twins usually have very different personality types.
Hans: Well, the thing is that they were possibly identical when the ovum first split, but after that they had different histories. This copy that we’re talking about would have the same history up until the point that the duplication process happened, and only then would they begin to diverge. So initially they’d be extremely similar, practically identical, and it requires you to rethink or readjust your intuitions about what identity means. But that’s all. I think the problem is with your intuition, not with the scenario in any way.
David: What do you think happens to human consciousness after the death of the body?
Hans: In chapter 7 of Robot I develop some of what seem to be further consequences of my way of looking at consciousness. Basically I assume that a good simulation can be conscious just like we are. In fact, in some ways I look at ourselves as just a kind of simulation. We’re a conscious being simulated on a bunch of neural hardware, and the conscious being is only found in an interpretation of things that go on in the neural hardware. It’s not the actual chemical signals that are squirting around; it’s a certain high level interpretation of an aggregate of those signals, no different in that respect from other interpretations, like the value of a dollar bill.
It’s not intrinsic in the dollar bill. It’s an interpretation, an attribution that you make on to that. And that works because a lot of people make it, so you’re able to exchange the dollar bill as if it actually had that value. But there’s nothing intrinsic in the twenty dollar bill that makes it worth twenty times as much as the one dollar bill. In some other society it could just be the other way around. They might treat the pattern that’s on the one dollar bill as being worth twenty times as much as the pattern that’s on the twenty dollar bill. It’s an external attribution.
And beauty, to give you another example, is in the eye of the beholder. The aliens from Rigel 4 might not find the Venus de Milo quite as beautiful as you do. (laughter) She lacks the requisite sixteen tentacles.
David: Oh, so think of how repulsive she must be with those missing arms!
Hans: Right. Actually, what could be more horrible than that? (laughter) So I think consciousness is the same kind of thing. It’s an attribution that we make on to–not so much the mechanism itself, because we didn’t even know about those neurons until very recently–the behavior that we interact with. The only thing that’s tricky though, that somewhat makes consciousness different, is that it includes within that interpretation the ability to make interpretations. So the conscious being is able to interpret itself as conscious. It doesn’t need people outside saying you’re conscious. It can say to itself, I’m conscious. Of course, that’s only meaningful under the right interpretation. (laughter)
If you look at that person saying I’m conscious, but you look at them in a strictly mechanical way, they’re just making meaningless noises–a mechanism that’s built to make noises like that. So you have this rather abstract property, and it really is an abstract property of consciousness. It’s not the physical thing itself where the consciousness resides. It’s in the abstract interpretation, which, in the case of consciousness, is self-closing. It is being made up by itself, as well as, presumably, by other beings.
I see no reason why you couldn’t do exactly the same thing for a robot, or for an abstract simulation. So you have a person who’s really just a simulation inside of a computer, but they interpret themselves as having thoughts, feelings, beliefs, and they feel themselves to be real and to experience their existence.
Now, if it’s a simulated human being, then they wouldn’t be very happy probably unless they also had a simulated body to go with it, so that they could feel their extremities, and sense things. Of course, in order to sense things, there has to be something to sense, and you also want a simulated world for them to live in. This whole scenario makes my point. So now if it’s all done in one computer, you have a simulation of a person’s mind, a person’s body, and of a world for that body to live in. The whole computer can live inside of a featureless box.
A computer engineer who encounters this box without any special knowledge of how it got to be programmed the way it is, wouldn’t really see anything special there. He would look inside, perhaps, and see the program counter counting the memory locations that are happening, and where instructions are coming from. He would look at various portions of memory, and the numbers would be changing, just like they do in any program.
David: Meanwhile, a whole lifetime of adventure is going on inside.
Hans: Right. There would be nothing notable. There would be no appreciation that there’s a person in there suffering, or enjoying life immensely, having daily experiences of deep significance. Only those people with the interpretation, perhaps, that the original programmer had, might be able to see that person in there. Of course, that person in there experiences their own existence regardless of what people outside are seeing or not seeing.
Now, to make this whole thing more explicit, imagine that there’s a second computer which is able to interface with the first computer, through a network or something. The second computer has in it the means to take numbers from the first computer, and interpret them in the way that the original programmer meant, so that it’s able to produce a picture of the world inside the simulation. You can see the person living their life, and experiencing things. You can hear them speak, possibly, and you could even listen in on their thoughts. So if you attach this device to the first box, and look at the screen, there’s the interpretation for you. So there’s no doubt there’s interesting things happening in there, and that there’s really a person in there.
Now, imagine that you could change the representation for a simulation. The next step in the reasoning is just to pick a simpler example. Let’s say in a simulation of fluid flow, you could have certain memory cells represent the pressure, the momentum, and the temperature of little bits of fluid. You have the way it’s usually done, but there’s other ways of doing it too. You could have variables instead represent the intensity and the phase of pressure winds throughout the whole liquid. If you have enough numbers representing all the possible pressure waves, then that’s all you need. That can fully represent the fluid also. You don’t need the original numbers that represented the localized pressures and temperatures.
So you can convert a simulation from the space domain into the frequency domain, and in doing so you’d utterly change the kinds of numbers that are being stored in the memory. You utterly change the way the program that changes those numbers looks. But, if you were clever, you still have a way of interpreting the result so that it looks just like the interpretation that you had of the original formulation.
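The conversion Moravec describes–the same fluid stored either as local values or as wave coefficients–can be sketched in a few lines of Python (my own toy illustration, not anything from the interview). The stored numbers change utterly under the transform, yet the matching inverse interpretation recovers the original picture exactly:

```python
import cmath

def dft(xs):
    """Discrete Fourier transform: local values -> wave coefficients."""
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * i / n) for i, x in enumerate(xs))
            for k in range(n)]

def idft(cs):
    """Inverse transform: wave coefficients -> local values."""
    n = len(cs)
    return [sum(c * cmath.exp(2j * cmath.pi * k * i / n) for k, c in enumerate(cs)).real / n
            for i in range(n)]

# Toy "fluid": local pressure values at 8 points (the space domain).
pressure = [1.0, 2.0, 0.5, 3.0, 2.5, 1.5, 0.0, 2.0]

# The same state re-encoded as wave coefficients (the frequency domain):
# the memory now holds completely different-looking numbers...
waves = dft(pressure)

# ...but the matching interpretation recovers the original fluid exactly.
recovered = idft(waves)
assert all(abs(a - b) < 1e-9 for a, b in zip(pressure, recovered))
```

Any invertible re-encoding would serve the argument; the space-to-frequency transform is just the example he gives.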
So imagine that it’s possible to do a mathematical transformation of the numbers changing in that first box containing the person to some entirely different set of numbers. But then you make the analogous mathematical transformation in the viewing box, which you use to peek into that first box. Then the person’s still in there, still undergoing their life, even though what the computer’s actually doing is utterly different. Now there’s no limit to what kinds of changes you can make to that first box, and still retain your image of the person. One of the most general ways of showing that is to imagine that the interpretation box is made of a big look-up table.
A look-up table just says, if the first box has in all of its memory cells the following huge number, then it means this. If it has this other big huge number, then it means that. And so on. There’s a huge table. It has an astronomical number of entries in it. So for every possible state of the memory cells in the first box there’s a meaning, which ultimately translates into some picture on the screen with sounds and so on. So by putting the appropriate look-up table in the interpretation box you can transform the simulation into anything. One extreme thing that you can transform it into is a simple counter–that just counts one, two, three, four, five, six.
But then in the interpretation box you have this giant look-up table that says, one means the person is sitting down right now and they’re sort of tired. Two means they’re just starting to get up, and three means they’re scratching their head and saying ouch, or whatever. I’m still claiming that you haven’t lost the essence of the person, and that the person inside the box is still feeling real feelings, just like they always did. In fact, they’re completely and utterly oblivious to the changes you’re making in the representation, even when you go to the extreme of turning them into just a counter, because they really don’t exist in the box at all. They exist in the interpretation, and the interpretation is not something that is in that external box. It’s an abstract thing.
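As a toy sketch of this counter-plus-table extreme (entirely my own invention–the three-entry table below is hypothetical), the “simulation” is nothing but a counter, and every scrap of meaning lives in the interpretation table:

```python
# Hypothetical sketch: the whole "simulation" is a bare counter;
# all of the content lives in the interpretation look-up table.
LOOKUP = {
    1: "the person is sitting down, feeling a bit tired",
    2: "the person is just starting to get up",
    3: "the person scratches their head and says 'ouch'",
}

def interpret(steps):
    """Run the counter and translate each state through the table."""
    frames = []
    counter = 0
    for _ in range(steps):
        counter += 1                     # the box itself only counts: 1, 2, 3...
        frames.append(LOOKUP[counter])   # the meaning exists only in the table
    return frames

for frame in interpret(3):
    print(frame)
```

The counter carries no more information than its current value; the table is where the entire world description resides, which is exactly the point being argued.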
It’s a mapping that anybody could have. Somebody from another planet could come up with another interpretation box with the right table in it, and see that same person. It’s not something that you create just by peeking, because anybody else can peek and see it, if they just looked at the thing in the right way. So this leads me to the position that it isn’t the viewers who are creating this person. The person exists independently. Inside of the box he’s completely oblivious of these viewers. They’re just living out the logic of the simulation, and they don’t care if there are any viewers, if there’s lots of viewers, or if the viewers are making mistakes. It doesn’t change anything for them. Their existence is entirely tied up in the logic of the interpretation, regardless of who’s doing the interpreting–even if nobody is.
So I can’t but conclude from this way of thinking that existence is a Platonic thing. It’s not the simulation that created this person. The person existed within the logic of what was being simulated, and the simulation is just basically a way of peeking at them. But they already existed, as the logic of their existence is self-contained. Now, this has even further implications. Let’s say you have a viewing box that looks at this simulation (whose importance I’ve greatly reduced now) and somebody else sees that particular person in there, but that person may have a viewing box with an entirely different look-up table, and be able to see something completely different inside of the simulation that you’ve got.
There are many possible interpretations. In fact, there’s an infinity of them. There are interpretations for all possible look-up tables of arbitrary events. So in this counter you can see any possible world. And any world that has an observer who’s aware of their own existence in it, exists for that observer regardless of whether somebody actually has that viewing box or not. So all possible worlds exist, period. In a Platonic sense, that’s really interesting, but there’s no reason to add an extra hypothesis that the world we’re living in is anything other than that. So now I’m starting to answer your question. I think this world that we inhabit is just one of those Platonic worlds.
David: It’s just one interpretation of the infinite possibilities?
Hans: It’s not just an interpretation. The interpretation was just the way that I got to this position that these worlds that we’re simulating already exist anyway. The simulation is just a way of connecting them to our perceptions. But actually a simulation doesn’t create the world. The world exists by virtue of its own internal logic. The reason for believing that is that with the right interpretation you can look at anything and see any particular world. I don’t have to look at a computer. I don’t have to look at a counter.
I can look at a rock where the particles in the rock are moving randomly because they’re warm. Each state of motion of those particles of the rock can be mapped through some kind of look-up table into a state of some simulation. So you could see this person that we were talking about inside of that rock if you had the right interpretation. You could see them in anything. So what’s the point of saying that the simulation created them? They’re everywhere, and they don’t care if it’s a rock, a counter, or a computer that’s simulating them in spatial detail. They don’t feel the simulation at all. They only feel the internal logic of the simulation–the mathematical rules that define what’s being simulated, and how it’s been simulated. So they just exist. This really is Platonic existence.
David: By virtue of that logic, what happens when someone dies?
Hans: We have a few more steps. So if our world exists Platonically, but in a sea of other possibilities, you then have to ask the question, why is our world so boring? In the space of all possible worlds, there’s a world in which in the next second you sprout wings on your head, and your nose grows into an elephant’s trunk. There are worlds like that that exist in the space of all possible worlds. So why doesn’t that really happen to us? Why does our world seem to be so boring, so tied to these simple physical laws that we’re only recently starting to elucidate? I think there’s an answer to that, which is sort of based on simplicity.
First of all, you note that in some of those worlds you don’t exist, and in some of them you do, but you will never find yourself in one of those worlds where you don’t exist. So, for you, just because of the nature of your consciousness–the way you interpret and experience your own existence–you will only find yourself in that tiny tiny tiny subset of all the possible worlds in which you exist. But from the place you are right now, there are still an infinity of next possible worlds, next moments. Some of them have you with wings and a trunk, but those require a lot of coincidences–so that all that can happen, and your consciousness can still continue. You’ll have to be in one where your consciousness still continues.
Now, think of each of those required coincidences as a coin flip. Basically, the more things that have to be just so, the lower the probability. The chance that you’re going to find yourself in a place where a hundred coin flips come up just the right way is a lot less than in a world where it only requires one coin flip or no coin flips. So the likeliest continuation is the one where the fewest new things have to be just so. The simplest world is probably the one that requires no changes at all, where things just keep going the way they are. Now, you have a history going back to the beginning of the universe, where you have a structure that depends on the laws of physics working just so.
Your neurons wouldn’t work if chemistry changed in some way. Your consciousness is an interpretation of the way your neurons work, and your connection with the world. A lot of your experiences depend on everything working just the way it does. If the speed of light were to change, certain chemical reactions would alter, and your consciousness would probably be gone. But pretty much if the laws of physics were altered in any way, your consciousness would no longer work the way that it does. In those other worlds, if that’s all that happened, you would no longer exist. So you can’t find yourself in those worlds. Maybe some other things could change that bring back your consciousness, but that would be like another coincidence that would have to happen. The odds of that happening are small.
So the most likely world that you will find yourself in in the next moment is one that’s just a continuation of the world that you’re in right now, because nothing has to change. All the mechanism that you have all this investment in–this evolutionary and biological growth investment–just continues. The only other question is, why are we in this kind of world in the first place? And again, now that we’re in it, we’re kind of stuck. Probably it is the case that this is the simplest world, the world that required the least number of coincidental starting positions to produce us.
If you look around it’s not immediately obvious that this world is simple. But if you believe the physicists, they’re telling us that sooner or later we’re going to find a theory of everything, which is some simple equation, that basically describes the underlying mechanism for the entire universe. The evolution of this equation produces everything, and really the world is simple. It’s just that in order to have such a simple description produce us, you have to go through this long process of consequences from that simple starting point, which is the evolution of the universe, the expansion of space, the evolution of life, and on and on–all those fifteen billion years worth. Because, after all, we, as conscious beings of exactly the kinds we are, are pretty complicated.
We have hundreds of billions of neurons, and probably couldn’t be as rich in our mental lives if we didn’t have all of those. They have to be wired just so, and our sense of existence comes as a kind of side-effect of their having to be just this way. The easiest way for that to happen was for the rest of the universe to happen too. (laughter) So all of this kind of locks us into being physical beings.
That’s all true until the point where physical existence is no longer working too well–basically the point where we die. Now, in the space of all possible worlds there are certainly going to be continuations of consciousness in some of them, no matter what happens to us, because some of those possible worlds you can simulate. When you have a simulation and something happens that you don’t like, it’s always possible to make some change, basically undo the thing that you didn’t like, and have it continue.
You see what I mean? So, no matter how we die, in some possible world there’s a way in which we, through some mechanism or other, continue on. And those are the only worlds in which we’re going to find ourselves. The others have zero probability for us personally, and this is sort of on an individual basis. Each of us really has different probabilities. I can find myself in a world in which you died pretty easily, and you can find yourself in a world in which I died, but I can never find myself in a world in which I died, and vice versa. Obviously, we don’t really live in the same world, we just have some momentary correlation.
So what does this continuation look like? Suppose you were hit by a truck, and you got flattened. What does the continuation look like? Well, I don’t have great answers, but I can make up some things. What I can’t tell you is which are the most probable, which are the ones that in a total sense require the least number of coincidences, and are thus the ones that you’re most likely to find yourself in. Of course, they’re all real, and they all exist. It’s just that some of them are kind of the equivalent of winning a lottery. You’ll probably never find yourself there.
One possibility that’s kind of intriguing, and lets you do a personal experiment, predicts rather strange happenings for us individually. Maybe the easiest way for your consciousness to continue in the instance of the truck is that actually the truck noticed you, and blew its horn and you jumped back, and you didn’t get hit after all. Then everything could still go on according to physical law, and just a few chance events had to be a little bit different than they were.
So maybe you will escape that truck, in your personal world, where you continue to exist, and maybe that’s the easiest way for you to continue. Then the next time, maybe some cell that might have produced a cancer that killed you reverted back, because a cosmic ray hit it just the right way, or just some thermal event in the cell. So you escape that cancer. Then maybe some aging related effect miraculously reversed because of some nutrient interaction. Maybe this goes on and on and on, and after awhile, you find yourself the oldest living person (laughter) in the world, having miraculously escaped a number of close calls.
David: I’ve thought something very similar.
Hans: Almost all of these ideas can be found in fiction.
David: It came from the experience of having my car go over a cliff a couple of years ago. I thought I should have definitely been killed in this experience. I thought that perhaps in one universe I was killed, but I escaped into another universe, where I lived.
Hans: Right. You don’t even have to go through this whole philosophical position of mine to have the idea of alternative universes, because the many worlds interpretation of quantum mechanics is basically winning over the physics community. The many worlds interpretation is kind of a microcosm of this, because in all those slices of the wave function, we still have basically the laws of physics as we know them. Whereas, the worlds I’m talking about don’t necessarily have the laws of physics as we know them. So it’s a bigger set of possible worlds. But even in the many worlds interpretation, indeed, there should be some in which for almost any event you manage to survive. Maybe eventually it gets too far fetched for that to continue this way, although maybe not. All it really would take is for some aging processes to reverse themselves, which maybe isn’t that big a deal. So that’s one way.
Another way is, maybe you die. Suppose you were to explode in a hydrogen bomb, and you turned into high-speed plasma moving in all directions. Well, maybe it would be too far fetched to continue you that way, as the coincidences required would just be too great. It’d be sort of like the probability that all the particles would reverse and reassemble you, just by chance. So maybe another alternative is that you find yourself knowing that this existence that you’ve been having isn’t quite what you thought it was. What it really is is a simulation in somebody’s computer, and when you die, they sort of pull you out of the simulation, and reinstate you in slightly altered circumstances. Either they pull you out altogether into their world, or just into some other simulation. Something like that.
They continue you on. They have the power to do that if they’re running the simulator. Or maybe you just find the logic of your consciousness continuing simply without the need for a bunch of neurons to kind of ape the structure of your thinking. And when we write artificial intelligence programs that are just plain reasoning programs, we don’t simulate neurons or anything. We just simulate concepts, like beliefs and probabilities and so on. So there are just some numbers and strings. I would say that everything that I think could be encoded that way a lot more simply than it is using all those neurons. Now when we do it on the computer, of course, we have something underneath those basic concepts.
We still have the computer, which is just as complicated as the neurons, but why can’t those concepts just stand on their own in the appropriate abstract context? We need the computer to simulate the AI because we’re still living in this physical world, so we have the physical substrate for the abstract concepts. But in all possible worlds there certainly will be some where those abstract concepts are all there. Then you could imagine an afterlife that’s very much like the spiritual afterlife that a lot of religions imagine–where there is no physics. There’s only psychology.
David: It’s all mind.
Hans: Right. I think all of these concepts here need further work.
David: I think this is the most interesting answer that I’ve ever gotten to that question.
Hans: But note, all of this allows artificial intelligence. The robot minds are just as real as ours. None of this contradicts it. So those people who try to use this kind of thinking to rule out robots don’t have a leg to stand on.
David: Could you talk a little bit about some of the latest developments in robotics?
Hans: The main thing to notice about robotics is that nobody’s made any money doing it yet.
David: It seems like they have in Disney World.
Hans: Okay, there are entertainment robots. But there are no big industries making robots, and selling lots of them. Some of the companies that have tried to do that have barely survived; most of them went out of business. Even the Disney robots are not really making a lot of money. Maybe for Disney in the context of the entire park, but not for the companies that are making the robots. There’s a company called Sarcos in Utah that’s made some of the very best robots that Disney uses, and they’re just a little company, living from contract to contract. But discounting entertainment robots, which have their own kind of economics, we don’t have robots cleaning your floor, vacuuming your rug, cleaning the streets, or delivering packages.
David: Right, all we have are Furbys.
Hans: Or toys. But toys don’t count. You can make a robot toy that doesn’t work at all–like wind-up toys all along–and it can still sell. The main reason we don’t have really good utilitarian robots is that actually doing work in the world is hard–although we never realized how hard it was. But just pushing a broom is very hard. It requires navigational, perceptual and motor skills that are in an absolute sense very complicated, but are cheap, because everybody that we know pretty much has them. In fact, most animals have them, although maybe not the discipline to use them the way that we need.
The reason we have them is because they were life or death matters all through our evolution. We’ve been practicing for 500 million years, and those individuals that did those things the best were the ones that survived in each generation and passed on their genes to the next. So it’s like we’ve been running a repeated Olympics. Only the winners get to have offspring. We have hundreds of billions of neurons devoted to seeing, moving, modelling the world, and socially interacting. That’s just a really hard target. Building a robot to mimic that means we have to rediscover all of those things, and build a mechanism as powerful as we have in us.
We didn’t realize how hard it was, because when we first started building computers we didn’t use them for things like that. We used them for things like arithmetic, which is something that human beings often do badly. We have a hundred billion neurons, but we can only add one number every fifteen seconds. Any competent computer designer could take a few thousand of our neurons and wire them up into an adding circuit, or a more general arithmetic circuit, that could probably do a thousand calculations a second. If they took a large fraction of our hundred billion neurons and wired them up, they could make a calculator that could do a trillion calculations a second. Yet we manage one every fifteen seconds. We’re inefficient by a factor of about a quadrillion.
David: Unless you happen to be an idiot savant.
Hans: Right. But even there, they might be able to do it in a few seconds. That’s inefficient by a factor of a hundred trillion, instead of a factor of a quadrillion–still vastly bad. On the other hand, our neurons probably couldn’t be wired much better for moving around. The neurons in our visual system are probably close to optimum in how they’re organized to let us see things, because evolution’s really been working at that. Evolution, of course, didn’t give a damn about whether we could multiply two numbers. It probably wasn’t an issue at all. It’s just a side-effect of some of our general purpose thinking ability. But it’s very weak.
The general purpose part of our thinking is extremely weak compared to the specialized parts of our thinking. But the specialized parts of our thinking are only good for things that we’ve been doing for many millions of years. So when computers first did arithmetic it really seemed that these were powerful thinking machines. At first doing arithmetic was considered thinking. After all, who else but an intelligent person could do arithmetic? Then when the first AI programs started being written in the Fifties and Sixties, the computer still seemed pretty powerful. They were able to solve these new mathematical problems, intelligence test problems, and intellectual games about as well as a single person.
Already there’s a little bit of a let-down there. (laughter) We went from thousands of mathematicians to one freshman. Then when the computers were used in the first robot set-ups, using cameras to look at a table top with blocks on it, and an electric arm to try to pick up those blocks, it got much much much worse. It took an hour of staring at the table to find a few blocks. Then it could pick them up about one time out of three. There was a lot of puzzlement about this.
We started this conversation talking about John McCarthy, who thought that an existing computer would be powerful enough to do general intelligence. Basically he still believes that. This was an opinion he formed early on, during this time when computers still seemed to be prodigiously powerful. But it’s dead wrong. All that really revealed was how simple the intellectual tasks we did really were. They only seemed hard when we do them, because we’re so bad at it. (laughter) But with robots it’s just the opposite. Robots are trying to do the things that we do extremely well. So it’s very hard.
What’s more, because the things we do extremely well are also extremely common, in that every person can do them, the economics are terrible. That is, you can’t pay a robot more than you pay a person (laughter). Whereas a computer that does the job of a thousand mathematicians, you could afford to pay a few million dollars for. It only slowly dawned on everybody that this was the case, that robotics was just much much harder than these highfalutin intellectual tasks that computers were first applied to. Up until the Seventies, computers were still big things that cost hundreds of thousands of dollars, minimum, usually millions. They were also just physically big. There was no plausible way of using a computer to control a robot in any kind of commercial context, because they were just too expensive.
Even if the robot worked well, it would only be doing the job of one person. So there were no computer-controlled robots, other than a few in research labs. There were none in industry in the Sixties or in the Seventies. Then at the end of the Seventies microprocessors appeared, and by the early Eighties there were some robots with small microprocessors in them. They allowed a kind of behavior in the robot that was on the low end of insect complexity. In order to build a vehicle in the Sixties that could deliver something from one place to another in a factory automatically, you had to bury a wire in the ground, and have the wire emit a signal that could be sensed by simple coils on the robot.
When it became possible in the Eighties to put microprocessors in the robots, then they could have optical sensors that looked down at the ground. As the vehicle moved, optical sensors could note the black and white tiles as they flowed by. The microprocessor could count how many tiles went by, and guide itself basically by the patterns on the floor, which is much trickier than following a wire that’s buried exactly along the path that you need. There were some other navigation methods that involved putting navigational reflectors around the spaces where the robot was moving, so that a laser on the robot could sense them from the light that they reflected. By seeing three of them at the same time it could triangulate its position. Then you could program it to simply go from position to position using these reflectors as a guide to where it actually was.
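The reflector navigation can be sketched numerically. A real laser system triangulates from the bearing angles to the reflectors; the toy version below (my own simplification) uses distances to three beacons at known positions instead, which reduces the position fix to a small linear system:

```python
def trilaterate(p1, p2, p3, d1, d2, d3):
    """Recover (x, y) from distances to three beacons at known positions.

    Subtracting the circle equations pairwise cancels the squared unknowns
    and leaves a 2x2 linear system, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1, c1 = 2*(x2 - x1), 2*(y2 - y1), d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2, c2 = 2*(x3 - x1), 2*(y3 - y1), d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1*b2 - a2*b1
    return ((c1*b2 - c2*b1) / det, (a1*c2 - a2*c1) / det)

# Beacons at three corners of a room; the robot is actually at (3, 4).
x, y = trilaterate((0, 0), (10, 0), (0, 10), 5.0, 65**0.5, 45**0.5)
```

The three beacons must not be collinear, or the system is degenerate–the same constraint a bearing-based triangulation faces.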
You could also have the robot follow a wall using sonar or infrared proximity detectors to measure the distance of it, or the sides of a door. A number of those things were tried. None of them were successful commercially because they all required a specialist to come in and specially program the robots with a particular place, and a particular path that it had to traverse. Now, there’s a few factories where it pays off. So if you have a large factory, where the routes are stable, then it’s worth paying the hundred thousand or so it’s probably going to cost you to get the robot installed. But in most other places things change too often, and it’s just not worthwhile bringing in a person to program the robot each time something changes. Besides that, the factory owners are very nervous about having to depend on somebody outside who may not be there next year.
So it was a very hard sell. Only dozens of these insect-like advanced robots were sold. Ultimately, I guess, there were hundreds. Most of the companies that tried to do this in the Eighties just went out of business. Then in the late Eighties and early Nineties another kind of robot started appearing in research settings. This robot didn’t use navigation techniques like I’ve just described to find its way around. Instead it was able to map the world around it using sensors in a very general way, and was able to actually navigate by these maps that it built itself. In principle, with the right high-level program, these robots could be put into an entirely new place, and still do the right thing.
We worked on that in the Eighties ourselves, and still into the Nineties. All that was possible with the amount of computer power that we had then–about a million calculations a second–was the ability to build maps like this in two dimensions. But this handles ninety percent of the problems, because if you build a map that’s at the height of the belt-line of the robot, it’ll contain most of the obstacles that it’s going to meet–the main walls and furniture. This will allow you both to plan sensible paths and, by matching up large areas from one time to the next, to localize yourself from one time to another.
So for instance, you’d have a robot that someone would lead through a path, and it would memorize the maps that happened along that path. Then, next time, when the robot was on its own, it would just recall those maps that it memorized during the training, and match them up to the current maps that it was getting–slide one against the other until they lined up the best. Then it would know where it was now compared to when it was trained, and it would know where it should be. But it took just about all the computation it could do with one MIPS, and it was almost not possible to do in real time, or just barely possible. By the early 1990’s reasonably inexpensive computers had gotten up to about ten million instructions per second (MIPS), and then the two-dimensional maps got relatively easy to do.
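The “slide one against the other until they line up” step can be sketched in one dimension (a toy of my own, far cruder than matching real two-dimensional grids): try every shift of the current map against the trained map and keep the one with the fewest cell disagreements.

```python
def best_offset(trained, current):
    """Slide the current map along the trained map and return the shift
    with the fewest cell disagreements (a crude match score)."""
    n = len(current)
    best_shift, best_score = 0, n + 1
    for shift in range(len(trained) - n + 1):
        mismatches = sum(1 for i in range(n) if trained[shift + i] != current[i])
        if mismatches < best_score:
            best_shift, best_score = shift, mismatches
    return best_shift

# Occupancy strips (1 = occupied cell, 0 = free cell): the map memorized
# during training, and the shorter map sensed right now.
trained = [0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0]
current = [0, 0, 1, 0, 1, 1]   # lines up perfectly at shift 5

print(best_offset(trained, current))   # prints 5
```

Real systems score probabilistic cells rather than exact bits and search over rotation as well as translation, but the principle–exhaustive correlation, best alignment wins–is the same.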
Nowadays you can find a lot of robots cruising research hallways that do their own mapping. They wouldn’t have to be specially installed if they were used to do something practical–except for one problem. With a two-dimensional map there are places where the world gets ambiguous and the robot can be confused. A two-dimensional map is typically made up of cells, dividing the world into a grid of cells, and noting what’s in each cell–sometimes just whether it’s empty or occupied, or a probability that it’s occupied. But it only had a few thousand cells, and with such a fuzzy and low resolution picture there are ways for that picture to be wrong, which makes it look like another picture as it were. So the robot can be confused about where it is, or it can miss important things in its surroundings. The chance of that is fairly low, but when a robot’s cruising around for hours, days and weeks, even a low probability eventually bites. The mean time between even the best two-dimensional mapping robots screwing up seems to be about a day–which means it’s good for a demonstration, but it’s not good for practical use.
These insect-like robots that began to appear in the Eighties can be installed on a company’s premises. These are robots that deliver, clean floors, or act as a security guard. If it works fine for a month, but then at the end of a month it gets into trouble, wanders down the wrong corridor, gets stuck in a corner, or in some cases falls down the stairs, the customer is no longer interested in it (laughter), and it’s out the door. It’s had its one month of testing and it failed. But if the robot manages to do its job for about six months, then if it fails once, that’s okay. It becomes part of the family, and it earns a sick day. (laughter)
So that seems to be the practical target: the reliability needed for a robot that’s commercially successful, or acceptable, is about six months between failures. The two-dimensional mapping robots just don’t seem to be able to do that. From the time that we came up with this grid-mapping idea in the early Eighties, I’ve always wanted to do it in three dimensions. The only problem is that with a three-dimensional map you’d have about a thousand times as many grid cells as in a two-dimensional map–not only because you have the third dimension, but also because you almost certainly want the cells smaller. In two dimensions the world is fuzzy, because things like door knobs stick out of the wall–depending on whether you look just above the door knob, or right at it.
You see something, or you don’t see something, and that just happens all over the place. The world is sort of bumpy when the slice that you’re looking at is broad, so there’s no point in making the grid cells much smaller than about six inches for the two-dimensional maps. But in three dimensions the world is consistent, and you probably want cell sizes more like a centimeter or two, because then a lot of things become possible. In fact, in our experiments, the smaller the cell size, the better everything works.
The only problem is that the amount of memory you need, and the amount of computation, goes up rapidly as the cell sizes go down. Each halving of the cell size in a three-dimensional map increases their number eight-fold–because if you take a cube, and divide it in half horizontally, then vertically one way, and then the other way, you get eight small cubes.
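The arithmetic behind that eight-fold jump, and the rough thousand-fold cost of going from 2-D to 3-D, can be checked directly. The room dimensions below are hypothetical, chosen only to illustrate the scaling:

```python
# Back-of-the-envelope cell counts, illustrating why a 3-D grid map
# costs on the order of a thousand times more than a 2-D one.
# Room dimensions are hypothetical, chosen only for illustration.

room_x, room_y, room_z = 10.0, 10.0, 3.0  # metres

# 2-D map with roughly six-inch (0.15 m) cells, as in the interview:
nx2 = round(room_x / 0.15)
ny2 = round(room_y / 0.15)
cells_2d = nx2 * ny2

# 3-D map with 2 cm cells, the finer size the interview calls for:
nx3 = round(room_x / 0.02)
ny3 = round(room_y / 0.02)
nz3 = round(room_z / 0.02)
cells_3d = nx3 * ny3 * nz3

print(cells_2d)              # a few thousand cells
print(cells_3d)              # tens of millions
print(cells_3d // cells_2d)  # thousands-fold increase

# Halving the 3-D cell size (2 cm -> 1 cm) multiplies the count eight-fold:
cells_halved = round(room_x / 0.01) * round(room_y / 0.01) * round(room_z / 0.01)
assert cells_halved == 8 * cells_3d
```

With finer cells the ratio actually comes out well past a thousand, which is why Moravec calls the thousand-fold figure conservative.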
So even the most conservative extension into 3-D pretty much multiplied the number of cells by a thousand, and we were just barely able to do the 2-D maps. It looked like it was going to be a thousand times as much computation to do 3-D. I really, really wanted to do it though, and wanted to at least get some experience doing it, even if it wasn’t going to be practical for another ten years.
In 1992 I did a sabbatical at Thinking Machines Corporation in Cambridge, near Boston, to use their super-computers. They were doing real well at that time. Now all their business has been taken away by big companies like IBM, and there’s not much left of them. But at the time they were making super-computers that consisted of a whole lot of small computers wired together in a big network. You could have as many as a thousand of the conventional small computers, and thus about a thousand times as much computer power as is typical.
Instead of using their super-computers a lot when I got there, I ended up finding a series of about six tricks–some economies of scale, some ways of doing things outside of the main computationally intensive loops–that together resulted in a program that was actually about a hundred times as fast and efficient as I thought it was going to be. So this factor of a thousand wasn’t really such a problem any more. On top of that I found that the workstation I was working with could do about twenty million instructions per second, whereas the mainframe I’d been using back at Carnegie Mellon, which was sort of old, could only do one million. (laughter)
So my computer speed had basically multiplied twenty-fold, and my program had multiplied a hundred-fold. Together I had a thousand already in my hands, and I basically had a program that could build three-dimensional maps. It was already fast enough for research, although not quite ready for commercial use. It was only the core of the code though. Then there were distractions for the next few years doing various other things, including the books.
In 96 I did another sabbatical in order to concentrate on the next step. I built a front-end for the 3-D grid program that took stereoscopic views, and found about twenty-five hundred points in each image. Then I projected them into the 3-D grid, where the data from all these measurements accumulated. The results that I got really made my day. They were just as good as I’d hoped–at the upper end of my expectations. The speed was such that I could process a glimpse of the world in two to five seconds on the work station I had at that time, which could do a hundred million calculations a second. That’s not really fast enough for a practical robot, but getting real close.
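The front end described here, stereo-derived points accumulated into a three-dimensional grid, might be sketched in miniature like this. It is an editorial toy, not Moravec’s code; the quantization scheme and the simple counting update are assumptions:

```python
from collections import defaultdict

CELL = 0.02  # 2 cm grid cells, the size mentioned in the interview

def cell_of(x, y, z, cell=CELL):
    """Quantize a 3-D point (in metres) to integer grid coordinates."""
    return (int(x // cell), int(y // cell), int(z // cell))

def accumulate(grid, points):
    """Fold one 'glimpse' (a batch of stereo-derived 3-D points) into the grid.

    Each point adds one unit of occupancy evidence to the cell it lands in.
    """
    for p in points:
        grid[cell_of(*p)] += 1
    return grid

grid = defaultdict(int)
# Two glimpses that both see roughly the same patch of wall:
accumulate(grid, [(0.99, 0.51, 0.31), (1.01, 0.51, 0.31)])
accumulate(grid, [(1.015, 0.51, 0.31)])

# Cells seen repeatedly accumulate more evidence than one-off noise:
print(max(grid.values()))  # 2
```

A real evidence grid updates cells probabilistically, and marks empty space along each camera ray as well, but the accumulate-over-glimpses idea is the same.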
With a machine that could do a thousand million calculations a second, I could process a glimpse faster than once a second. That, in my opinion, is fast enough for a slow-moving indoor robot. Basically a few glimpses is all that’s necessary to build a pretty dense three-dimensional map. There were more distractions when I got back in ’96, and they continued through ’97 and ’98. But I felt more and more urgency to actually get a prototype that could start a commercial venture. Fortunately it looks like I’ve now got funding for three years to do just that single-mindedly. By the way, you can find all this on my web page if you want more detail.
David: Is this what you’re currently working on?
Hans: What I’m starting to work on. I’m finishing off another contract that’s sort of unrelated. The contract for this new money isn’t signed yet, although it’s already been decided–just the paperwork hasn’t arrived yet. And that starts almost immediately.
David: How do you see robotics helping to extend human life span in the near future?
Hans: Oh, I don’t think it has much to do with it in the near future, except in that robots are helping in the biomedical labs. A lot of molecular biology is done by these little laboratory robots that do hundreds of tests at once.
David: I love the image in your book of a bush robot, with a trillion different fingers, operating on every cell in the body simultaneously.
Hans: That’s a long term thing, although we’ve actually got a contract to study that idea. We built two models. You can find it all on my web page. When you go to the main page you’ll find a link to publications, and in there there’s a link to everything. One of the first entries is the final report for that NASA contract, the Bush Robot contract. In the preface and a couple of other sections you’ll find pictures of these models, as well as the theory behind it.
There’s also a proposal for research that, I believe, will lead to this commercial prototype, and plans for how to go on from there. It leads to what you find in chapter 4 of my book–which is initially industrial robots that can be installed and redirected to new tasks by ordinary factory workers. For instance, point-to-point delivery robots, where somebody just shows it where to go, and then it’s able to do that reliably for at least six months at a time, dealing with all the contingencies that are likely to come up over that long a period. Floor-cleaning robots were made by several companies in the Eighties, but none of them succeeded in finding a real market, although a lot of them were tested in various places.
The problem is that the floor-cleaning application just doesn’t warrant a specialist being called in (laughter) to map the particular area that has to be washed. What you’d like is something where a maintenance supervisor can manage a fleet of half a dozen or a dozen cleaning robots. Someone can get them started at night, and then each one would be doing a different room or corridor. It would just have to be started up, then it would handle the rest of the job itself. These would be machines that wash and scrub the floor, then suck up the water and recycle it. These could be used at night for cleaning large areas.
There are currently about a hundred security robots in use that patrol warehouses. They are connected by radio to a central guard station, where they set off a light and a bell if they detect any motion. The central guard can control a bunch of them, and the robots themselves are spread over a series of warehouses.
If these robots were smart enough to be used by somebody that’s not specially trained, then they’d have a much bigger market. Then it would actually pay to have a robot, because they would be cheaper than using a person. The first product that I think will come out of this research is something that fits onto existing vehicles. I imagine it’s about the size of a basketball, and has cameras looking out in all directions, so it doesn’t have to scan or anything. The cameras are already very cheap, and they’ll be even cheaper in a few years. These are little CMOS cameras. The existing vehicle itself–whether it’s a cleaning machine, a delivery robot, or something else like a forklift–is modified so that all its main controls come to a plug.
From that plug you can get power, control the drive wheels, and also receive information from any sensors the vehicle has. The plug then connects to a unit that this company will make, which is a standard navigation head with these cameras around it. Inside the head is enough computing power–at least a thousand MIPS–to build three-dimensional maps from the views seen by the cameras. It will have another layer of programming that extracts important information from the three-dimensional maps–things like the location of the floor, the walls, the doors, and probably people and some kinds of furniture. Then there’s a third layer–the application-specific layer–which makes the robot into a delivery robot, a floor-cleaning robot, or whatever it is that it has to be.
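The three layers just described (a mapping core, a scene-interpretation layer, and an application layer) could be pictured as a stack of components. This is an editorial sketch only; every class and method name here is invented for illustration:

```python
# Editorial sketch of the three-layer navigation-head architecture
# described in the interview. All names are invented placeholders.

class MappingCore:
    """Layer 1: builds 3-D grid maps from the camera views."""
    def update(self, camera_images):
        # ...fuse stereo views into the 3-D evidence grid...
        return {"grid": "3-D occupancy data"}

class SceneInterpreter:
    """Layer 2: extracts floors, walls, doors, and such from the grid."""
    def interpret(self, grid_map):
        return {"floor": ..., "walls": ..., "doors": ...}

class DeliveryApplication:
    """Layer 3: application-specific behaviour, e.g. point-to-point delivery."""
    def __init__(self, core, interpreter):
        self.core, self.interpreter = core, interpreter

    def step(self, camera_images):
        grid = self.core.update(camera_images)
        scene = self.interpreter.interpret(grid["grid"])
        # ...plan the next move toward the delivery target...
        return scene

robot = DeliveryApplication(MappingCore(), SceneInterpreter())
scene = robot.step(camera_images=[])
print(sorted(scene))  # the features handed to the application layer
```

The point of the split is the one Moravec makes next: only the top layer changes between a delivery robot and a floor cleaner, and none of the layers is tied to a particular building.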
What’s different about the programming in the robots I’m describing, as compared to the robots of the Eighties, is that it’s application-specific, not location-specific. That is, it’s not made for a particular place. It’s designed to be able to learn a new place when it’s brought there. I figure that maybe there’s a market for a few hundred thousand of those, because there are a lot of delivery robots in use, but not as many as there could be. There are tens of thousands used in factories and in warehouses, but they could be used in a lot more places if they were more flexible. You could use them in smaller factories, and in places where there’s more chaos and things change a lot. If they were found useful on forklifts–maybe as a safety assist, or even to automate forklifts–then the market could be in the millions.
But even hundreds of thousands are sufficient, in my view, to develop the technology far enough to make it credible enough to raise the capital for the next round of products, which are consumer products. I figure those could appear somewhere between 2005 and 2010. The industrial navigation head is slated for about 2005.
Then, after the vacuum cleaning robot, you have a series of more advanced utility robots that start to have arms. They can clean horizontal surfaces, maybe toilets, and can fetch things and put them away. As they become more capable, and are able to do more than one thing, they will eventually become the first generation of universal robots. I still think not too much after 2010 is a possibility for that, although we have a little work cut out for us to achieve that. But, you see, it won’t seem quite so formidable once there is a real mass market for robots. Right now it looks devastatingly hard because there is almost no market–i.e., commercial money–for robots.
If we didn’t have some government research we’d have almost nothing. But that’s going to change. I’ve been waiting for this day for thirty years. I think we’re just about there now. The main reason, of course, being that we have finally enough computer power. The computer power that you can put on a robot is now in the hundreds of MIPS, and will be in the thousands within a few years. That’s enough to give a robot barely more than insect intelligence, sort of the very lower end of vertebrate scale intelligence, which is enough to do basic utility functions. Then you have the evolution that’s outlined in my book of the first, second, third, and fourth generation robots in additional decades.
David: Your books–especially Robot–really changed my world view. There’s only been a handful of books that have done that for me.
Hans: I’m really pleased to hear that.
David: You must be astonished at how few people seem to grasp how much the world is going to change in the next century.
Hans: I’m not really astonished. Part of the reason is that I realize it takes awhile for these ideas to percolate. Just the idea of downloading the human mind into a computer, which, I guess, got its biggest exposure with my first book, is still percolating. For some people it’s no longer a big deal. But back then not everybody got it, and a lot of those that did were outraged.
Hans: Oh yeah. You should have seen some of the things that Joe Weisenbaum wrote.
David: I don’t understand. Why?
Hans: Well, it’s obscene.
David: You mean unnatural?
Hans: Yes. It reminds some people of the Holocaust. I mean, the idea of turning people into machines–what could be more horrible than that? Some people still react that way, but it’s definitely mutated now. It’s not such a big deal anymore. I figure that with the new book it’s going to take awhile for people to absorb some of that–especially, of course, the last chapter, which I think most people just shrug their shoulders about at this point. But I’m serious.
David: I found the last two chapters the most interesting of all.
Hans: Some people have said that, and that’s real heartening. But I know it’s a minority that react that way. Few people are in the mental state to realize the implications of our present research and development.
David: I find it interesting that you say that. I know quite a few people who love your books.
Hans: Well, you know there are religious positions. But there are also just basically conservative positions–people who are threatened by technology, and this is the ultimate threat (laughter). This substantiates all of their fears. If you’re nervous about the technology, then the idea that it’s going to become more powerful is just threatening in itself. And the idea that it’s going to affect you personally in this intimate way is certainly threatening. So there’s some real extreme reactions, and even some reviews that were just wildly outraged.
But part of it is a little bit like the stages of grief. First there’s denial, then there’s anger (laughter), and then there’s sadness or something. I’m not sure what the stages are, but there’s a whole range of emotions to pass through before you get to acceptance. So I’m willing to be patient. One of the good things about publishing with places like Harvard and Oxford is that, although they don’t promote the book nearly as much as some of the commercial publishers do (if you’re lucky, as of course, some books get short-shrifted even by them, because they just don’t have the budgets or personnel to do serious promotion), but at least they keep the book in print for a long time. So Mind Children is still selling quite well actually (laughter).
The trouble with the Mind Children title was, of course, that a lot of people didn’t get it at all.
Hans: They didn’t know what the hell to make of it. They wondered if it was about baby-sitting or what? (laughter)
David: Oh, I see, minding children. That’s really funny.
Hans: And then the cover pictures were not particularly helpful.
David: But the subtitle–The Future of Robot and Human Intelligence–was fairly large on the cover.
Hans: Right, well that’s true. But when you just see it on the shelf, you don’t know. It looks odd. So I think the Robot title possibly gets it to a larger audience. I’m especially looking forward to what effect the Star Wars movie will have on it (laughter), because I think there was a sort of euphoria about robots in the early Eighties, and that’s when some of these companies I was talking about were formed.
There were a lot of hobby kits and toys made by companies for programmers or robot hobbyists. Heathkit made the Hero. Commodore made the Minarobot. Axlon made a bunch of robots. This was all in the early Eighties, when there was sort of a robot euphoria. I think at least half the interest in robots was caused by the Star Wars movies, which put robots in people’s minds. The other half, of course, was caused by the early success of–as they called it at the time–the hobby computer.
David: Are you planning on writing another book?
Hans: I promised one in 2008 in the current book. So I think I’m committed to doing that one. But no, a hundred percent of my effort is going to go into finishing off this development of the 3-D navigation to the point where we have a laboratory prototype of a commercial product. And I feel really good about it. I think all the ducks are in a row now. We have the computer power, and we’ve been working on it at a sedate pace for thirty years, getting all the pieces ready. Now they’re all there, and they work. The results I got in Berlin were enough to totally convince me that with some additional polishing this is going to be just fine. I’m sure this isn’t going to be the last word in how to do this, but this is definitely going to work well enough to get something out the door–something that understands the world enough to be able to move around in it for a few months at a time.
David: I can’t wait to have my own personal robot.
Hans: Yeah, a lot of people are waiting for the vacuum cleaning robot (laughter). There’s a real pent-up demand for that. When I talk about this idea to various audiences I get different reactions. If I talk to students, they sort of say, oh, that’s pretty good, when we get to the vacuum cleaning robot. If I talk about it to a group of older researchers, they say about the same thing. But when I talk to a mixed audience that’s middle-aged, usually I get spontaneous applause from the whole audience (laughter). Actually, beyond the audience–I get applause from some of the people who actually do the vacuuming at the lecture halls.
David: I just interviewed somebody a few weeks ago that’s trying to develop realistic sex robots. He’s a talented special effects artist that does very realistic silicone representations of women with internal skeletons.
Hans: Oh yeah, RealDoll.
David: Yeah, they’re adding animatronics now (laughter). They have animated tongues, pelvises that gyrate, and he wants to add more.
Hans: (laughter) Well, that’s interesting, but not my market. (laughter) That’s movie background right?
David: That was how Matt McMullen, the person who makes them, got started. Now he is making a fortune selling his dolls over the internet.
Hans: I’ve seen his site. But yeah, that definitely is not my direction.