
Hans Moravec – 2

I describe that in chapter 4 of the new book: how to build up the right structure through a series of layers. The first-generation universal robot has basic functionality. The second generation has a conditioning system that causes certain events to reinforce behaviors it has performed, and other kinds of events to suppress, in the future, behaviors it performed in the past. So you basically have a thing that you can interpret as finding outcomes desirable and undesirable, which shows up in a very clear way in its behavior. Then you have a third layer in which there is a simulation of the world, and you can look at the elements of the simulation as beliefs about the world. Then there's a fourth layer in which those beliefs are made even more explicit as propositions, as things that can be reasoned about.
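The four-generation layering described above can be sketched in code. This is a hypothetical illustration only, not an implementation from the book; all class and method names here are invented for the sketch, and each generation simply wraps the one below it.

```python
class Gen1:
    """First generation: basic functionality, percepts map directly to actions."""
    def act(self, percept):
        return "grasp" if percept == "object_in_reach" else "wander"


class Gen2(Gen1):
    """Second generation: conditioning. Events reinforce or suppress
    behaviors the robot has performed, yielding desirable/undesirable."""
    def __init__(self):
        self.weights = {}  # behavior -> learned desirability (illustrative)

    def condition(self, behavior, reward):
        # Positive reward reinforces the behavior; negative suppresses it.
        self.weights[behavior] = self.weights.get(behavior, 0.0) + reward


class Gen3(Gen2):
    """Third generation: a simulation of the world, whose elements can be
    read as beliefs about the world."""
    def __init__(self):
        super().__init__()
        self.beliefs = {}  # e.g. {"cup": "on_table"}

    def simulate(self, thing):
        # Play a scenario "in its head"; the outcome can then feed the
        # second-generation conditioning system.
        return self.beliefs.get(thing, "unknown")


class Gen4(Gen3):
    """Fourth generation: beliefs made explicit as propositions that can
    be reasoned about."""
    def propositions(self):
        return [f"{thing} is {state}" for thing, state in self.beliefs.items()]
```

Each layer only adds structure on top of the previous one, mirroring the way the interview describes the generations as accumulating capabilities rather than replacing them.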

David: So you think consciousness occurs in the stage where the robot begins modeling the world?

Hans: Well, the third generation is the stage in which you can interact with the robot in such a way that it can actually describe how it feels, because it plays scenarios in its head. The scenarios produce conditioning effects through its second-generation conditioning system. If the appropriate words are attached in the obvious way to negative and positive conditioning, it can already tell you that it likes this and dislikes that. If the third generation's world models also include psychological descriptions of actors in the world, then I think the model that the third-generation robot builds actually has three kinds of information about the world.

One is strictly physical. For instance, the robot can model that if it drops something, it will fall, and that if it spills water, the water will spread, and so on. Then there would be cultural information, which is the meaning and the use of various things in the world, so you don't use the fine china to empty the toilet, and so on. Then there's the psychological description of the world, for things that are actors, primarily human beings and probably other robots of its kind, which is a shorthand way of describing how they behave, because the full description at the mechanical level is just much too complex.
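The three kinds of information could be pictured as three sections of one world model. This is a minimal sketch under my own naming, not anything from the interview or the book; the keys and the `describe` helper are illustrative assumptions.

```python
# Hypothetical world model for a third-generation robot, split into the
# three kinds of information the interview describes.
world_model = {
    "physical": {                  # consequences of physical events
        "dropped_object": "falls",
        "spilled_water": "spreads",
    },
    "cultural": {                  # meaning and proper use of things
        "fine_china": "serve_guests",   # not for emptying the toilet
    },
    "psychological": {             # shorthand models of actors' behavior
        "John": {"likes": ["tea", "sleep"], "dislikes": ["red_furniture"]},
    },
}

def describe(actor):
    """Summarize an actor from the psychological shorthand model."""
    m = world_model["psychological"][actor]
    return (f"{actor} likes {', '.join(m['likes'])} "
            f"and dislikes {', '.join(m['dislikes'])}")
```

The point of the psychological section is exactly the shorthand the interview mentions: a few behavioral facts stand in for a mechanical description that would be intractably large.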

So the robot can no more have a neural description of a human being than you or I could. But it could have a description which says John likes tea, and likes to sleep, and does not like red furniture, and so on. Also, for psychological states, the robot could infer things like John is happy right now, or John is angry. And those same psychological models could be applied, and probably tuned a little bit, to other robots, and even to the robot itself. So it could examine its own behavior using these psychological models and say, I don't like to fall down the stairs, or I like to please my owner, because that exactly summarizes the behavior that it has.
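Turning the same psychological model back on the robot itself could look something like the following sketch. It is an assumption of mine, not Moravec's formulation: here the robot's conditioning weights (positive for reinforced behaviors, negative for suppressed ones) are translated into the like/dislike vocabulary it already uses for other actors.

```python
def self_report(weights):
    """Summarize the robot's own conditioning weights in the same
    psychological vocabulary it applies to other actors.
    Assumption: positive weight = reinforced (liked) behavior,
    negative weight = suppressed (disliked) behavior."""
    return {
        "likes":    [b for b, w in weights.items() if w > 0],
        "dislikes": [b for b, w in weights.items() if w < 0],
    }

# Illustrative weights: "I like to please my owner, I don't like to
# fall down the stairs."
report = self_report({"please_owner": 2.0, "fall_down_stairs": -3.0})
```

The report is just a summary of the robot's conditioned behavior, which is the sense in which the interview says such statements "exactly summarize the behavior that it has."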

This means that you could have a conversation with it about what it likes, what it doesn’t like, what you like, and what you don’t like. It could also relate to you events in its past that illustrate these states of mind. I think it would be no trick at all to begin to empathize with it, and to say, well, this is actually an interesting person, and clearly conscious. It would take great mental effort to keep reminding yourself, well, this is not really consciousness. This is just the operation of this program behind it. In fact, it would make your interaction with the robot much less effective if you kept interrupting yourself with that kind of irrelevancy.

Now, the third-generation robot has only a very literal kind of knowledge about the world. Everything that it thinks about is in terms of concrete objects, specific cups, specific tables, specific kinds of motion, and so on. So it would be a little simple-minded when you were speaking with it. You couldn't talk to it about large generalities.

The fourth-generation robot adds real intelligence to that by having a layer in which things extracted from the simulator can be abstracted and reasoned about. There is an interesting interaction, though, between the reasoning system and the hard simulation, which is that sometimes

