
Hans Moravec


Hans: The position I take in the new book is that consciousness is not really an objective property–though ultimately I decide nothing is an objective property. Existence itself is subjective. But before you get to that stage, you can look at an entity like a robot that maybe exhibits behavior that could be interpreted as conscious. But you can look at it in a strictly mechanical way if you want, especially if you understand enough of the details of the internal mechanism. You might be able to fully explain its behavior in purely mechanical terms, such as cause A produces effect B, which produces internal cause C, which produces D, and so on. Just a chain of simple causes and effects, and that explains everything about the robot.

So some people look at machines and say that’s all they are, and therefore they can’t be conscious, because they’re just mechanical. But I answer that you can also look at a human being that way if you understood them well enough. Neural signal A causes electrochemical events B, and so on, and just a chain that way. I even imagine that some day there will be entities able to process vastly more data than we can, and they could be intelligent enough to look at us exactly that way, as if we were just these clockwork mechanisms. They could interact with us on that basis without ever forming an interpretation about our thoughts or feelings or so on. We’re just causes and effects.

Interacting with such a thing would be interesting, because, probably, most of the time, it wouldn’t be that different from interacting with a person. If that super-smart entity wanted you to do something, it would calculate what mechanical things it should do to you to cause you to do that thing. But probably the easiest mechanical thing it could do to you is make certain sounds at you, such as please pick up that thing. Then, calculating the effect in the long run, it would probably also say thank you afterwards (laughter). In its mind that would just be a string of craftily constructed sounds, right? That wouldn’t have the psychological implications that we put on it, but yet it doesn’t really matter, does it? Its interaction with us would be effective.

The only reason it would be strange is because sometimes they would be able to figure out something to do to us that’s not the usual kind of interaction that our psychological models would suggest. So maybe there’s a certain song that–as far as we’re concerned–they could sing, which has nothing to do with what we ended up doing in response to the song. It might seem like some kind of subtle mind control because that’s another path that isn’t contained in our psychological models of each other.

So, in the same way that you can look at a human being, either psychologically, as we’re able to, or mechanically, as we’re not quite up to yet, you should also be able to look at a robot in various ways–including the mechanical, as the engineer that built the robot probably would, or in the psychological way, as probably most people interacting with the robot on a casual day-to-day basis would. If the robot says, “My energy stores are low, and I’m really feeling down today. My servos on my right side are not functioning correctly, and I’m just not feeling well,” then, ultimately, it would take a very hard heart basically not to sympathize a little bit–especially if it does this consistently, and also asks you about your feelings (laughter), and responds to the answers you give in the appropriate ways.

I think that’s completely reasonable. You can map psychological properties onto the behaviors of the robot regardless of the mechanism that causes them, because, really, who cares what the mechanism is? There is a mapping from the psychology to what it does, that makes complete sense, and is for us undoubtedly the most effective way of interacting with a machine. So I’m saying the psychological properties are not really an objective thing. They’re a way of looking at something. Once you are open to that you realize that you can actually look that way at lots of things, if you wish to. Sometimes it’s not the most effective way.

Basically we have mental mechanisms for dealing with things in the world. We have one set of mechanisms for dealing with inanimate objects. They tell us how to pick up sticks, throw stones, put things together, make houses, and so on. But we have another set of mental mechanisms for dealing with the other people in our tribe. There we worry about whether they like us or don’t like us. Or how we feel about them, and whether they’re in pain. We feel for them when things like that happen. Or maybe we’re angry at them, and we enjoy it when they get hurt, and so on.

Those are a different set of tools, and usually we keep them kind of separate. We’re actually upset when somebody inappropriately uses the mechanical interpretations on us. One of the things you can do under the mechanical interpretations, and it’s perfectly all right, is to hurt inanimate things, to break them, whereas when you do that to living things there are more serious consequences–because they might fight back, or their relatives might come and get you, or whatever.

So in day-to-day life it’s often dangerous. In fact, if, because of some mental-brain defect, somebody tends to treat other human beings in this mechanical way, we usually call that psychosis. Those types of people can be dangerous because they have no feelings. So I think some of the natural defenses against such things in some people get called into play when people talk about building robots, and then interpret them in human ways. As a corollary, you might be able to interpret humans in mechanical ways, and that could be a dangerous thing in society. We have instincts for that, because there are ways in which that could be done where it is indeed dangerous, things that have presumably come up regularly in our evolutionary history. So we have instincts that tend to make us defend ourselves against that kind of thing.

David: How do you think the internal experience of consciousness is created?

Hans: Oh, there’s another aspect to the interpretation of consciousness, of basically something having a mind, namely that it has beliefs and feelings. Those are also attributions. But one of the other things that you attribute to it is the ability to make such attributions. When you look at something in a psychological way, and it’s something that you interpret as complex as a human being, then it can look at other things and basically project psychological properties on to them. That’s part and parcel of the interpretation, that it’s able to make those kinds of interpretations, and it’s able to make those kinds of interpretations about itself. Within the abstract interpretation of psychological properties is the ability to make abstract interpretations of psychological properties, and also of itself.

So you have this cycle where the being is itself believing itself to be conscious, and believing itself to have feelings, and feeling itself to have feelings. All right, so I think that’s what it is, it’s a way of looking at the world and a way of looking at ourselves, which includes, of course, seeing ourselves as having the ability to look at things. So it’s no more real and no less real than that, and you can have that in a program. In one way of looking at it, you can sort of prime the pump, in that you build a machine from the ground up, and it’s all mechanical at first, because that’s all you’d built.

You just build mechanisms that affect other mechanisms and act in a certain way. But you’ve built it in such a way that it’s easy to make the interpretation that certain parts of this mechanism represent beliefs. Like there’s a string of memory cells here, and you interpret them in whatever language is being used to store things as meaning something. Some of the meanings are “I believe this. I believe that A is B.” Then other memory cells represent feelings. So if some number is zero I feel good, and if some number is large I feel bad. These are most natural if you have a framework in which there are certain kinds of actions that result from the states of the beliefs being such and such.
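Moravec’s picture here–memory cells read as beliefs, a stored number read as a feeling, and actions that follow mechanically from those states–can be illustrated with a toy program. This is only a sketch of the idea; every name in it is invented for illustration and comes from no real robot system:

```python
# Toy sketch of a purely mechanical system whose internal state
# invites a psychological interpretation. All names are illustrative.

class Robot:
    def __init__(self):
        # Memory cells that an observer may choose to read as "beliefs".
        self.beliefs = {"energy_low": True}
        # A number an observer may choose to read as a "feeling":
        # zero means "feels good", a large value means "feels bad".
        self.discomfort = 7

    def speak(self):
        # The output follows mechanically from the stored state,
        # yet it reads naturally as a report of feelings.
        if self.beliefs.get("energy_low") and self.discomfort > 5:
            return "My energy stores are low, and I'm really feeling down today."
        return "All systems nominal."

robot = Robot()
print(robot.speak())
```

Nothing in the program is anything but causes and effects on stored numbers; the “belief” and “feeling” readings are interpretations the observer supplies, which is exactly the point being made.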
