We started this conversation talking about John McCarthy, who thought that an existing computer would be powerful enough to do general intelligence. Basically he still believes that. This was an opinion he formed early on, during the time when computers still seemed to be prodigiously powerful. But it’s dead wrong. All that really revealed was how simple the intellectual tasks we did really were. They only seemed hard when we did them, because we’re so bad at them. (laughter) But with robots it’s just the opposite. Robots are trying to do the things that we do extremely well. So it’s very hard.
What’s more, because the things we do extremely well are also extremely common, in that every person can do them, the economics are terrible. That is, you can’t pay a robot more than you pay a person (laughter). Whereas for a computer that does the job of a thousand mathematicians, you could afford to pay a few million dollars. It only slowly dawned on everybody that this was the case, that robotics was just much, much harder than these highfalutin intellectual tasks that computers were first applied to. Up until the Seventies, computers were still big things that cost hundreds of thousands of dollars, minimum, usually millions. They were also just physically big. There was no plausible way of using a computer to control a robot in any kind of commercial context, because they were just too expensive.
Even if the robot worked well, it would only be doing the job of one person. So there were no computer-controlled robots, other than a few in research labs. There were none in industry in the Sixties or in the Seventies. Then at the end of the Seventies microprocessors appeared, and by the early Eighties there were some robots with small microprocessors in them. They allowed a kind of behavior in the robot that was on the low end of insect complexity. In order to build a vehicle in the Sixties that could deliver something from one place to another in a factory automatically, you had to bury a wire in the ground, and have the wire emit a signal that could be sensed by simple coils on the robot.
When it became possible in the Eighties to put microprocessors in the robots, then they could have optical sensors that looked down at the ground. As the vehicle moved, optical sensors could note the black and white tiles as they flowed by. The microprocessor could count how many tiles went by, and guide itself basically by the patterns on the floor, which is much trickier than following a wire that’s buried exactly along the path that you need. There were some other navigation methods that involved putting navigational reflectors around the spaces where the robot was moving, so that a laser on the robot could sense them from the light that they reflected. By seeing three of them at the same time it could triangulate its position. Then you could program it to simply go from position to position using these reflectors as a guide to where it actually was.
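The reflector method described above can be sketched in a few lines. This is a hypothetical illustration, not any particular vendor's system: it assumes the laser can measure the robot's distance to three reflectors at known positions, and locates the robot by intersecting the three range circles (bearing-only systems of that era used a related angular resection, which is more involved).

```python
import math

def triangulate(p1, r1, p2, r2, p3, r3):
    """Locate the robot's (x, y) from measured distances r1, r2, r3
    to three reflectors at known positions p1, p2, p3.

    Each measurement constrains the robot to a circle around one
    reflector. Subtracting the circle equations pairwise cancels the
    quadratic terms, leaving a 2x2 linear system to solve for (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Coefficients of the linear system A @ [x, y] = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    # Cramer's rule; det is zero only if the reflectors are collinear
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Example: reflectors at three corners of a room, robot at (3, 4)
pos = triangulate((0, 0), 5.0,
                  (10, 0), math.sqrt(65),
                  (0, 10), math.sqrt(45))
```

Note the requirement that the three reflectors not be collinear: that is the geometric version of "seeing three of them at the same time" giving a unique fix.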
You could also have the robot follow a wall using sonar or infrared proximity detectors to measure the distance to it, or to the sides of a door. A number of those things were tried. None of them were successful commercially, because they all required a specialist to come in and specially program the robot for a particular place, and a particular path that it had to traverse. Now, there’s a few