When you go through abstract reasoning, the main way you make the reasoning abstract is by leaving out certain details, using only selected aspects of the world situation and deriving results from there on the basis of rules of inference.
But sometimes you leave out the critical things, and it’s not obvious at first that what you left out was important. So the intermediate results of chains of reasoning can be brought back and instantiated in the simulation of the world, to see if they actually make sense there. If some chain of reasoning has led the robot to believe that you could support, let’s say, a glass by standing it on a broom, then trying that in a simulation would show that in fact it doesn’t work. The thing will always fall down. Since every instance tried in the simulation fails, the robot could then just disregard that particular chain of inference, and save itself a lot of effort coming up with more derivations from it that would be similarly nonsense.
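The pruning idea Moravec describes can be sketched in a few lines. This is a toy illustration, not a real physics simulation; the function name, the jitter model, and all the numbers are invented for the example.

```python
import random

def stays_upright(object_base_width, support_width, trials=100):
    """Crude stability test: in each simulated trial, the object stays up
    only if the (slightly jittered) support is at least as wide as the
    object's base. Returns how many trials succeed."""
    random.seed(0)  # deterministic for the example
    survived = 0
    for _ in range(trials):
        jitter = random.uniform(-0.005, 0.005)  # 5 mm of simulated noise
        if support_width + jitter >= object_base_width:
            survived += 1
    return survived

# Inference chain claims: a glass (6 cm base) can stand on a broomstick
# (2.5 cm wide). Every simulated instance fails, so the whole chain of
# inference can be discarded before any further derivations are made.
failures = stays_upright(0.06, 0.025)
print(failures)  # 0 of 100 trials succeed -> prune this line of reasoning
```

The point is not the physics, which is deliberately crude, but the control strategy: one cheap concrete test vetoes an entire branch of expensive symbolic reasoning.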
There were programs in the past, in the Sixties, that were able to do things like that in much simpler domains. One of the best is the geometry theorem-proving program of Herbert Gelernter. He wrote a program that did formal inference, going from Euclid’s propositions and proving theorems. But as it did each step in such a proof, it also in parallel drew the equivalent of a diagram, actually using analytic geometry: representing points as two numbers, with X and Y coordinates, and lines as pairs of such points. It tested whether two lines intersected by doing the appropriate arithmetic with the coordinates of the endpoints, and whether two lines had the same length by calculating the square root of the sum of the squares of the differences in X and Y.
It checked, within numerical accuracy, whether the things it was trying to prove, such as that line A equals line B in length, were actually true in the particular instance it was drawing. The drawings were only approximate, since the numbers were not computed to infinite precision, so if two lines were the same, they had to be the same within six decimal places or whatever. If they were, then it was still plausible to continue the proof, trying to prove that they were the same.
But if they were not the same in the diagram, then there was obviously no point in going on with that line of proof, and that was extremely important. That’s what made Gelernter’s program so good: it was able to prune the vast majority of logical directions, because they didn’t actually work in the specific example, so obviously they were not true in general. The fourth-generation robot, I think, will work that way too, only reasoning about much more complex things, like the physical world around it.
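The diagram-filtering technique described above can be sketched as follows. This is a minimal reconstruction of the idea, not Gelernter’s actual code; the function names, the goal representation, and the tolerance value are all illustrative assumptions.

```python
import math

TOLERANCE = 1e-6  # "the same within six decimal places or whatever"

def length(line):
    """Length of a line given as a pair of (x, y) points:
    the square root of the sum of squared coordinate differences."""
    (x1, y1), (x2, y2) = line
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

def same_length(a, b):
    """Plausibly equal lengths, within the diagram's numerical accuracy."""
    return abs(length(a) - length(b)) < TOLERANCE

def worth_pursuing(goal, diagram):
    """Keep a subgoal only if it holds in the concrete diagram.
    A goal like ('equal_length', 'AB', 'CB') is checked numerically;
    goals we cannot check are kept by default."""
    kind, a, b = goal
    if kind == 'equal_length':
        return same_length(diagram[a], diagram[b])
    return True

# A concrete instance: an isosceles triangle with apex B at (0, 2).
diagram = {
    'AB': ((-1.0, 0.0), (0.0, 2.0)),
    'CB': ((1.0, 0.0), (0.0, 2.0)),
    'AC': ((-1.0, 0.0), (1.0, 0.0)),
}
print(worth_pursuing(('equal_length', 'AB', 'CB'), diagram))  # True: keep proving
print(worth_pursuing(('equal_length', 'AB', 'AC'), diagram))  # False: prune
```

A false numerical check refutes the goal in this instance, and therefore in general; a true one proves nothing by itself, but licenses spending more effort on the formal proof.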
David: The idea that fascinated me the most in Mind Children, was what you said was the inspiration for writing it. You discuss the possibility that, gradually, section-by-section, we may be able to download our personality, memories, and sense of self into a superior electronic computer.
Hans: Yeah, a lot of people like that. I’m actually a little off on that myself, in that it now strikes me as building a car by starting with an ox cart. And the ox cart is us, the old design, going back to the Stone Age. Then you replace the wheels with rubber tires, the ox with a motor, and the sideboards with sheet-metal panels. When you’re all done, you have something better than an ox cart, but you still don’t have a very good car. If instead you were to sit down with a fresh drafting board, or at a design screen, and, using the best engineering knowledge available, design a car from the ground up, you could build a much better car than by upgrading the ox cart.
David: It wouldn’t be a better design, but the idea is that we could transfer ourselves into it.
Hans: I think of that as kind of a frivolous thing to do. I mean, we’ll do it probably, but it will be like a tourist thing. It’ll be like a love