six months, then if it fails once, that’s okay. It becomes part of the family, and it earns a sick day. (laughter)
So that seems to be the practical target: the reliability needed to get a robot that's commercially successful, or acceptable, is about six months between failures. The two-dimensional mapping robots just don't seem to be able to do that. From the time that we came up with this grid-mapping idea in the early Eighties, I've always wanted to do it in three dimensions. The only problem is that with a three-dimensional map you'd have about a thousand times as many grid cells as in a two-dimensional map, not only because you have the third dimension, but also because you almost certainly want the cells smaller. In two dimensions the world is fuzzy, because things like door knobs stick out of the wall, depending on whether you look just above the door knob, or right at it.
You see something, or you don't, and that happens all over the place. The world is sort of bumpy when the slice that you're looking at is broad, so there's no point in making the grid cells much smaller than about six inches for the two-dimensional maps. But in three dimensions the world is consistent, and you probably want cell sizes more like a centimeter or two, because then a lot of things become possible. In fact, in our experiments, the smaller the cell size, the better everything works.
The only problem is that the amount of memory you need, and the amount of computation, goes up rapidly as the cell size goes down. Each halving of the cell size in a three-dimensional map increases their number eight-fold, because if you take a cube and divide it in half horizontally, then vertically one way, and then the other way, you get eight small cubes.
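The halving arithmetic can be sketched in a few lines of Python. The room extent and cell sizes below are illustrative numbers chosen for the example, not figures from the talk:

```python
def cell_count(extent_m, cell_m, dims):
    """Number of grid cells covering a square (dims=2) or cube (dims=3)
    of side extent_m, at a cell size of cell_m."""
    per_axis = extent_m / cell_m
    return round(per_axis ** dims)

# A hypothetical 10 m x 10 m floor slice at six-inch (~15 cm) cells:
cells_2d = cell_count(10.0, 0.15, dims=2)   # ~4,444 cells

# The same 10 m extent as a 3-D volume at 2 cm cells:
cells_3d = cell_count(10.0, 0.02, dims=3)   # 125,000,000 cells

# Halving the cell size in 3-D multiplies the count by 2**3 = 8:
assert cell_count(10.0, 0.01, dims=3) == 8 * cells_3d
```

With these made-up numbers the 3-D grid has tens of thousands of times as many cells as the 2-D one; the talk's "about a thousand times" refers to the more conservative extensions actually attempted.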
So even the most conservative extension into 3-D pretty much multiplied the number of cells by a thousand, and we were just barely able to do the 2-D maps. It looked like it was going to be a thousand times as much computation to do 3-D. I really, really wanted to do it though, and wanted to at least get some experience doing it, even if it wasn’t going to be practical for another ten years.
In 1992 I went and did a sabbatical at Thinking Machines Corporation in Cambridge, near Boston, to use their supercomputers. They were doing real well at that time. Now all their business has been taken away by big companies like IBM, and there's not much left of them. But at the time they were making these supercomputers that consisted of a whole lot of small computers wired together in a big network. You could have as many as a thousand of the conventional small computers, and thus could have about a thousand times as much computer power as is typical.
Instead of using their supercomputers a lot when I got there, I ended up finding a series of about six tricks, some economies of scale, some ways of doing things outside of the main computationally intensive loops, that together resulted in a program that was actually about a hundred times as fast and efficient as I thought it was going to be. So this factor of a thousand wasn't really such a problem any more. On top of that I found that the workstation I was working with could do about twenty million instructions per second, whereas the mainframe that I'd been using back at Carnegie Mellon, which was sort of old, could only do a million. (laughter)
So my computer speed had basically multiplied twenty-fold, and my program had multiplied a hundred-fold. Together I had a thousand already in my hands, and I basically had a program that could build three-dimensional maps. It was already fast enough for research, although not quite ready for commercial use. It was only the core of the code though. Then there were distractions for the next few years doing various other things, including the books.
In '96 I did another sabbatical in order to concentrate on the next step. I built a front-end for the 3-D grid program that took stereoscopic views, and found about