
Ray Kurzweil

hundred thousand chunks of knowledge, or a hundred thousand patterns. He’ll look at a board position, and he’ll actually compare that in parallel to all hundred thousand board positions that he has studied. Then he’ll sort of instantly think to himself, oh yeah, this is just like that board position that Grand Master So-and-So encountered three years ago, when he forgot to protect his trailing pawn. I better remember to do that. He’s making that pattern-recognition decision, comparing it to all these hundred thousand boards that he knows about in parallel, and it’s not a logical decision. It’s using something called chaotic computing, where none of the specific neurons being used is particularly important. It’s a whole pattern of activity.

It’s analogous to a hologram, in which none of the specific pixels in the hologram is important. You can scratch a hologram, and when you use it, you still get a perfect picture without a scratch. You can actually cut it in half, and you still have a perfect picture, only at half the resolution. All of the information is diffusely distributed throughout the whole pattern. Similarly, in the brain, none of our neurons is all that important, and, with only a few exceptions, none of the connections is all that important. It’s the whole pattern of activity that’s important, and that is quite different from our computers. If you open up your computer and cut one of the wires, you’re very likely to break it. If one of our wires, one of our internal connections in the brain, dies or is severed, generally that has absolutely no effect. It’s like eliminating one pixel in a hologram. It’s insignificant.

However, we are beginning to move in that direction with our machines. There’s no reason why we can’t build our machines the same way, and this approach of holographic-like, chaotic computing to achieve pattern recognition is, in fact, the right way to approach this type of intelligence. That’s really the heart of human intelligence, and it’s going to be a great trend in the future.
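As a loose illustration of the two ideas above, parallel matching against many stored patterns and robustness when individual components are knocked out, here is a minimal toy sketch. It is not Kurzweil's model or any particular neural architecture; it just stores patterns as high-dimensional random codes and shows that nearest-neighbor matching still recovers the right stored "board position" after half of the query's components are wiped out.

```python
import numpy as np

# Toy demonstration (an assumption-laden sketch, not Kurzweil's method):
# information spread across a high-dimensional pattern survives losing
# many individual components, like scratching or halving a hologram.
rng = np.random.default_rng(0)

D, N = 1000, 200                                 # dimensionality, number of stored "board positions"
memory = rng.choice([-1.0, 1.0], size=(N, D))    # stored patterns, one per row

query = memory[42].copy()                        # a position seen before
damaged = query.copy()
damaged[rng.random(D) < 0.5] = 0.0               # wipe out ~50% of the components

# Compare against every stored pattern at once (one matrix product),
# then take the best match: nearest neighbor by correlation.
scores = memory @ damaged
best = int(np.argmax(scores))
print(f"Best match: pattern {best} (expected 42)")   # still recovers pattern 42
```

The point of the sketch is only that no single component carries the match; the decision comes from the whole pattern of agreement, so deleting any individual element barely matters.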

David: How do you define intelligence?

Ray: Intelligence is the ability to solve problems in an optimal way, using limited resources. One of those limited resources is time. In my last book I talked about the intelligence of evolution, which you might think is extraordinarily intelligent because of its remarkable designs, like human beings, but it took a very long time to create them. Actually, its intelligence is just barely above zero, just enough to create some extraordinary designs over huge eons of time.

But the amount of time that you take to solve a problem is relevant to the level of intelligence. If you suddenly notice some crouched creature on a tree limb to your upper right, and it’s about to take a leap at you, that presents a challenge to your survival. If you can solve that problem in a few seconds, that’s a lot better than taking four or five hours to solve it. You probably don’t have four or five hours. As we go through life, we’re constantly challenged with problems, big and little, and the amount of time we take to solve them, as well as the amount of other resources that we use, such as money and computation, things that are in limited supply, is, of course, very important.

David: How long do you think it will be before computers exceed human intelligence, and how do you think human life will change after we have computers that are smarter than people?

Ray: There are two parts to achieving machines that will match human intelligence. One is the hardware requirement, and the other is the software. In my book four years ago, The Age of Spiritual Machines, I said we’d have the requisite hardware capability by 2019 in a personal computer, or at least in $1000 of computation. Then people went around saying, well, Ray Kurzweil says that when we have machines whose hardware capacity matches the human brain, we’ll just automatically have human-level intelligence, and that’s never been my position. That’s only one part of the equation, although even that part, the hardware part, was controversial four years ago. I would say there’s been a real sea change in attitude about the hardware.

Now, generally, the mainstream opinion is, oh, of course we’ll have the hardware. That’s very obvious. There’s been enough progress in three-dimensional molecular computing over the last four years, and there’s a great deal of confidence that we’ll have three-dimensional molecular circuits before Moore’s Law runs out with regard to two-dimensional circuits. The controversy has now shifted to the software question. Okay, we’ll have these very powerful machines, but will they just be extremely fast calculators, or will they be intelligent? There’s more to human intelligence than just the fact that we have a hundred trillion inter-neuronal connections that can compute simultaneously. The way they’re organized, and the whole paradigm of their operation, is very important.

There are many different ways in which we might achieve the software of intelligence. One is through our own continued experimentation and development of, in particular, pattern recognition algorithms, which are getting more and more powerful. But the most important project is the project to understand the human brain itself, and to reverse-engineer its principles of operation. And it’s not hidden from us. Brain scanning is growing exponentially. It’s doubling in resolution, price-performance, and bandwidth every year. We are getting more and more detailed models of the brain every year. Already there have been detailed mathematical models of a couple dozen regions of the brain, and those have been replicated in software. Those software systems have been given psychoacoustic tests, because these are replications of acoustic areas of the brain, parts of the brain that deal with acoustic information, and they performed similarly to the actual brain circuits.

We’ve shown that it’s feasible to develop these mathematical models. For years some scientists, like Doug Hofstadter, for example, wondered: are we inherently intelligent enough to understand our own intelligence? Maybe our intelligence is below the threshold needed to understand our intelligence. Is the way our intelligence works so complicated and profound that it’s beyond human intelligence to understand it? What we’re finding is that that’s really not the case. The brain is, first of all, not just one big information-processing organ. That’s like saying the body is one organ. If you want to understand how the body works, you have to break it down into the heart, the liver, the lungs, and then you can begin to understand each region separately, because they each use different principles of operation. The brain comprises several hundred different regions, and each one actually works quite differently. It represents information differently, transforms it differently, and uses a different principle of operation. But we’re finding that we can actually model these quite precisely.

We haven’t yet done this with the vast majority of regions, but we’re showing that the process is feasible, and we’re getting more and more powerful tools. The tools to do this are themselves growing exponentially in power. It’s very comparable to where the Genome Project was maybe ten years ago. At that time mainstream scientists were saying, oh, it’s going to be hundreds of years before we are able to sequence the whole genome, because, at the speed we’re going, we can only do one ten-thousandth of the job in a year. But there, too, the bandwidth, the speed, and the price-performance doubled every year, so most of the project was done in its last couple of years. We’ll see a similar process with reverse-engineering the brain. So we will understand the brain, and have detailed models of the several hundred regions, in the 2020s. These will provide us with the ideas, the templates of intelligence. We’ll modify them. We’ll re-engineer them.
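To see why most of the work lands at the end under yearly doubling, here is a minimal sketch of the arithmetic. The 1/10,000-per-year starting rate is the figure quoted above; everything else is just compounding, an illustration rather than anything stated in the interview.

```python
# Illustrative sketch (assumption: capacity starts at 1/10,000 of the job
# per year and doubles every year, as in the genome example above).

def doubling_schedule(initial_fraction=1e-4, doubling=2.0):
    """Yearly work fractions until cumulative work covers the whole job."""
    done, rate, yearly = 0.0, initial_fraction, []
    while done < 1.0:
        done += rate
        yearly.append(rate)
        rate *= doubling
    return yearly

yearly = doubling_schedule()
done_before_last_two = sum(yearly[:-2])
print(f"Years to finish: {len(yearly)}")                                # 14
print(f"Done before the final two years: {done_before_last_two:.0%}")   # ~41%
print(f"Completed in the final two years: {1 - done_before_last_two:.0%}")  # ~59%
```

Under these assumptions the job finishes in about fourteen years, and well over half of it is completed in the final two, which is the shape of the Genome Project timeline described above.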

One thing we discover when we start to understand how natural systems work is that very often they’re very inefficient. I mentioned already that our internal, inter-neuronal connections compute 200 calculations per second, using a very, very complex, cumbersome electrochemical system. We already have circuitry that can operate many millions of times faster. So as we re-engineer it, we can greatly improve its performance. So I believe we’ll have the hardware and the software by the late 2020s. I’ve said 2029 as a reasonable date, at which point I think machines will match the subtlety and range of human pattern recognition, and human intelligence in general. And once a machine achieves human levels of intelligence, it will necessarily soar past it, because it will be able to combine the subtlety and flexibility of human intelligence with the ways in which machines are inherently superior: being able to share knowledge, operate a lot faster, always perform at peak performance, and master billions and trillions of facts accurately. They’ll be able to go out on the Web and read and master essentially all of the human-machine civilization’s knowledge.
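As a rough back-of-envelope reading of the two figures mentioned in the interview (a hundred trillion inter-neuronal connections, each computing on the order of 200 calculations per second), the implied aggregate rate is about 2 × 10^16 calculations per second. The sketch below just multiplies those two quoted numbers; it is an illustration, not a claim made in the text.

```python
# Back-of-envelope estimate using the two figures quoted in the interview.
# These are illustrative round numbers, not measurements.
connections = 1e14          # "a hundred trillion inter-neuronal connections"
calcs_per_connection = 200  # "200 calculations per second" per connection

total_calcs_per_second = connections * calcs_per_connection
print(f"Implied aggregate rate: {total_calcs_per_second:.0e} calculations/second")
# -> 2e+16 calculations/second
```

That aggregate figure is what the hardware argument above is about: circuitry that runs millions of times faster per connection makes matching, and then exceeding, this overall rate a matter of scale rather than principle.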

I think the primary impact is not going to be a set of alien machines that come from over the horizon to compete with us; rather, we’re going to merge with our machines. There’s not going to be a clear distinction between human and machine. This issue we talked about
