
Ray Kurzweil

There are major conferences today on something called Bio-MEMS (Biological Micro-Electro-Mechanical Systems), which are organized to develop the first generation of little devices that would go inside the bloodstream, for a wide range of diagnostic and therapeutic purposes.

For example, something that actually works today is a little device that’s nano-engineered, with seven-nanometer pores. It’s a little capsule that lets insulin out and blocks antibodies. It has actually cured Type 1 diabetes in rats. Since the mechanism of Type 1 diabetes is the same in rats and humans, there’s no reason why this kind of device, once fully perfected, shouldn’t also work in humans.

Now, this is the first generation. I think it’s important to understand–which, actually, very few observers really do understand fully–that these technologies are growing, not linearly, but exponentially. Not only is the power of these technologies roughly doubling every year, but we’re actually doubling the rate of progress every decade. That’s probably an important point we should come back to. But if you look at the trends in the exponential growth of computation, and the exponential shrinking of technology, you’ll see that we’re shrinking both electronic and mechanical technology by a factor of about 5.6 per linear dimension per decade. The same exponential growth shows up in communication technologies.
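As a rough illustration of the arithmetic behind these claims, here is a minimal Python sketch using the figures cited above (yearly doubling of power, a rate of progress that doubles each decade, and a 5.6x shrink per linear dimension per decade). These are Kurzweil’s estimates, not independent data:

```python
# Back-of-the-envelope arithmetic for the growth rates cited above.
# All inputs are the interview's figures (assumptions, not measurements).

# 1) Power doubling every year: growth after n years is 2**n.
years = 25
print(f"Doubling yearly for {years} years: {2 ** years:,}x")  # ~33.5 million-fold

# 2) Rate of progress doubling each decade: one century of calendar time
#    is then equivalent to many "years of progress at the first decade's rate".
progress_years = sum(10 * 2 ** decade for decade in range(10))
print(f"Years of equivalent progress in one century: {progress_years:,}")  # 10,230

# 3) Shrinking 5.6x per linear dimension per decade implies a volume
#    reduction of 5.6**3 per decade.
print(f"Volume shrink per decade: {5.6 ** 3:,.0f}x")  # ~176x
```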

Then there’s the exponential growth in our knowledge of the human brain. We’re reverse engineering the brain, understanding how it works, and getting more and more detailed information about it. We’re doubling the amount of knowledge we have about how the brain works every year. It’s conservative to say that by the late 2020s we will have the following. We’ll have completed a very detailed reverse engineering of the human brain. We’ll have blood-cell-size robotic devices that have considerable intelligence, and that communicate with each other, and with our biological neurons, noninvasively.

So we could then have the following scenario. We could send millions, or billions, of nanobots noninvasively through the bloodstream, by injecting them or swallowing them, and they’ll make their way into the brain, without surgery, through the capillaries of the brain. The capillaries go to every single spot in the brain, so these nanobots can be widely distributed, and the limitation I mentioned before, of only being able to put a device in one place, will be overcome.

They could be in billions of places. They could be introduced noninvasively, and they will communicate with each other. They’ll be on a wireless local area network that could communicate with the internet, which can communicate noninvasively with our biological neurons. All of these capabilities have already been demonstrated on small scales. We can’t yet build these devices small enough to put billions of them in the bloodstream. But that’s a conservative scenario based on what I call the Law of Accelerating Returns–which is the exponential shrinking of the size of these technologies, and the ongoing exponential growth in computational capacity, which is the power of these underlying technologies.

We then have a number of scenarios that would follow from that. For example, we could have full-immersion virtual reality, where the nanobots can shut down the signals coming from our real senses and replace them with the signals your brain would be receiving if you were in the virtual environment. Then your brain would feel like it’s in that virtual environment, and these environments can be as realistic, detailed, and compelling as real reality.

You could go there by yourself, or with other people, and have any kind of experience with anyone in these virtual environments. Some of these virtual environments can be realistic recreations of earthly places, like taking a walk with someone on a Mediterranean beach. Some can be fantastic imaginary environments that don’t exist on Earth, and couldn’t exist. They may not follow the laws of physics. Designing new environments, just like designing new game environments today, will be a new art form. You can incorporate all five of our senses in these virtual environments. They can also include the neurological correlates of our emotions. So that’s one application.

Another application, really the most profound one, is to extend human pattern recognition and cognitive ability. Even though we’re constantly rewiring our brain, and we grow new connections all the time, it has a fixed architecture. Rewiring our brain is part of the basic method we use to learn, and using our brain is the best way to keep it healthy, but we’re limited to a fixed architecture, on the order of a hundred trillion connections. While that might sound like a big number, human bandwidth is pretty limited.

For one thing, those connections are very, very slow. The signaling is electrochemical, which is a little bit different from a digital calculation. It’s really a digitally controlled analog transaction, but it runs at about 200 calculations per second. That’s about a hundred million times slower than today’s electronic systems, and close to a billion times slower than would be feasible with nanotube-based circuitry. A one-inch cube of nanotube circuitry would be a million times more capable than the 100 trillion connections in the human brain.
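A quick sanity check on those numbers, as a sketch using only the figures quoted above (again, Kurzweil’s estimates rather than measurements):

```python
# Multiply out the capacity estimate quoted above: ~100 trillion
# connections, each performing ~200 calculations per second.
connections = 100e12
calcs_per_connection = 200

brain_cps = connections * calcs_per_connection
print(f"Estimated brain capacity: {brain_cps:.0e} calculations/second")  # 2e+16

# Electronics are cited as ~1e8 times faster per operation, nanotube
# circuitry as ~1e9 times faster; the per-connection gap is then:
print(f"Electronic equivalent: {calcs_per_connection * 1e8:.0e} ops/second")
print(f"Nanotube equivalent:   {calcs_per_connection * 1e9:.0e} ops/second")
```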

So we will be able to expand human bandwidth by intimately interfacing with these nanobot-based computational systems. We’ll be able to add new connections. Instead of having a hundred trillion connections, you could multiply that. You could run these new virtual connections faster. You could interface with non-biological forms of intelligence, have direct brain-to-brain communication, and download knowledge. That’s one thing machines can do even today–simply share their knowledge bases–whereas I can’t download my knowledge of French, or my knowledge of the novel War and Peace, to you. I can communicate with you. I can share knowledge using language, which is very, very slow, but I can’t just download my pattern of neurotransmitter concentrations and inter-neuronal connections to you, whereas machines can do that.
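As a toy illustration of what “simply sharing knowledge bases” means for machines (the file name and contents here are hypothetical, purely for illustration):

```python
import json

# One machine serializes everything it has learned...
knowledge_base = {
    "french_vocab": {"bonjour": "hello", "merci": "thank you"},
    "model_weights": [0.12, -0.57, 0.98],
}
with open("shared_knowledge.json", "w") as f:
    json.dump(knowledge_base, f)

# ...and another loads it directly, acquiring the same "knowledge"
# in milliseconds. Brains have no equivalent transfer channel.
with open("shared_knowledge.json") as f:
    copied = json.load(f)

assert copied == knowledge_base
```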

These are scenarios that will begin to be feasible in the late 2020s. I believe that 2029 is a reasonably conservative target date for machines to pass the Turing Test and achieve human levels of intelligence. They’ll then be able to combine those benefits with advantages that machines already have, in terms of speed, memory capacity, knowledge sharing, and so on.

David: AI systems can’t make decisions based on probability because they are based upon binary either/or circuitry. Do you think that we need to incorporate a way for computers to access probability, so that they can make decisions in a way that is more similar to humans?

Ray: There’s a valid point underlying that criticism although, as stated, it’s really not accurate. I mean, machines can deal with probabilities very well. They can deal with many different gradations of possibility other than true or false. Some people have said, well, the brain’s analog and computers are digital, and that’s why machines will never emulate the human brain. But we know that digital systems can simulate analog systems to any level of precision that we’d like, and, moreover, electronics can be analog. In fact, transistors are basically analog devices–we had to add thresholding circuitry to make them digital, but they actually start out as analog. People like Carver Mead at Caltech have pioneered analog circuits designed specifically to emulate brain circuitry, and to emulate it more closely to the way the brain actually works.
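A minimal sketch of the precision point: quantization error shrinks by half with every added bit, so a digital representation can approximate an analog signal as closely as desired (a toy example, not a model of neural circuitry):

```python
import numpy as np

t = np.linspace(0, 1, 1000)
analog = np.sin(2 * np.pi * 5 * t)  # stand-in for an analog signal

for bits in (4, 8, 12, 16):
    levels = 2 ** bits
    # Quantize [-1, 1] to `levels` steps and map back.
    digital = np.round((analog + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
    max_err = np.max(np.abs(analog - digital))
    print(f"{bits:2d} bits: max error = {max_err:.2e}")  # halves per added bit
```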

What I do think we need to do is move towards a completely different paradigm of intelligence. Rather than analyzing things logically, I think we need to use pattern recognition, and that’s actually my technical field. I’ve been devoted to pattern recognition since high school–so for about forty years now. Pattern recognition works differently than logical analysis. The very early computers in the 1950s actually did a good job of being mathematicians, and proving mathematical theorems. Computers are very good at playing games like chess, and doing these kinds of logical calculations. What has been more difficult, although we are making good progress, is pattern recognition tasks–like recognizing faces, and recognizing speech sounds–and that is actually the core strength of human intelligence.

Let’s take an example, and look at how a human plays chess compared to how a computer plays chess. This is not to say that chess is necessarily an exemplar of the depth of human intelligence, but it’s a good example of the contrast I’m talking about. A machine can actually calculate billions of move/countermove sequences, and it can consider billions of different board positions every time it moves. Deep Blue, for example, was able to consider 200 million board positions per second. Kasparov was asked, how many can you consider a second? And he said less than one. So how is it that he can hold up to a machine at all, if he considers less than one a second, and the machine can do a hundred million, or two hundred million, a second? It’s because of his depth of pattern recognition.
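The raw-search gap can be made concrete with the figures quoted above, plus chess’s commonly cited average branching factor of about 35 legal moves per position (the branching factor is an assumption, not taken from the interview):

```python
# Deep Blue vs. Kasparov, using the numbers quoted above.
machine_positions_per_sec = 200_000_000  # Deep Blue, as cited
human_positions_per_sec = 1              # "less than one" per second

print(f"Search-speed ratio: ~{machine_positions_per_sec // human_positions_per_sec:,}x")

# Brute-force search grows as branching_factor ** depth, which is why
# even 200 million positions/second buys only a few extra plies.
branching_factor = 35  # assumed average for chess
for depth in range(1, 7):
    print(f"depth {depth}: ~{branching_factor ** depth:,} positions")
```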

He has studied a hundred thousand board positions. That’s actually a real number. We find that human experts, in a field of expertise, actually master about a hundred thousand such patterns, or chunks, of knowledge.
