Clifford Pickover

a pure accident: “It is evident that the evolution of cognition is neither the result of an evolutionary trend nor an event of even the lowest calculable probability, but rather the result of a series of highly specific evolutionary events whose ultimate cause is traceable to selection of unrelated factors such as locomotion and diet.” If human intelligence is an evolutionary accident, and mathematical, linguistic, artistic and technological abilities a very improbable bonus, then there is little reason to expect that life on other worlds will ever develop intelligence as far as we have. 

Both intelligence and mechanical dexterity appear to be necessary to make radio transmitting devices for communication between the stars. How likely is it that we will find a race having both traits? Very few Earth organisms have much of either. As evolutionary biologist Jared Diamond has suggested in Natural History, those that have acquired a little of one (smart dolphins, dexterous spiders) have acquired none of the other, and the only species to acquire a little of both (chimpanzees) has been rather unsuccessful. The most successful creatures on Earth are the dumb and clumsy rats and beetles, both of which found better routes to their current dominance. If we do receive a message from the stars, it will undermine much of the current thinking about evolutionary mechanisms.

You asked why brains are increasing in size and complexity over time. The obvious answer is that size and complexity may require time, just like other natural phenomena. For example, huge termite mounds, ant colonies, river branching patterns, and deep canyons exist, but time is required for their development. But I don’t believe that huge, complex brains are necessarily inevitable on a planet, even given a few billion years of evolution.

David: Do you think that consciousness is created by the brain, or do you think that it can exist in life forms without a nervous system, such as plants?

Clifford: The question is difficult to answer until you define the word ‘consciousness’. If it simply means goal-seeking or avoidance behavior, many primitive animals and plants display it. Also, what do you mean by life forms? Consider hive minds on Earth, which we can explain using a termite analogy. Even though an individual component of the hive mind is limited, as a termite has limited capacity, the entire collection of components displays emergent behavior and produces intelligent solutions. Termites create huge, intricate mounds, taller than the Empire State Building relative to their own height. These termites control the temperature of the mound by altering its tunnel structure. Thus, the component termites come together to create a warm-blooded super-organism. Is the hive conscious even if its components are not?

David: What are your thoughts on artificial intelligence, and do you think a computer can ever become conscious?  

Clifford: Some say that computers can never have real thoughts or mental states of their own; they can merely simulate thought and intelligence. If such a machine passes the famous Turing Test, a test to see whether you can have a conversation with the machine and find it indistinguishable from a human, this only proves that it is good at simulating a thinking entity. Holders of this position also sometimes suggest that only organic things can be conscious. If you believe that only flesh and blood can support consciousness, then it would be very difficult to create conscious machines. However, there’s no reason to exclude the possibility of non-organic sentient beings. Someday, we’ll all have a Rubik’s-cube-sized computer that can carry on a conversation with us in a way that is indistinguishable from that of a human. I call these smart entities Turing-beings, or Turbings. If our thoughts and consciousness depend not on the actual substances in our brains but rather on the structures, patterns, and relationships between parts, then Turbings could think. If you could make a copy of your brain with the same structure but using different materials, the copy would think it was you.

At the more liberal end of the spectrum are those researchers who say that passing a Turing test suffices for being intelligent. According to this way of thinking, a machine or being able to respond to questions in the sophisticated ways demanded by the Turing test has all the properties necessary to be labeled intelligent. If a rock could discuss quantum mechanics with you in a seemingly intelligent fashion, the rock would be intelligent. If the thing behaves intelligently, it is intelligent. When a human no longer behaves intelligently (e.g., through brain damage or death), we say there is no longer any mind in the body, and the being has no intelligence.

We may like to digress and consider what it means for an intelligent machine, being, or Turbing to actually ‘know’ something. There are many kinds of knowledge the being could have, which makes discussions of thinking things a challenge. For example, knowledge may be factual or propositional: a being may know that the Peloponnesian War was fought by two leading city-states of ancient Greece, Athens and Sparta. Another category of knowledge is procedural: knowing how to accomplish a task such as playing chess, baking a cake, making love, performing a Kung Fu block, shooting an arrow, or creating primitive life in a test tube. However, for us at least, reading about shooting an arrow is not the same as actually being able to shoot an arrow; this second type of procedural knowing implies actually being able to perform the act. (One might wonder what it would mean for a machine to have ‘knowledge’ of sex and other physical acts.) Yet another kind of knowledge deals with direct experience. This is the kind of knowledge referred to when someone says, “I know love” or “I know fear”.

Let’s sum up. On one side of the discussion, human-like interaction is quite important for any machine that we would wish to say has human-like intelligence. A smart machine is less interesting if its intelligence lies trapped in an unresponsive program, sequestered in a kind of isolated limbo. As computer companies begin to make Turbings, the manufacturers will probably agree that intelligence is associated with what we call ‘knowledge’ of various subjects, the ability to make abstractions, and the ability to convey such information to others. As we provide our computers with increasingly advanced sensory peripherals and larger databases, we will likely come to think of these entities as intelligent.

Terence McKenna thought that ‘alien life’ would first appear to us in the form of computer consciousness. He said in the May 2000 issue of Wired: “Part of the myth of the alien is that you have to have a landing site. Well, I can imagine a landing site that’s a Web site. If you build a Web site and then say to the world, ‘Put your strangest stuff here, your best animation, your craziest graphics, your most impressive AI software,’ very quickly something would arise that would be autonomous enough to probably stand your hair on end. You won’t be able to tell whether you’ve got code, machine intelligence, or the real thing.”

According to Chworktap, the beautiful female protein robot in Kilgore Trout’s Venus on the Half-Shell, “Anything that has a brain complex enough to use language in a witty or creative manner has to have self-consciousness and free will.” I don’t know if I agree entirely with Chworktap, but I see no reason that consciousness would be limited to organic life forms. Certainly within this century, some computers will respond in such a way that anyone interacting with them will consider them to be conscious. These entities will exhibit emotions. Over time, we will merge with these creatures. We will become one. We will download our thoughts and memories to these devices. Our organs may fail and turn to dust, but we will survive.

David: When I interviewed Ray Kurzweil for this book, he told me he thought it would be around thirty years before we had computers that exceed human intelligence. How long do you think it will be before computers exceed human intelligence (if, indeed, you think such a thing is possible), and how do you think the world will change if and when they do?

Clifford: Computers, or computer/human hybrids, will surpass humans in every area, from art and creativity to intelligence. I do not know when this will happen, but I think it very likely to happen in this century, in the same way that we will become immortal in this century because we will fully understand the biological basis of aging. Of course, computers already exceed human ‘intelligence’ when it comes to winning at chess or solving certain mathematical problems. I see no reason why these basic skills won’t gradually metastasize into other areas like painting, music, and literature.
