
Bruce Sterling

they’ll dominate politically and simultaneously implode. In which case they’re just an inherently unstable way to try and run human affairs. And I suspect that’s the case. The instability is in Schumpeterian “creative destruction” and it might even be construed as a virtue.

David:  Speaking of things being unstable, in your book Tomorrow Now, you talk about how a permanent state of disequilibrium can be a very creative space. Can you explain why you think this is so?

Bruce: Well, it’s not like we’re getting a choice. (laughter) A human body is in a permanent state of disequilibrium. You’re conceived. You grow. You’re born. You age. You move through time, and you perish. And you never get to stop in the middle, just because you’re happy right where you are, right? I mean, time flows through your being, and you don’t get to opt out of that process. Human life is a permanent disequilibrium. And what you have to do is, get up in the morning, and keep putting one foot in front of another. (laughter) I think that’s just a fact of life about the human condition, and the social condition as well. In writing about permanent disequilibrium, what I was trying to do was get people used to the idea, so that they’d be willing to confront the truth there, and not feel some kind of existential horror about it. There needs to be some way for us to celebrate the truth without flinching, and deal with it as effectively as we can.

Whenever somebody comes up with some weird, static notion, like this poor Fukuyama guy who wrote his book about “the end of history” way back in 1992–well, history immediately continues, which makes him look rather foolish. It’s very disillusioning to people when these grand, static formulations break down. You’d be a lot better off by realizing that history doesn’t end–authors end, okay? History flows through people and renders us mortal.  Francis Fukuyama would have been a lot better off writing a magisterial book called “The End of Myself.”  That premise would have been a lot clearer. But unfortunately, he got kind of carried away with his methods of analysis there, and he didn’t realize that the clock continues to tick, even though he really thinks he’s got the grand scheme figured out. We’re never going to get history figured out, although people always aspire to do that in some way.

Some particular zealots, having figured out our sublime and holy destinies, immediately want to go out and violently convince everybody else. That really causes a lot of unnecessary grief. What we really need is something more along the lines of a Soros “Open Society”–a society and a political arrangement, and an economic arrangement, an educative arrangement, and so forth, that just takes it for granted that our knowledge is inherently imperfect and we’re going to keep making new mistakes. We’re never going to get perfect, permanent answers. Not utopias. Not dystopias. Not the greatest, grandest ways to run things. Not so-called solutions to so-called problems. But decent, liveable spaces in which we’re able to make fresh mistakes. That’s the best we should ask for, and that’s the best we should be willing to ask for. And that is permanent disequilibrium.

David:  When I interviewed Ray Kurzweil for this book he told me that he thought that it would be around 30 years before we had computers that exceed human intelligence. How long do you think it will be before computers exceed human intelligence, and how do you think the world will change when they do?

Bruce: I think that whole idea is absurd. This is like asking, when do you think a 747 will lay better eggs than an owl does? Computation has very little to do with what human brains do. We’re abusing the term “intelligence” as a kind of smear-over, conflating term to try to unite cognition and computation. I don’t believe those are even very closely allied things. I suspect that they’re profoundly different phenomena.

David:  What do you think about the notion of artificial intelligence in general, and about the possibility of reverse engineering the human brain?

Bruce: I think those are two different fields. I think you’re very likely to see very powerful machines that are capable of very sophisticated forms of processing, but their activities are going to look less and less like “intelligence.” I mean, there’s just no good technical reason for these machines to behave in a way that resembles human cognition. That’s like asking, why won’t this jet flap its wings? Well, a jet doesn’t need to flap wings. There are no really big ornithopters. There’s no good engineering reason to make an ornithopter. There are no birds that are as big as 747s because flexing wings can’t support that much weight in the air. So you can call what a jet does “flying,” and you can call what a bird does “flying,” but, in point of fact, although they’re bound by some similar laws of aerodynamics, they don’t scale, one to another.

The things computers can do that are interesting have very little to do with Turing tests, relating to people, talking, or understanding natural language. I’m very aware that there are a lot of people in the Kurzweil school who really want to scoop out their skull cavity and install a lot of silicon in there. And they believe this is really some kind of neat hack. But it’s absurd. It’s like sawing off your legs and grafting on roller skates. I think they’re operating under a basic misapprehension. And I don’t worry much about the issue–I think time is already telling, there. I never argue with hard AI guys because they’re more set in their ways than the Jesuits. It’s theological, it’s blue-sky handwaving, it’s not practical. In the short term, there’s very little that a Kurzweil-style AI machine could do that anybody would be willing to pay for or finance.  And in the long term, we really will be reverse-engineering the human brain rather than trying to mimic it in silicon, and that’s a whole different story.

David:  What do you think some of the most important technological advancements of the 21st Century will be, and how do you think the world will change as a result?

Bruce: Well, I’m kind of a Ubi-Comp groupie. I don’t think the trend is about super intelligence. I think it’s about distributed sensors. Like, for instance, I don’t want a “smart” refrigerator. I don’t want to converse with the Turing refrigerator. I don’t want to have it sell me anything. But I would like it to be very efficient, and I would like it to know what’s inside it. That would be handy. I’d like to be able to poll my refrigerator by cellphone from downtown: is there milk? Yes. How much? So much. How old is it? It’s smelling pretty funny. Thank you, refrigerator. Now that should be like a set of text messages.

So I don’t really need Jeeves the butler to be in my refrigerator. I don’t need human interaction with mechanical devices. That’s like putting a wooden horse’s head on the front of an automobile. I think the most important technologies of the 21st Century are going to be whatever technologies allow us to keep nine and a half billion people on the planet, without drowning in our own spew–whatever allows us to feed and educate people, and keep the plagues at bay, while we’re doubling our numbers, and causing a really serious biosphere problem. Those are the big issues, the big opportunities.

David:  What do you think is the biggest threat to the human species?

Bruce: The greenhouse effect.

David:  Are you optimistic about the future, and do you think that the human race is going to survive the next hundred years?

Bruce: People always ask me that, but I think it’s a bad question to ask. Are you optimistic about the future? Or are you pessimistic about the future? Nobody would ask that of somebody who was studying the 18th Century instead of the 21st. Like, are you optimistic about the 18th Century? Or pessimistic about it? I try not to allow a set of emotional attitudes to put blinkers on me. I mean, if I were optimistic about the 18th Century I could go and write a history of the 18th Century that said, in the 18th Century things were great! And if I were a pessimist, I could say, the 18th Century was a living hell! But, in point of fact, the 18th Century was both at once, depending on circumstances and point of view. And every other century has always been both at once. So I’m inclined to think that most future centuries will also be both at once, and that questions like, “are you an optimist? Or are you a pessimist?” are just an invitation to ignore a lot of the evidence.

David:  Do you think that the human species will survive the next hundred years, and if so, how do you envision the future evolution

