» DESIGNING OUR OWN DEMISE: An interview with robotics expert Hans Moravec

interview originally conducted for Britannica.com, by Brian Awehali

Hans Moravec is a leader in robotics research, founder of the robotics program at Carnegie Mellon University, and the author of several books, including Mind Children: The Future of Robot and Human Intelligence and Robot: Mere Machine to Transcendent Mind.

Moravec firmly believes that machines will acquire human levels of intelligence by the year 2040, and that by the middle of this century they will be our intellectual superiors.

Generally speaking, what is a robot?

Well, there are some industry definitions that describe existing things, but really, for those of us who are less passionate, it’s a machine that does what living things do.

And what was the first real robot?

If you go back before the 20th century, you’d probably want to point to clockwork mechanisms and even industrial machinery. At least those things were animate, which is a very big distinction from things that just sit there.

So the progression from simple tools to complex machines?

To self-powered machinery, whether it’s powered by springs or water or steam. But in the 20th century something new was added: sensors, basically, which allowed the machine to respond to things going on outside of it in a non-trivial way. I guess with mechanical machinery you have levers and things that could sense large forces. But once there was electronics, you could have things that could respond to light or to sound or to pressure.

So the ability for these machines to take in data on the environment?

Right. So I think it’s perfectly fair to call electronically actuated industrial machinery “robots.”

And what was the original meaning of the word “robot”?

Well, “very hard work,” or basically “work,” sort of bordering on slave labor.

To switch gears a bit, why is it so difficult for computer programmers to mimic common sense, or what could be called true artificial intelligence?

Well, I know there are a lot of theories. One glaring reason is simply a matter of scale—basically, to make something like a human brain you require a million times more computing [power] than we have today; at least, for those of us who don’t work at the national labs.

At a hundred trillion calculations per second, I’d say that’s almost enough computing power. And if people were doing artificial intelligence on those machines, then the criticism “it’s not working” would be valid. But actually people are not doing artificial intelligence on them; they’re doing physical simulation.

And in order to do artificial intelligence in robotics, I think, you need a lot of trial and error.

So I think we just have to be a little more patient and wait for the computers to come along. But meanwhile we now have computers that are powerful enough to do the job of at least small nervous systems. My [Macintosh] G4 has, on my retina-based scale, about as much computing power as a guppy. So we should be able to get guppy-like performance out of our robots right now.

So when do you think we might see the first robot possessing artificial intelligence?

Well, computers are doubling in power approximately every year now, so [I think] the answer is about 30 years.
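[To make the arithmetic concrete, here is a back-of-envelope sketch in Python. The million-fold gap is the figure Moravec cites above; the doubling periods are assumptions added for illustration, since “approximately every year” brackets a range.]

```python
import math

# Back-of-envelope timeline: how many doublings close a million-fold
# compute gap, and how long that takes at different doubling rates.
compute_gap = 1_000_000                     # Moravec's figure above
doublings_needed = math.log2(compute_gap)   # ~19.9 doublings

for period_years in (1.0, 1.5):             # assumed doubling periods
    years = doublings_needed * period_years
    print(f"doubling every {period_years} yr -> ~{years:.0f} years to parity")
# Annual doubling gives ~20 years; 18-month doubling gives ~30 years,
# roughly bracketing the "about 30 years" answer given here.
```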

Won’t that require an exhaustive amount of human programming?

Indeed, it is a major effort to create [such a] database. The most comparable thing to date has been the building of the CYC common sense reasoning system.

And what was that?

One of the meanings is the “cyc” sound in the word “encyclopedia.” You know that word, don’t you? [this interview was conducted for the online magazine of Encyclopaedia Britannica]

But also, you know, it sounds like psychology. And this was a project begun, I think, 15 years ago now. It was to be a 10-year project. It was started by Doug Lenat, a Stanford graduate who earlier in his career had written some mathematical reasoning programs.

Later, he wrote a more abstract program about meta-concepts called EURISKO, which figured out heuristics for reasoning programs. Then there was [the CYC] project. He was working at MCC, a consortium of companies for computer research, including some artificial intelligence research.

That’s interesting. A particularly curious point in your book is when you talk about robots being imbued with human values and human feelings. Is this where that comes into play? Because in order for this to work effectively, we have to find a way to give them the ability to understand psychological models, human values, human feelings.

That’s right. To a first approximation, this isn’t that hard. All of these things that I’m talking about have been done to various extents by generations of students, you know, in our robotics labs. So there are toy versions of all of these things. Only, by the time we have such robots, they’ll have to be done really well, much more completely, and on a much larger scale. But in fact, you know, having a logical system that reasons about psychological variables isn’t that hard.

So there are certain cues that you read and then incorporate in a model, [to see] if it’s going to hit me or if it’s going to give me a banana. And as you make the details richer, you get more nuances and subtleties, and that aspect of the simulation will have to be tuned up just like the physical aspect. And again, there are at least toy versions of systems that interact psychologically now.

Actually, there was a system built here—the program was called “Oz.” The creatures living in Oz were called “woggles.” They were rendered on a screen as sort of egg-shaped things with big eyes, and they could interact with each other in the three-dimensional world, and there would, in fact, be attractions and dislikes, and those would register in the way the eyes moved—the way they looked, and you know—the way the pupils changed. And then the woggles would respond to each other. So if this one woggle gave another woggle a dirty look, then there would be variables corresponding to fear and anger and the like, and all of those things which would adjust according to some formulas. And then, depending on the state of the woggle, it would then react and send out vibes. So you could have two that would suddenly hate each other or like each other.
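[A minimal sketch of the kind of psychological simulation described here, in Python. The variable names, cues, thresholds, and update formulas are all invented for illustration; the actual Oz code is not shown in the interview.]

```python
from dataclasses import dataclass

@dataclass
class Woggle:
    name: str
    fear: float = 0.0
    anger: float = 0.0

    def perceive(self, cue: str) -> None:
        # Cues from other woggles adjust internal state by simple formulas.
        if cue == "dirty_look":
            self.fear += 0.3
            self.anger += 0.5
        elif cue == "friendly_look":
            self.fear = max(0.0, self.fear - 0.2)
            self.anger = max(0.0, self.anger - 0.4)

    def react(self) -> str:
        # The adjusted state determines the "vibes" sent back out.
        if self.anger > 0.4:
            return "dirty_look"
        if self.fear > 0.4:
            return "cower"
        return "friendly_look"

a, b = Woggle("A"), Woggle("B")
a.perceive("dirty_look")   # B gives A a dirty look...
b.perceive(a.react())      # ...A reacts in kind, and a feedback loop begins
print(a.anger, b.anger)    # 0.5 0.5: mutual dislike has emerged
```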

So CYC was going to provide common sense for expert systems. A medical diagnosis system, you know, given the symptoms of a rusty bicycle, might prescribe an antibiotic. And that’s because it doesn’t know what a bicycle is, or a person, or rust, or anything. It doesn’t know what it’s talking about. Well, what if it had facts about that…knew that bicycles don’t get, you know, skin infections, and humans don’t rust, and antibiotics are only appropriate for living things, and so on. Then it could just add that into its reasoning and, in fact, eliminate, you know, a solution like the one the simple expert system came up with. So the way you start to do that, according to Lenat, is you take statements in an encyclopedia and then you break them down.

So, you know, Napoleon was imprisoned on Elba. Well, what’s a Napoleon?

Napoleon is a man.

What’s a man?

A man, you know, is a living thing, and a man has two arms and a head.

Eventually you can work it down to very primitive concepts. Like when you put an object on top of another object, then the first object is underneath the second object. And they figured it would be several million assertions—several million logical sentences when they were done. And they had a team of half a dozen knowledge engineers working day after day.
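[A toy sketch of this idea in Python: a handful of “is-a” assertions plus one rule, enough to reject the rusty-bicycle prescription described above. Every name here is invented for illustration; CYC’s actual representation is far richer.]

```python
# Tiny knowledge base of "is-a" assertions, in the spirit of breaking
# encyclopedia statements down into primitive concepts.
IS_A = {
    "napoleon": "man",
    "man": "living_thing",
    "bicycle": "machine",
}

def is_a(thing, category):
    """Walk the is-a chain upward until the category is found or the chain ends."""
    while thing is not None:
        if thing == category:
            return True
        thing = IS_A.get(thing)
    return False

def antibiotic_appropriate(patient):
    # Assertion: antibiotics are only appropriate for living things.
    return is_a(patient, "living_thing")

print(antibiotic_appropriate("napoleon"))  # True
print(antibiotic_appropriate("bicycle"))   # False: bicycles don't get infections
```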

Is that work still happening?

Well, sort of…it wasn’t as successful as they hoped it would be. But it was still a decent first try.

What was unsuccessful about it or less successful than they would have liked?

Well, the idea was that eventually there would be enough knowledge in the system that the thing could interpret statements without having humans package [things] for it. So it would basically be able to read books, you know, and extend its knowledge, and it certainly never reached that state. And even its basic understanding of anything is so limited that you can play with versions of the system and it can still make really stupid mistakes unless you follow the script.

You’ve gotten some strong reactions to the book, especially to its more futuristic aspects. The part that scares a lot of people is the idea of robots becoming able to think for themselves, outsmart us, and be physically more adaptable than us. In your book, you see them first automating human tasks, then a period of prosperity in which many things are mechanized and people are perhaps more free to pursue their own pleasure or self-actualization, to put it in psychological terms. Then you basically predict that down the road this would lead to us giving way as a species to robots, and that they would essentially be our progeny. Not in the usual way; similar, but different, in that they would be children of our minds.

Yes, you summarized it quite well.

Okay. And it seems like while reading your book one has an immediate, visceral reaction like: “Oh, well…no! Why would we want that?”

But you seem to have a very different take in terms of it being a natural or even desirable progression.

Well, I was thinking about it [for a] really long [time], and the earlier stage in the evolution of that position was in my previous book where I had…

…that was in Mind Children?

Yeah. I had that outcome, but I also had a whole chapter sort of entitled “Grandfather Clause” devoted to the idea…well, we can’t beat them, but we can probably join them. So there’s the proposal that we can augment ourselves to keep up with how smart they’re becoming.

You call that “Xs,” [ex-humans] right, in this book?

Well, here they’re…the Xs, frankly, are mostly just robots. I mean, there may be a few human beings that sort of tag along as junior partners.

But in the first book it had a bigger role for converted people. Now I really think that that’s not going to do much, and that’s not going to work. It’s like taking an oxcart and replacing the wooden wheels with rubber tires, the ox with a motor, and the tiller with a steering wheel, and calling it a car. You’re still going to have a pretty lousy car, because of the legacy of the oxcart that you carry with you when you make these incremental changes. You’re much better off going to a drawing board and designing a car from the ground up. You want to keep the most important functions of the oxcart, like the ability to move along flat ground, but you only retain the parts that are really worth retaining. So, I think it’s like that.

You see it as a basic evolutionary transition.

Our offspring…we should design them like any product—to be as good as possible and not try to retain things from the past simply for reasons of nostalgia.
