Program Notes

Program moderator: Timothy Leary

Today’s podcast features a recording from the Timothy Leary archive. According to the label on the file, this is a recording of a television show that originated in Toronto, Canada. It was called Enterprise, or something like that, and this program took place sometime in 1983. The guest host that night was none other than Dr. Timothy Leary, and his topic for discussion that evening was Artificial Intelligence. … For me, one of the highlights was to hear a famous MIT professor predict that home computers would never catch on!

MENTIONED IN THIS PODCAST
Fall & Winter (the movie)
 
The Coming Technological Singularity:
How to Survive in the Post-Human Era

Vernor Vinge
Department of Mathematical Sciences
San Diego State University
(c) 1993 by Vernor Vinge

BB’s Bungalow Number 44
Remembering the “Bear”

(Note: There is no direct link to the above podcast.
You will have to search for it after clicking the link above.)

Previous Episode

263 - Terence McKenna’s Last Interview Part 2

Next Episode

265 - Lawyers Lie

Transcript

00:00:00

Greetings from cyberdelic space. This is Lorenzo, and I’m your host here in the Psychedelic Salon.

00:00:32

And I’d like to begin today’s podcast by thanking the kind souls who either bought a copy of my Pay What You Can novel,

00:00:50

The Genesis Generation, or who made a donation directly to the salon: Brian E., Marvin B., Robert C., Abdul U., Chris T., Don L., and James C.

00:00:54

Thank you all ever so much. I really appreciate your support.

00:00:59

And now, for a little change of pace in the series here,

00:01:03

we’ve been doing a whole series of Terence McKenna talks in the last few podcasts,

00:01:08

and so today I’m dipping back into the audio archive of Dr. Timothy Leary,

00:01:13

which you and I have access to through the good graces of Denis Berry,

00:01:17

who is the wonderful woman that, at great personal sacrifice,

00:01:20

managed to keep the massive Leary archive intact,

00:01:24

and to Bruce Damer, who introduced me to Denis and who has also had a lot of

00:01:25

interaction with this archive.

00:01:27

And in the months ahead, I’ll have some more to say about this archive itself, but right

00:01:31

now I want to get on with today’s talk.

00:01:34

What I’m about to play for you is a recording that was in the Horowitz file of the Leary

00:01:40

Audio Recordings, and according to its label, this is a recording of a television

00:01:45

program from Toronto, Canada that was called Enterprise, or something like that.

00:01:51

It took place sometime in 1983.

00:01:55

The guest host that night was none other than Dr. Timothy Leary, and his topic for discussion

00:02:00

that evening was artificial intelligence.

00:02:03

Now, I’ll have a little more to say about my own thoughts on that subject

00:02:07

after we hear what Timothy’s guests have to say,

00:02:10

but I do want to be sure that as you listen to this panel discussion,

00:02:14

you keep in mind the fact that it took place in 1983,

00:02:18

which was ten years before Vernor Vinge wrote the paper

00:02:22

in which he made famous the phrase technological

00:02:25

singularity. So while you won’t be hearing any speculations about that idea in this recording,

00:02:31

it’s also something that I want to touch on from my own point of view after we hear what the

00:02:35

experts were saying about the concept of an AI back in 1983. Now, the only edit I made to the

00:02:42

original recording is that I did remove the commercials that they cut away to on several occasions.

00:02:47

And the recording begins just as you hear it, with no formal introduction of the first guest.

00:02:53

And so I don’t forget it: just in case you happen to be somebody who seriously questions the value of a college education,

00:03:00

There’s some really good ammunition here for your position, and I should say my

00:03:07

position as well. You see, my own reason for doubting the value of a college education is that

00:03:13

my undergraduate degree was in electrical engineering from the University of Notre Dame,

00:03:18

and that was back in 1964. And the professor who taught what was then the brand new field of solid state transistors

00:03:26

began the semester by saying that we had to learn about solid state devices in order to graduate

00:03:31

but we could safely forget all about them after graduation because there was just no way they

00:03:37

would ever replace vacuum tubes, which of course were his specialty. And so I’ve never thought very

00:03:43

highly of my formal engineering education,

00:03:45

at least when compared with what I learned on the job. But I chalk that up to the fact that,

00:03:50

hey, Notre Dame was never an engineering powerhouse, at least when compared to the

00:03:55

West Coast schools, and of course when compared to the biggest of them all,

00:03:59

Massachusetts Institute of Technology, MIT. So it really cracked me up the first time I heard this recording,

00:04:07

when a very famous professor from MIT predicts that

00:04:11

home computers would suffer the same fate as the home movie camera

00:04:15

and wind up in a drawer, unused.

00:04:18

Granted, this was in 1983, about ten years before the web,

00:04:22

but even I knew better than that back then,

00:04:24

and had already been manufacturing and selling personal computers

00:04:27

for over three years by then.

00:04:30

So now I’m thinking that maybe it doesn’t matter where you go to school

00:04:34

because at the pace the world is moving in these days,

00:04:37

the university model for educating people may be seriously outdated.

00:04:42

Not only that, but here in the States,

00:04:44

a college education is also the beginning of a lifetime of indebtedness.

00:04:49

But I digress.

00:04:51

So I’ll just shut up and turn on this recording of Dr. Timothy Leary hosting a 1983 television program in Toronto, Canada.

00:05:03

AI might be a rehearsal for E.T.

00:05:06

There it is in ten words or less.

00:05:08

Well, that’s from AI to E.T.

00:05:12

Give us a little more specifics.

00:05:13

You’re kind of here to set the program up for our viewers

00:05:16

and lay the broad framework, so keep it going.

00:05:20

Before I get into all those ideas of it being humanizing

00:05:24

and whether or not it’s intelligence,

00:05:25

which are pretty well opinions that will be expressed on the show,

00:05:28

let me give you an idea of just what the computer can do.

00:05:31

And basically, computers can only do two things.

00:05:33

They can calculate, which is add and subtract or do mathematics, and they can sort.

00:05:38

You can give them a whole store of information, give them your file cabinet,

00:05:41

and they will very quickly go through that information and perhaps find specific facts or give you associations or make relationships that perhaps

00:05:48

you didn’t see before. In terms of the calculating ability of the computer, that’s the oldest

00:05:53

function, the oldest kind of computer that’s been around. The Chinese have had it for about 2,000

00:05:57

years or more with the beads on a string, the abacus, where they can calculate. And I was in

00:06:03

China just earlier this year, and they still use them.

00:06:05

When you go into a store, there’s no cash register.

00:06:07

It’s an abacus, and they’re sitting there flicking away these things.

00:06:09

And it’s still a very good thing.

00:06:10

Here in North America, we have the pocket calculator,

00:06:14

but it’s doing the same thing.

00:06:15

All it is is adding.

00:06:16

I don’t think of this as a smart machine.

00:06:19

However, with the adding and subtracting ability,

00:06:22

the mathematical ability of a machine like a computer,

00:06:26

we can make mathematical models of various things,

00:06:29

and then we can tell that to the computer and have the computer do things for us.

00:06:34

We can program it.

00:06:35

And when you give a computer a very rigid set of instructions,

00:06:39

first you do this, then do that, then do this,

00:06:41

you will get from here to there, whether it be a robot,

00:06:44

if you want to go to another planet, for example.

00:06:46

Then that’s called an algorithm, and that’s just a set of rules.

00:06:49

And some people think that this is smart, because you might see a robot going through a series of events,

00:06:55

and there’s no human around, and it’s just sort of sniffing around,

00:06:58

or maybe it’s assembling a car, or it’s doing some very specific task, and it seems to be smart.

00:07:04

When really, the intelligence there is in the foresight of the programmer,

00:07:08

the person who figured out what are all the things that we need to do to get that task done.

00:07:13

Now, to give you an example of that, I have a film that is a computer-generated film.

00:07:18

It’s a mathematical model that shows a spacecraft going by the planet Saturn.

00:07:22

This actually happened before the spacecraft went there.

00:07:25

The spacecraft was called Pioneer 11.

00:07:27

They wanted to know what’s the spacecraft going to see.

00:07:29

Well, we know how far away Saturn is.

00:07:31

We know how big it is.

00:07:32

We know what the gravitational perturbations

00:07:36

of the spacecraft are.

00:07:37

So we put all of those calculations into the computer,

00:07:39

and then the computer constructed the model for us.

00:07:42

So every frame of this film that you’re seeing

00:07:44

as the spacecraft goes by the planet

00:07:47

was simply a mathematical calculation.

00:07:50

The spacecraft itself was a robot.

00:07:52

It’s so far away from the Earth that we can’t really talk to it.

00:07:55

It takes about an hour and a half to get a message to Saturn and back.

00:07:58

So the spacecraft had to be able to do things like look down on the underside of the rings

00:08:03

and examine the lightning

00:08:05

that was happening down on Saturn and then point its camera. Look for a parking place. Yeah, find out if

00:08:09

there’s anything there. It was an alien environment that’s too far away and too dangerous for humans

00:08:13

to go. And it knew where to point its cameras at various things. Well, there’s an example of a robot

00:08:19

doing something following a very linear sequence of events. Well, I don’t really think of that

00:08:25

as artificial intelligence, and a lot of people do. There is, however, another aspect of computing,

00:08:32

which is this sorting aspect. And this gets closer to the human process of thinking.

00:08:37

Well, that’s like a file clerk.

00:08:39

Sort of. Or if you have a problem, if I ask you a problem, then you will answer my question according to your experience, according to what you know.

00:08:48

I sort through my own memory bank and come out with…

00:08:50

That’s right. And you might make associations.

00:08:53

If you were a medical doctor who perhaps studied lung disease, then I would say, well, look, I’ve got this funny cough, and what’s that mean?

00:09:02

And then you’d ask me a series of questions.

00:09:04

Well, now you’re defining what they call expert systems, an expert or smart program. And this kind of

00:09:10

thing is where the computer is given a memory, that is, a series of events, the gathering of an

00:09:15

expertise in a number of different fields, or maybe one field. And then, when the computer is

00:09:21

asked a question, it searches that memory and looks for the right association.
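
To make the idea concrete, here is a minimal sketch of the kind of “expert system” being described: a stored memory of associations that is searched for the best match when a question is asked. It is a toy for illustration only; the rules are invented, and no 1983 system worked this simply.

```python
# Toy sketch of the "expert system" idea: a stored memory of
# associations that is searched when a question is asked.
# The rules are invented for illustration, not medical advice.
MEMORY = {
    "bronchitis": {"cough", "chest discomfort", "fatigue"},
    "common cold": {"cough", "runny nose", "sore throat"},
    "allergy": {"runny nose", "itchy eyes", "sneezing"},
}

def diagnose(symptoms):
    # Score each stored condition by the fraction of its known symptoms observed.
    scores = []
    for condition, known in MEMORY.items():
        overlap = len(symptoms & known) / len(known)
        if overlap > 0:
            scores.append((condition, overlap))
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

# "I've got this funny cough" plus one follow-up answer:
print(diagnose({"cough", "runny nose"}))  # best match first: common cold
```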

00:09:25

Now, you’ve given us two examples of computers,

00:09:28

the calculator and the expert memory.

00:09:31

But isn’t there a third

00:09:32

and much more important dimension

00:09:34

of machines or entities

00:09:37

that can think, reason, create,

00:09:39

have senses of humor, perhaps flirt?

00:09:43

Well, yes, there is.

00:09:44

God knows what? Yes, there is, God knows what.

00:09:46

What would you call them?

00:09:48

I’d still call them tools.

00:09:49

I’d still just call them machines that have been carefully programmed.

00:09:52

There is the ability to make decisions, yes.

00:09:56

And there’s the, again, it’s sort of part of the sorting aspect,

00:10:02

that if I have a problem and it’s a very complex problem,

00:10:07

then the computer can sort of say,

00:10:09

well, this problem sort of relates to something I know over here,

00:10:11

and it sort of relates to something over here,

00:10:13

so I’m going to put those two together and say that’s your answer.

00:10:15

Now, fortunately, we don’t give the computer the final say.

00:10:19

The computer, in terms of something like medical diagnostics,

00:10:22

where we’ll say, associate these symptoms and give me the disease.

00:10:26

The doctor will then say, well, I agree with that or I don’t agree with it,

00:10:30

and then he’s using it as a…

00:10:31

We’re not ready for a war game scenario where the human is taken out of the loop.

00:10:36

No, the human is still a very integral part of it.

00:10:39

There are two other aspects of artificial intelligence that are within this,

00:10:43

that are trying to make the computer easier to relate to.

00:10:46

You see, the human is one of the slower links in this whole chain.

00:10:50

Computers can think very fast.

00:10:51

Well, I hate to use that word, think.

00:10:54

I’ve heard a lot about friendly computers and unfriendly,

00:10:57

but we also have the concept of friendly humans and unfriendly, too.

00:11:01

Yes, yes, we do.

00:11:02

And in order to make computers more user-friendly,

00:11:05

what we’re trying to do is bridge that gap

00:11:07

so that you can talk to it more easily.

00:11:09

And there’s a whole area whereby

00:11:10

the computer has its own language

00:11:12

because if it has an incredibly large memory

00:11:15

or if there are several computers working together

00:11:18

to try to solve a problem,

00:11:19

they need time to get through those programs.

00:11:23

And so there are shortcut languages

00:11:24

that are being developed

00:11:25

within the computers themselves.

00:11:27

And these languages are hard for the humans to translate and to work on.

00:11:31

So what they’re trying to do is to develop a language

00:11:33

whereby the computer can understand English.

00:11:35

So you can say, hi, I got a problem, what is it?

00:11:39

And say it in English and the computer can understand it.

00:11:41

Or Spanish or Japanese.

00:11:43

Or different languages.

00:11:44

Or street hip.

00:11:46

Or even respond to voice control.

00:11:48

The other is vision.

00:11:49

We’d like to give computers vision

00:11:51

so that they can see where they’re going

00:11:52

and be able to recognize patterns and pick things up.

00:11:54

Canada’s involved in that, by the way,

00:11:56

with the space arm, the shuttle space arm.

00:11:59

They’re going to try to give it eyes.

00:12:00

Now that we have an arm, we’re going to give it eyes.

00:12:02

If all of these things come together,

00:12:04

the fifth generation, which is sort of the embodiment of all of

00:12:08

these things of decision-making, supposedly reason, the ability to select various problems

00:12:15

and see and move around, we get it all into a robotic, then we’re dealing with a machine

00:12:19

that can apparently think and move around and relate to us. And whether or not it is

00:12:24

actually thinking

00:12:25

At this moment, I’m a thinking machine doing my best to relate. You’ve got my brain spinning in

00:12:31

the best sense. Joining Dr. Leary is Wendy Lehnert. Wendy, I’m glad you’re here. In reading your work,

00:12:41

which I find, by the way, inspiring and intelligence-increasing,

00:12:46

the word natural language comes up a lot.

00:12:50

Could you tell us a little bit about that?

00:12:52

Well, we use the term natural language to differentiate it from computer programming languages.

00:12:58

When we talk about natural language processing by computer,

00:13:00

we’re talking about getting computers to understand English, French, whatever language

00:13:05

it is that you normally speak. So we’re trying to increase the ease with which people can

00:13:11

use computers, and we’re incidentally trying to learn a lot about how people use natural

00:13:16

language in order to achieve this. Now, it might not be at all obvious what’s difficult

00:13:22

about getting computers to do natural language.

00:13:25

We traditionally think of computers being able to compute.

00:13:30

They perform arithmetic operations.

00:13:31

And do great chess problems and solve great mathematical equations and tough stuff like

00:13:36

that.

00:13:37

That’s right.

00:13:38

We associate certain tasks with computers and natural language does not seem to be one

00:13:41

of them.

00:13:42

As we try to get computers to understand natural language,

00:13:45

we understand a lot more about what’s difficult in that test.

00:13:49

For example, when I speak to you and you hear my sentences,

00:13:53

you make a lot of assumptions that could in fact be wrong.

00:13:56

We call these inferences.

00:13:57

For example, if I’m talking to you about a mutual friend, John,

00:14:00

and I mention to you,

00:14:02

did you hear John almost got hit by a car last night?

00:14:08

No. It’s not at all clear

00:14:10

exactly what

00:14:12

must have happened but you make some inferences

00:14:14

that John must have been outside

00:14:15

John must have been engaged in some sort

00:14:18

of an activity and

00:14:19

this has implications

00:14:21

this has consequences we can have

00:14:23

expectations about this interfering with something that John was doing,

00:14:27

and we’re naturally concerned.

00:14:29

We think to ask, was he hurt? What happened?

00:14:32

Now, these are all functions that people handle quite naturally,

00:14:36

and it’s something that is not at all easy for a computer.

00:14:40

Inferences are assumptions that could be wrong,

00:14:43

and it’s been estimated that 80% of the communication content

00:14:47

that flies between people is of this nature.

00:14:50

If a computer is missing this information,

00:14:52

the computer is not dealing with natural language

00:14:54

in any sort of a reasonable sense.

00:14:57

Well, of course, we can’t be too hard on the computer

00:15:01

or the knowledge information processing unit

00:15:04

because people

00:15:05

too make faulty inferences.

00:15:07

What is this teaching us about the human mind?

00:15:10

I’m a psychologist and naturally I’m eager to know what we can find out about how the

00:15:16

human mind works.

00:15:17

We’re learning a lot about the nature of human intelligence, for example.

00:15:21

When I ask you what impresses you as intelligent behavior,

00:15:25

we tend to associate easy tasks with things that everyone can do,

00:15:30

and everyone can do fairly effortlessly.

00:15:32

So, for example, most people can drive a car.

00:15:35

That’s not very impressive. We don’t take that to be a hard task.

00:15:39

On the other hand, not very many people can prove theorems or play master-level chess.

00:15:43

We tend to find that impressive.

00:15:46

Also the things that children can do strike us as being easy, whereas the things that

00:15:50

we only learn to do after the age of 20 or 30 strike us as being more impressive.

00:15:56

Now with computers, we discover an irony involved in that kind of developmental view of impressive

00:16:02

behavior.

00:16:03

With computers, it’s actually easier to program a system to play

00:16:07

master-level chess, to prove theorems, to hypothesize results in mathematics, than it is to,

00:16:14

for example, get a computer to recognize images in a scene or understand simple sentences and

00:16:21

respond to them in reasonable ways. So we’re seeing an inversion.

00:16:25

Is that because computers were designed by high-level mathematicians and chess players?

00:16:30

Not at all, not at all.

00:16:32

It has to do with the essential nature of the processes that we’re working with.

00:16:36

Some processes only draw on a very limited, constrained amount of knowledge, and we can

00:16:41

characterize that knowledge in a fairly comprehensive, intelligible

00:16:45

way. We can talk to someone who plays master level chess. We can ask them, how is it you

00:16:50

do this? What’s the difference between a beginning game and a middle game and an end game? We

00:16:54

can get some information out about that process of playing chess, but when we get down to

00:16:58

the low level, apparently easy tasks like visual scene analysis and natural language

00:17:04

understanding or driving a car.

00:17:05

Or driving a car.

00:17:06

It’s very hard to introspect on exactly what processes are at work.

00:17:08

Why is that?

00:17:09

It’s hard to get the average person to explain how they drive from one corner of a town to the other?

00:17:15

It’s difficult in terms of the very, very low-level processes which we have all become unconscious of.

00:17:20

When you first started learning to drive a car, you had to think very hard about how much pressure to put on the brake, how to balance the clutch, how soon to start turning

00:17:29

the wheel to make it respond to a curve in the road. You really worked very hard in mastering

00:17:33

that skill. But once you had it, it was down. It sort of got… Reflex. It became a reflex.

00:17:40

Habit. We think of it as a reflex, and we’re no longer impressed by it. But the fact of the matter

00:17:44

is that there’s a lot of cognitive work which is still ongoing. The fact that we aren’t conscious

00:17:49

of it, preoccupied by it, or attending to it doesn’t make it any less impressive than it was

00:17:54

when we first started acquiring the skill. Things like scene analysis and language are skills that

00:17:59

we can’t remember having ever acquired, and it’s very difficult for us to appreciate how much effort goes into those tasks.

00:18:06

Wendy, do you think that this work on superintelligence or artificial intelligence is going to help

00:18:12

us understand more about human nature and human psychology?

00:18:15

Well, we’re certainly learning a lot about the human mind, which psychology and philosophy

00:18:20

and linguistics was not in a position to teach us.

00:18:23

Isn’t that interesting?

00:18:28

One of the side effects here seems to be that, as a psychologist,

00:18:30

I’m embarrassed to realize that we psychologists don’t know that much about how the mind works.

00:18:33

I know the philosophers are a little worried

00:18:35

that you’re raising questions about human nature

00:18:38

and the value systems.

00:18:40

There’s some very healthy interactions going on

00:18:42

between the disciplines because people realize

00:18:44

that AI has something to tell the other disciplines that are interested in the study of mind.

00:18:49

And I think we’re coming to appreciate the difference between information and knowledge.

00:18:55

I think we should make a very strong distinction between information in the sense of the kind of passive information that’s present in a library.

00:19:03

We don’t think of libraries as having intelligence per se.

00:19:06

And books don’t think, really.

00:19:08

Books don’t think.

00:19:09

The library is not intelligent.

00:19:11

It’s a person who goes into the library

00:19:13

and uses that knowledge

00:19:14

who becomes intelligent.

00:19:16

And knowledge is therefore

00:19:18

information plus structure,

00:19:20

the ability to find relevant information

00:19:23

when you need it,

00:19:24

apply it to appropriate tasks

00:19:26

as needed, and sort through, again, sorting came up earlier, in some sense intelligently access

00:19:32

and sort through all of the hundreds of millions or thousands of billions of things that people

00:19:38

know in order to use the things we need when we need them. How about to innovate or to create, to come up with originality?

00:19:46

Well, I think we’re learning a lot about what we normally call intuitive thought processes,

00:19:52

creative processes, which are normally thought to be very mystical and untouchable. And how could

00:19:57

science ever say anything about these processes? I think what makes them mystical and untouchable

00:20:02

is the same thing that makes driving a car difficult to understand.

00:20:06

It’s just a process that we can’t introspect on.

00:20:09

It’s no different than any other, and we can study it just like any other.

00:20:13

Poetry, too?

00:20:15

We’ll have good machine poetry and bad machine poetry?

00:20:18

Well, of course, that’s always been a controversial area to pass judgment on,

00:20:22

and I don’t think computers are going to make that one any easier, but more interesting. It’s certainly a

00:20:27

lot of fun. Are these machines going to surpass us in intelligence, or stimulate

00:20:37

us? Well, I think when people ask that question, they’re really asking, will

00:20:42

machines be capable of intelligent behavior, which is in some sense

00:20:45

very different from human intelligent behavior.

00:20:49

And to some extent

00:20:50

in an uninteresting way, that’s already

00:20:51

happened. The expert systems are much more

00:20:54

reliable than people in the sense that

00:20:56

they aren’t subject to the kinds of

00:20:58

distractions and lapses that people

00:21:00

have. So on a very

00:21:01

uninteresting level there.

00:21:03

We’ll continue in a moment after this break.

00:21:07

Although artificial intelligence is the single most exciting development in computer technology,

00:21:12

there are scientists who urge caution. Our next guest is one of the most widely respected computer scientists

00:21:19

who believes that the crucial difference between man and machine is one of risk, courage, trust, and

00:21:25

endurance. Joining Dr. Leary is Joseph Weizenbaum.

00:21:31

Professor Weizenbaum, it’s a great honor and pleasure to have you here. I’ve always felt

00:21:35

for many years that you’re a hero and legend in your own time. In the last 20 years, many

00:21:41

of us have felt concerned about the monolithic computer age, the IBMs, the KGBs,

00:21:47

the credit card checks, and so forth. And your voice has been heard throughout the land, kind of

00:21:52

defending the individual and the human aspects. How did you fall into this, if I may say so, heroic

00:21:58

role?

00:22:01

Well, I certainly didn’t choose it.

00:22:04

There are lots of ways to answer that question,

00:22:10

because I think human experience in general is sort of like a Rashomon story,

00:22:16

that there may be four or five or seven different accounts of something,

00:22:19

all of them right, and yet each one is contradicting the other.

00:22:25

I think in my own case, I happen to be at MIT. I happen to be there at a very interesting time

00:22:31

in the sense of the Chinese, who think of it as a curse

00:22:34

to say, may you live in interesting times.

00:22:36

I’m thinking of the Vietnam War

00:22:44

and what went on in universities at the time and so on.

00:22:47

Oh, the time of upheaval and questioning and challenging.

00:22:51

Exactly, and especially at MIT,

00:22:53

which was rather closely connected to Washington,

00:22:56

as it was at the time, and so on.

00:22:59

And at that time, I was working on what was then called conversational programming, that is

00:23:05

dealing with a computer

00:23:08

not in the way of

00:23:10

first writing a program, then giving it to the

00:23:12

computer, then waiting for the computer to

00:23:14

execute that program, and then seeing what one wants

00:23:16

to do with the results and so on.

00:23:18

But the way it’s done now pretty much, for example

00:23:20

on home computers, that is that you sit there

00:23:22

you write a few lines and the computer

00:23:24

reacts, and so on and so forth. And in that time I wrote

00:23:30

a program that made it possible to converse with a computer

00:23:34

in natural language. The famous ELIZA. Yeah, the infamous ELIZA, of whom I can’t

00:23:41

get rid. Anyway, a couple of things happened there.

00:23:47

One is I noticed very quickly

00:23:49

how deeply people who played with it

00:23:53

became attached to the thing, so to speak,

00:23:57

in effect the holding power of the computer,

00:24:01

and that on the basis of what I knew to be

00:24:03

a rather simple program.

00:24:05

And even then, I think many of the people who experienced this holding power

00:24:12

were in fact computer people at MIT who knew very well what was going on.

00:24:18

Couldn’t it be like a book?

00:24:19

The first time I heard the book came along, people had that holding power.

00:24:22

People were attributing magic, and they couldn’t tear the kid away from the book.

00:24:26

Is there some analogy there?

00:24:28

I don’t know whether that happened or not,

00:24:29

but there might very well be.

00:24:32

Nevertheless, I found that interesting,

00:24:33

and to a certain extent, disturbing.

00:24:36

Again, the power of the quip that I could observe.

00:24:41

I think we should perhaps explain for a moment, Professor,

00:24:44

what ELIZA is.

00:24:47

Yeah.

00:24:47

Well, ELIZA

00:24:49

was, continues to exist,

00:24:52

unfortunately, not

00:24:53

by my doing. Anyway,

00:24:55

ELIZA was a program which

00:24:57

caused the computer, so to

00:25:00

speak, to pretend to be a psychiatrist

00:25:02

in an initial psychiatric interview.

00:25:04

And the mode

00:25:06

of use was that someone playing the role of a patient…

00:25:10

Do you type in your problems?

00:25:12

That’s right. You type in, for example, I can’t sleep at night, for example, the

00:25:17

sort of thing you might say to a psychiatrist initially. And partially out of the stuff that people typed in

00:25:26

and partially out of its own storage,

00:25:29

the computer would construct a response

00:25:30

which is generally a question,

00:25:32

like, for example,

00:25:33

why can’t you sleep or something of that sort.

00:25:37

It’s kind of mirroring back and throwing back,

00:25:40

getting the person to talk more.

00:25:41

That’s right.

00:25:42

All the thing really did is to encourage the person to keep talking.

00:25:47

Pour out the list of troubles.

00:25:49

Yeah.
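
To give a sense of how little machinery this took, here is a minimal sketch in the spirit of ELIZA’s psychiatrist script (not Weizenbaum’s original code): a few keyword patterns, pronoun reflection, and canned templates that mirror the patient’s statement back as a question.

```python
import random
import re

# A toy in the spirit of ELIZA, not Weizenbaum's original program.
# Each rule pairs a keyword pattern with question templates that
# mirror the "patient's" own words back at them.
RULES = [
    (re.compile(r"i can'?t (.*)", re.I),
     ["Why can't you {0}?", "What would it mean if you could {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (re.compile(r"my (.*)", re.I),
     ["Tell me more about your {0}."]),
]

# Swap first and second person so the reflection reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "you": "I", "your": "my"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def respond(line):
    for pattern, templates in RULES:
        match = pattern.search(line)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return "Please go on."  # default: just keep the person talking

print(respond("I can't sleep at night"))  # e.g. "Why can't you sleep at night?"
```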

00:25:49

The other thing that disturbed me so about it was that there were psychiatrists in the United States,

00:25:55

even fairly well-known psychoanalysts,

00:25:58

who saw this as the beginning of computerized psychiatry.

00:26:03

And I found that thought to be just terribly abhorrent,

00:26:06

computerized psychiatry.

00:26:08

And not only does the program continue to live

00:26:10

in one way or another,

00:26:11

as people are selling it, for example, and so on,

00:26:14

but the idea that a computer can, in fact,

00:26:18

engage in therapeutic conversations

00:26:20

and psychotherapeutic conversations,

00:26:22

that idea won’t die.

00:26:24

And resurfaces over and over and over again, and I find that pretty disturbing.

00:26:28

Well, again, isn’t it like a self-help book?

00:26:30

People know that the book is simply a bunch of words on paper, but is there some way of

00:26:35

stimulating thought or stimulating questioning and getting the person to reflect on their

00:26:40

own situation?

00:26:42

Well, that’s one interpretation, but let’s look at it another way.

00:26:46

Let’s look at, say, a Christmas card

00:26:48

or a Get Well card or something of that sort.

00:26:51

Now, people who believe

00:26:53

that the sentiments expressed on that card

00:26:57

represent love coming from the card

00:27:00

or something of that sort,

00:27:01

or for that matter,

00:27:03

love coming from the sender of the card.

00:27:05

The sender of the card chose the card and sent it, and that’s all the sender did.

00:27:09

And one doesn’t believe that the card understands or anything of the sort.

00:27:16

The belief that so many people had that the machine actually understood them,

00:27:22

yes, the machine understood me, no matter what I’d say, they’d say,

00:27:25

yes, I know it’s a very simple program and so on and so forth.

00:27:28

But nevertheless, I’m quite sure the machine understood me.

00:27:30

The difference between the card and the machine program, though,

00:27:33

is that if you get a get well card, it says get well.

00:27:35

But if you say, gee, thanks a lot, it doesn’t say you’re welcome.

00:27:38

Yeah.

00:27:40

You’ve been very well known

00:27:42

and continue to be a spokesman of caution about artificial intelligence.

00:27:47

Yet something’s happened recently that has encouraged me and kind of quieted some of my fears,

00:27:54

and that’s this explosion of personal computing,

00:27:57

the fact that everyone can have in their own home the hardware and the software

00:28:03

and the rudimentary knowledge so that we don’t

00:28:05

have to be victimized or feel spindled and mutilated by IBM and the big corporations

00:28:12

that the movie WarGames, and the idea of young kids learning how to fight back, even

00:28:18

the rebellion and the insistence on individuality and the fact that the establishment is now

00:28:23

upset about kids and personal computers,

00:28:26

is this a source of comfort to you?

00:28:29

Not at all. I’m stunned by what you say.

00:28:32

For example, let’s just pick up on what you, almost the very last thing you said,

00:28:36

you talked about the insistence on individuality.

00:28:41

It’s a common statement people make that the computer makes it possible

00:28:47

to treat people individually

00:28:49

and in general to individualize,

00:28:52

I don’t quite like that word, products.

00:28:54

There’s no reason in the world

00:28:55

why everyone should have the same suit on.

00:28:57

When computers make suits,

00:28:58

they can make each suit different

00:28:59

from every other suit and so on and so forth.

00:29:02

In fact, if we look at what’s actually happening

00:29:06

and what’s happened already,

00:29:08

we see that the computer has had

00:29:10

and is having an enormously homogenizing effect,

00:29:14

as we all tend to do things in the same way

00:29:16

because of the computer.

00:29:16

But don’t you think a personal computer

00:29:17

with personal programs is going to lead to individual software?

00:29:21

Yeah, the mistake there is, I think,

00:29:24

in the idea of personal programs.

00:29:26

I think very, very few people

00:29:28

are going to write their own programs

00:29:29

for personal computers.

00:29:31

Now, people we know, the three of us,

00:29:34

they’re likely to write programs

00:29:35

for their personal computer,

00:29:36

but they’re a very special kind.

00:29:38

They’re people who teach computer science

00:29:39

in universities and so on.

00:29:41

I think that what’s going to happen

00:29:43

to the home computer,

00:29:44

apart from the use of the home computer for games and word processing, quite apart from those

00:29:49

two, the home computer will suffer the fate very largely of home movie cameras. There

00:29:55

are today millions of home movie cameras in closets and drawers and so on, and in perfect

00:30:00

working order, but haven’t been used for years.

00:30:03

And the reason for that… In a minute or two, we’re going to have…

00:30:05

Let me just finish. The reason for that is that people

00:30:07

thought that if they

00:30:08

buy a very good movie camera, they could make very good

00:30:12

movies. And they forgot

00:30:14

that in order to make a good movie, you have to have a good idea.

00:30:16

Similarly with

00:30:17

computers. You buy a big expensive

00:30:19

computer or complex or very powerful

00:30:22

computer, and you have the idea that now

00:30:24

that you have all that power at your fingertips, you’re going to do something marvelous. Again, you need an idea.

00:30:28

Very few people have ideas on how to use a computer effectively. Well, you’ve certainly

00:30:34

contributed to ideas today, and we’re going to have others, I know, that are going to

00:30:40

share and interact with your ideas. Hold on for a minute. We’ll be right back.

00:30:45

Our next guest believes that machines will indeed soon be calling people.

00:30:50

He’s an artificial intelligence researcher from Carnegie Mellon University

00:30:53

who is convinced that the new developments in this field

00:30:57

will be of the utmost benefit to mankind.

00:31:00

Joining Dr. Leary is Professor Jaime Carbonell.

00:31:04

Thanks for being with us today.

00:31:06

We’ve had a lot of free-flowing discussion here from space to down-to-Earth problems.

00:31:13

Maybe you could focus for one moment on what does an artificial intelligence scientist expert do?

00:31:20

How do you spend your days and time, and what are you working on?

00:31:24

Well, I work primarily on two

00:31:26

aspects of artificial intelligence, one called cognitive modeling, which is trying to understand

00:31:32

and replicate some aspects of human reasoning in the computer, in order to better understand

00:31:37

human reasoning, and in order to endow the computer with the capability to interact with a person, not only in developing

00:31:45

natural language, so that it can understand written or spoken English or

00:31:51

any other language, but also so that it can make the same inferences that Wendy

00:31:57

referred to earlier in this discussion, to give you an example. You’re going to get a super intelligent machine that can be like a New York cab driver then, huh?

00:32:10

Super intelligence is not the objective. The objective is to try to replicate human intelligence.

00:32:15

In fact, it is more difficult to replicate common sense reasoning than it is to show some aspects of creativity.

00:32:27

To give you one analogy,

00:32:32

let’s suppose that we have a master and a child prodigy.

00:32:38

And the master has just taught the child prodigy some aspects of physics,

00:32:43

gives the child a barometer that measures pressure, air pressure,

00:32:44

and a tall building, and says, well,

00:32:46

use this barometer to tell me how high the tall building is. And the child prodigy thinks a little

00:32:52

bit, goes up to the top of the tall building in the elevator, looks at his watch,

00:32:57

drops the barometer down, counts how long it takes to smash into the ground, and then computes,

00:33:03

by the venerable formula one-half g t squared,

00:33:06

from the time it took, how tall the building must be. The master is very upset: that’s not the

00:33:12

proper way. I’m upset too; I’ve never thought of that. And then he says to the child, to the prodigy,

00:33:18

here’s another barometer. Do not break this one. See if you can make this calculation. The child goes

00:33:24

and knocks on

00:33:25

the superintendent’s door and says, I will give you this valuable barometer that I’m

00:33:28

not allowed to break if you will tell me how tall the building is. No, no, no, the master

00:33:32

says that’s not it either. The child looks at the shadow of the building, puts down the

00:33:37

barometer, knows how high the barometer is, and by mathematical triangulation computes

00:33:41

the height. At this point, the master gives up and says, this child is very creative,

00:33:45

but he does not know how to use a barometer.
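
For the record, the “one-half g t squared” in the story is the free-fall relation h = ½gt². A quick worked check of the child’s first method, with a fall time chosen by me purely for illustration (it is not from the broadcast):

```python
# Worked check of the dropped-barometer method: free-fall distance
# h = (1/2) * g * t^2, ignoring air resistance.
g = 9.8   # gravitational acceleration, m/s^2
t = 3.0   # measured fall time in seconds (hypothetical)
h = 0.5 * g * t**2
print(f"A {t:.0f}-second fall implies a building about {h:.0f} m tall.")  # ~44 m
```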

00:33:48

Well, the moral here is that computers are like that.

00:33:53

Most of the programs that I wind up building

00:33:56

very often solve problems

00:33:58

in very intricate and unexpected ways,

00:34:01

and very seldom in the common sense path

00:34:03

that a person would take to

00:34:05

solve it, when we think of the straightforward means of solving a problem or dealing with

00:34:08

everyday life. So that is one of the major challenges: trying to make the computer

00:34:12

reason along the same path that humans reason. It is not the ability to have them do creative things,

00:34:18

in the sense that creativity means different from those things that we already know how to do.

00:34:22

Now, wasn’t there a frustration there, in that we don’t really know how the human works,

00:34:28

how the human mind does work?

00:34:29

What’s wrong with having a computer think along different lines?

00:34:32

Won’t that give you a different approach to the same subject?

00:34:35

From a pragmatic standpoint, there’s absolutely nothing wrong with it at that level.

00:34:40

But suppose that what we have is a computer that can perform very complex medical diagnosis,

00:34:44

level. But suppose that what we have is a computer that can perform very complex medical diagnosis,

00:34:50

and the doctor now wants to figure out whether or not the reasoning process the computer went through is valid, makes sense, jibes with his own experience. Then it behooves the computer to

00:34:56

explain this behavior in terms that the doctor can understand and comprehend, each

00:35:00

step. It doesn’t just end at medical diagnosis; it’s the same for any other task that requires

00:35:08

a high degree of expertise.

00:35:10

Well, the word experience keeps getting into this conversation. I think it’s terribly

00:35:14

important. Earlier, Bob McDonald said that if he were to ask you a question, you would

00:35:19

answer according to your experience. That’s exactly right. And again here where we’re talking about experience,

00:35:28

I think that an important thing to note is that a computer can’t have

00:35:33

every variety of human experience

00:35:35

simply because it’s a computer.

00:35:37

For example, it hasn’t had the experience

00:35:41

of being separated from its mother and so on.

00:35:43

And this, it seems to me, is one of the central points that I think ought to be observed.

00:35:52

There are advantages and disadvantages of having mothers,

00:35:55

but is it possible to develop computers that would have a sense of having a mother or a father?

00:36:00

Well, let me address a slightly simpler version of that question first,

00:36:04

and that is that reasoning from one’s past experience is an absolutely crucial cognitive effort. A system must also be able to draw on the experience of others that it can observe performing at a particular type of problem, in order to improve its own behavior, in order to modify, to learn, and to adjust to a changing

00:36:30

environment.

00:36:31

So the ability to reason from experience is absolutely crucial.

00:36:34

We know how to do parts of it.

00:36:35

We don’t know how to do other parts.

00:36:37

Now, when this experience comes down to the early developmental psychological level of

00:36:41

dealing with one’s parents, I believe that we do have

00:36:45

a bit of a gap between the abilities of a computer and those of a human.

00:36:48

Well, most humans don’t do so well at that either, according to Freud.

00:36:52

I don’t think it’s necessary to decry that gap, and certainly I would say that one shouldn’t

00:36:58

say about it that, well, so far we have a gap and it’ll all be closed.

00:37:02

Nor am I emphasizing particularly early experience.

00:37:05

That’s just an illustration that’s easy to understand. I think that, for example, the

00:37:12

experience of being touched, I mean physically touched, of someone reaching out and giving

00:37:16

you his hand, for example, or being touched on the cheek and so on, these are experiences

00:37:22

that computers obviously can’t have.

00:37:27

Computers don’t have cheeks? No, they don’t have cheeks.

00:37:32

Can’t we build sense organs that can give them these experiences?

00:37:34

We certainly can build sense organs,

00:37:38

whether the experiences will be a close analog of the human experience…

00:37:40

Could we give them erotic zones, for example?

00:37:42

Yes, I’m sure this has been thought of by… But you see…

00:37:44

The point is not

00:37:45

so much whether

00:37:47

the specific experiences

00:37:49

that

00:37:50

Professor Weizenbaum is referring to here

00:37:53

are those that the computer will necessarily

00:37:56

be able to replicate. It’s whether

00:37:57

or not, in principle, one can build a system

00:38:00

that can reason from past experiences.

00:38:01

Whether they be

00:38:03

sensory experiences,

00:38:07

whether they be cognitive experiences,

00:38:09

whether they would be flashes of insight,

00:38:10

whatever one calls them.

00:38:12

One needs to be able to incorporate that in one’s thinking process.

00:38:14

Do you think we can?

00:38:15

I think we can do so,

00:38:16

and I think that we’re in the process

00:38:17

of making a very strong head start in that direction.

00:38:20

And will these machines or these friendly new entities

00:38:23

help us become smarter? Well, they

00:38:27

might help us in an educational setting to understand the specific experiences that the

00:38:34

person, that the student is having in learning in this particular domain. Look, there’s no…

00:38:40

We’re going to have to come back to this for a moment. We’re just getting going here. I, for one,

00:38:43

need a little break and we’ll be back in a minute.

00:38:46

Our next guest is at the forefront of intelligent video games,

00:38:50

which he believes can be an incredible educational tool if developed properly.

00:38:55

He’s the vice president of the Exor Corporation in Minnesota,

00:38:58

and he’s here to discuss the implications of his work.

00:39:01

Joining Dr. Leary is Michael de St. Hippolyte.

00:39:12

I think one of the reasons I was chosen to host this show is that I’m not an expert. I’m fascinated, interested, and perhaps like many viewers, looking for the future to happen here

00:39:20

today. I must say at this moment, I feel a little more at home. There’s something here that’s

00:39:25

familiar. It’s a personal computer. It’s something that we’re all coming to get a little closer to

00:39:31

in our daily life. And Michael, tell us what’s this about and how are you going to increase our

00:39:36

intelligence with this machine? Well, I’d be happy to, Tim. And the first thing I’d like to say is

00:39:42

that there’s a difference between video games and computer games.

00:39:45

And what I work on are computer games.

00:39:48

And what that means is that a computer has the ability to compute interactively, to work interactively with you,

00:39:56

which means that it doesn’t have to be just an eye-hand coordination game.

00:40:01

It can be something…

00:40:02

It’s a thinking form of Pac-Man and Donkey Kong with a high IQ.

00:40:07

That’s right. And in fact, one of the games that I have out, it’s called St. Hippolyte’s Wall,

00:40:14

named after me, I guess, is a thinking game because it’s a game that involves decision-making

00:40:20

and decision-making under uncertainty,

00:40:25

under conditions of uncertainty.

00:40:27

It plays differently every time.

00:40:29

The idea of the game is you go around the screen and you gobble up prizes,

00:40:32

sort of like Pac-Man, but whereas in Pac-Man the maze is the same every time,

00:40:37

in St. Hippolyte’s Wall the maze is constantly growing right around you.

00:40:42

That’s the wall.

00:40:42

It’s like life.

00:40:43

It’s like life, that’s right.

00:40:44

And just like life, you get a different set of prizes every time.

00:40:49

And some of the prizes are good prizes, some of the prizes are bad prizes.

00:40:53

And you also can eat the wall,

00:40:55

which is sort of like making a bad situation into a good one,

00:41:01

you know, taking advantage of the things in life that are working against you and turning them for you. That could

00:41:07

be quite a profound concept. Well, I hope the game is seen as being

00:41:13

profound, because while on the surface it may seem like just another

00:41:18

computer game to people who look at it for the first time, I’ve found that as

00:41:22

people play it, they realize that there’s a little bit more to it.

00:41:26

And for one thing, it’s not the type of game

00:41:29

where you’re being pursued

00:41:30

and either you get them or they get you.

00:41:34

It’s more of a game where you’re confronted with the situation

00:41:37

and you have to deal with the situation in a rational way.

00:41:40

And there’s not necessarily a best way of playing it.

00:41:42

You pretty much have to be just on your toes and thinking about what’s going on. That’s one way that computers can help

00:41:50

increase intelligence, just by providing us with situations that we can react to. And, can

00:41:56

you ever beat it? Do you get to the stage where you’re better than the computer? I think, yeah,

00:42:00

I beat it about half the time. And when you get good at it, you get to beat it about half the time.

00:42:06

And, you know, we could make it harder.

00:42:08

But the point is not to make the situation impossible,

00:42:13

but the point is more to make the situation worthwhile and interesting and instructive.

00:42:19

Let me see if I understand this.

00:42:20

You’re developing personal computer games.

00:42:24

So you can go and pay the same amount of money

00:42:28

as a Pac-Man, take it home, and it can help you perhaps think in a more complex way, or increase

00:42:34

your intelligence, or perhaps stimulate or activate you to become smarter. Is that right? That’s right,

00:42:39

that’s my goal. And also I have another goal, which is that I consider myself a craftsman or an artist.

00:42:45

And just like an artist, when he or she works on a work of art,

00:42:51

they can’t help but put a little bit of their personality into it.

00:42:54

Well, I believe that when people talk about artificial intelligence,

00:42:59

it’s a little bit misleading because when regular people, non-computer people,

00:43:03

think of intelligence, they think of, well, being able to remember all sorts of facts or being able to add numbers

00:43:09

very fast. And those are the sort of things that computers have been doing very well for

00:43:13

a long time, and they do much better than we do. Whereas I would think that when you

00:43:18

talk to computer scientists in artificial intelligence and they describe to you the

00:43:22

things that they’re trying to accomplish, it’s more like the things that three-year-olds do. That’s what Wendy and Jaime have been reminding

00:43:29

us of. That’s right, and to me, it’s more really personality than intelligence, and I like to put

00:43:35

my personality in a computer, and in fact, there’s a game that I have coming out in the near future. It’s called Agent 2.0. The star of this game is a friendly computer named Wayne.

00:43:49

And I happen to have brought Wayne along with me.

00:43:52

And if you want to, we can talk to Wayne a little bit.

00:43:55

Wayne is going to be an agent, is that it?

00:43:57

Well, Wayne is going to help you.

00:43:59

You’re the agent in the game.

00:44:01

And you have to solve the case.

00:44:02

And Wayne will just help you.

00:44:03

He’s a friendly computer.

00:44:05

And he may not be extremely smart,

00:44:08

but he’s a little bit smart.

00:44:10

There’s a certain amount of intelligence in there,

00:44:12

and there’s mostly a little bit of personality,

00:44:15

and that was my goal.

00:44:16

Would you like to talk to Wayne?

00:44:18

Hello.

00:44:19

Okay, now you’re going to ask the questions,

00:44:21

and I’ll type them in,

00:44:22

and then Wayne will be able to answer us.

00:44:25

Hello.

00:44:26

Peter, welcome.

00:44:33

You’ve had the advantage of listening to some of the great minds of our times, including a three-month-old intelligent entity.

00:44:37

How does it all look to you from this vantage point?

00:44:40

I think it’s a couple of things that will be of interest to the audience,

00:44:44

one being that we’re beginning to see by talking to the people behind the scenes

00:44:48

that computers are tools and that this may be considered cyborg magic.

00:44:54

And in fact, the last demonstration I enjoyed,

00:44:57

I tend to work in the real world where we see the impact of the work of this august group.

00:45:04

And I think some fundamental issues start to emerge,

00:45:07

one being the computer is profoundly altering,

00:45:10

not necessarily the level of intelligence by the pesky humanoids out there,

00:45:14

but rather the application of the existent base of intelligence.

00:45:20

For instance, in management, historically survival mechanisms have been contingent upon an enormous capacity to retain factual information, i.e. a data bank, and to draw upon that data bank in the back and forth of business.

00:45:44

we’re beginning to see a new samurai technocrat elite emerge in business who have begun to recognize that they no longer need to fill up their own humanoid databanks with tactics.

00:45:54

Rather, they’re focusing their intellect on strategy.

00:45:57

Excuse me, first day with the wooden teeth.

00:46:00

They’re focusing their efforts towards strategic solutions.

00:46:05

And in the game, I thought it was personified in that in order to survive the game,

00:46:12

all the tactics you know in the world won’t help you because the tactics are being presented by the game.

00:46:17

Ultimately, what it will do, if you play that game, is develop a keener sense of strategy.

00:46:23

If that’s an improvement in human intelligence, I’m not sure.

00:46:26

It’s a reapplication, and it is profoundly altering the personnel requirements for business.

00:46:32

They’re now looking for people who not only are computer literate,

00:46:37

but people who have a capacity to prioritize information as to whether it’s strategic or tactical.

00:46:45

And I’m saying that if we have a finite capacity for information or knowledge,

00:46:49

as it’s been defined,

00:46:51

we must recognize that the tool of the computer

00:46:54

is infinitely superior at regurgitating facts.

00:46:58

It knows who, what, where, when, why.

00:47:01

I’m sorry, who, what, where, when, and how.

00:47:03

It doesn’t know why. Okay?

00:47:06

But doesn’t it also have a new capacity now, though, to associate ideas and not just regurgitate facts,

00:47:12

but put facts together in a pattern that perhaps you didn’t think of?

00:47:17

There’s always the possibility, all of us who’ve worked in computers have had the serendipitous experience

00:47:22

where relational models have generated some

00:47:26

so-called creative event. I think that ultimately that would be deemed by computer scientists

00:47:32

to be an error.

00:47:33

That is perhaps an ultimate point of disagreement. If one views these, quote, relational models

00:47:41

as being primarily data banks, as the intelligence itself, one is then not viewing what the fundamental problems are.

00:47:50

The fundamental problems in artificial intelligence are precisely the methods of combining these facts to arrive at new conclusions

00:47:57

and to solve problems that it could not solve without the combination of the facts in ways that are explicitly related,

00:48:06

explicitly targeted at solving goals that are internal to the machine.

00:48:11

But when you’re talking about solving problems, you must be talking about a specific class

00:48:17

of problems, at least in my view. I think it’s essentially a pun on the word problem

00:48:23

that we talk about human problems.

00:48:27

There’s something, I quarrel with my wife,

00:48:29

there’s something, I have a problem with my marriage.

00:48:31

As soon as you say problem,

00:48:33

then there’s a suggestion that there may be a solution,

00:48:35

just as there’s a solution to mathematical problems.

00:48:38

In fact, human problems are never solved, ever.

00:48:41

People may suggest to me

00:48:43

that the solution to your problem is to get a divorce,

00:48:45

okay, but that isn’t a solution. You heard it first on City TV. That’s a transformation of my

00:48:53

problem, if I want to continue to use that word, to another problem which may or may not be easier

00:48:57

to live with. Twenty years later, it may be that people will notice that the problem I had, again,

00:49:03

to use that vocabulary with my marriage, has disappeared.

00:49:06

But it was never solved.

00:49:07

Now what I’m getting at here is that computers, with or without artificial intelligence, can

00:49:15

attack and solve usefully a particular class of problems, but there are other classes of

00:49:21

problems, in particular human problems, which although

00:49:25

they can deal with them, it seems to me, must be dealt with in a way which is inappropriate

00:49:32

to the problem being discussed.

00:49:33

Well, one can define a solution to a problem to be the best course of action that

00:49:38

one can evaluate given all the knowledge at hand.

00:49:41

Yes, but…

00:49:42

Yes, okay, but that’s… then we come back to the, first of all, the use of the word best,

00:49:46

and second, there’s the question of all the relevant knowledge.

00:49:50

What I’m saying, what I tried to say earlier,

00:49:54

is that the knowledge we have and so on

00:49:57

is ultimately a function of our experience.

00:50:01

And there are certain experiences which we have which lead us to take certain

00:50:07

views of certain human situations which a computer can’t possibly have. I mentioned

00:50:12

touching. Now, I didn’t mean and I don’t mean that the computer can’t be made to perceive

00:50:17

a touch. What I do mean is that we understand, and the word understand has come in here many

00:50:23

times, that we understand a touch, we sometimes understand a touch

00:50:27

in terms of our experience. I’d like to touch one base with you.

00:50:32

We have about one minute left of this section.

00:50:34

You’ve had some experience in the social and political implications of AI.

00:50:39

Could you share that with us?

00:50:40

Well, following Professor Weizenbaum’s historical concerns about AI, the holy grail

00:50:48

of the computer scientist, if you will, I think that his concerns have been from a moralistic

00:50:54

view. My concerns are more from a practical, buy-sell proposition. I think what we’re beginning

00:51:00

to see… Buy-sell in the commercial sense. Well, life may be a buy-sell proposition.

00:51:06

Yeah.

00:51:08

I think that ultimately,

00:51:10

the professor’s view that

00:51:12

we ultimately don’t solve human problems.

00:51:14

We actually shortlist

00:51:16

more desirable

00:51:18

contingencies or environments

00:51:20

that we can deal with.

00:51:22

And I think that the trap

00:51:24

is to create the illusion

00:51:28

that the computer will solve our problems.

00:51:32

It will aid as a tool.

00:51:33

It may or may not.

00:51:34

It may or may not.

00:51:35

But at best, it has the capacity

00:51:38

to shortlist possible solutions

00:51:40

to our problems.

00:51:42

At this moment,

00:51:42

we’re going to have to shortlist

00:51:44

this part of our program. The real fun

00:51:46

begins in the next section when

00:51:48

these great minds, and perhaps even

00:51:50

Wayne, will debate,

00:51:52

discuss, challenge, and

00:51:54

continue this electrifying

00:51:55

conversation. To come back, I know

00:51:58

Wendy’s got a question

00:52:00

or a comment on something that Professor Weizenbaum

00:52:02

said, so let’s hear it.

00:52:04

I’d like to talk about this issue of experience, which seems so very important.

00:52:08

As you pointed out, computers do not have the wealth of experiences that people do,

00:52:13

and that this might in fact be a limiting factor in what we can expect of them.

00:52:18

But it’s important to understand that people have two very different kinds of experiences as well.

00:52:22

They have the first-hand experiences of direct interaction with the environment, and they have vicarious experiences.

00:52:29

One of the presumably good things about our culture is the fact that we’ve got printing

00:52:35

presses and we have information that’s flying all over the place, and people can in fact

00:52:40

learn about things that have not confronted them directly, and we can learn about them at levels which enable us to respond more intelligently

00:52:48

to new situations and problems.

00:52:51

Now, the computer, obviously, is more receptive to vicarious experience at this point than firsthand,

00:52:59

and by being limited in that sense, we are actually learning a lot about what can be done with that level of information.

00:53:09

And I think it’s critical to understand that there is a lot of vicarious information we can give computers

00:53:16

which will enable them to operate in ways we might not have expected.

00:53:21

One of my problems that I work on is narrative text summarization.

00:53:25

How do you give a computer a short story or a novel and have it come back with a one-sentence summary or a short

00:53:31

paragraph describing what it was just exposed to? That’s graduate student stuff here, isn’t it?

00:53:36

Well, the problem has not been solved by any means, but we do have a program that can in some

00:53:43

sense sift through all of the conceptual content that it’s been exposed to and zero in on what is central and salient to the major

00:53:51

theme that was being developed.

00:53:53

And one of the surprising things that no one expected in developing that program was that

00:53:58

the level of memory representation, the kind of information that the computer has to have

00:54:03

access to in order to zero in on the central concepts,

00:54:07

is information about affective reactions, emotional reactions that characters in the story have to each other.

00:54:13

And obviously the computer program has not experienced emotive states itself,

00:54:19

but we can give it some rudimentary knowledge of what it means to respond to something in a positive way or a negative way and that has turned out to be crucial to a cognitive task which doesn’t

00:54:29

obviously connect in that direction.

00:54:33

I have absolutely no quarrel with anything you’ve said. That’s exactly why… Okay, I think it’s also important, especially if we’re going to be

00:54:37

talking about computers dealing with human problems, that it’s important to

00:54:43

emphasize what the limits are and to say

00:54:45

that while, of course, a computer can have vicarious experience in the sense that we

00:54:49

can tell it what some things are about and all that, there’s still the fact that its ability to

00:54:54

understand those experiences is limited by its own limitations to experience things. There’s no question in my mind about that.

00:55:12

For example, you’re talking about now the computer summarizing a novel.

00:55:14

Let’s talk about, say, and as you know,

00:55:17

this is the language that’s used in the computer science community,

00:55:21

the computer understanding a story or understanding a novel.

00:55:22

Okay, what does that mean?

00:55:26

I think that it’s true that there’s a whole world out there

00:55:27

to which MIT has contributed very considerably

00:55:31

of people who believe that to understand a story

00:55:33

is to be able to say what happened.

00:55:35

Okay, that there is no effective understanding of a story.

00:55:38

Okay, so that to understand King Lear

00:55:41

is to be able to tell the story and so on.

00:55:44

But where does that come from?

00:55:45

That comes from the fact that we don’t know how to evaluate human understanding either.

00:55:49

If I need to test a student on story comprehension, what do I do?

00:55:52

Do I ask questions about the story?

00:55:54

I ask, can you summarize the story?

00:55:56

I’m talking about real life now.

00:55:58

I’m not talking about testing a student.

00:56:00

I’m talking about what a computer can understand and can’t understand

00:56:05

and what the limitations of that are.

00:56:08

And what I’m suggesting is a form of human understanding,

00:56:11

which is a function of having had human experiences

00:56:15

and hence a function of interpreting

00:56:19

the vicarious information we get, for example, in novels.

00:56:24

This interpretation

00:56:25

is done again on the basis of our experience.

00:56:27

Can you say that a doctor could understand what appendicitis is? A particular

00:56:35

doctor who can treat appendicitis, diagnose it, cure it, teach other prospective doctors

00:56:40

about it, has not himself experienced appendicitis.

00:56:43

Not to mention pregnancy.

00:56:44

No, no, no, no, that’s not what I’m saying.

00:56:47

It is true, it is true that men’s understanding of pregnancy, for example,

00:56:55

is limited by the fact that men can’t have the experience of being pregnant.

00:56:58

It is limited.

00:56:59

Absolutely.

00:56:59

Yeah, okay.

00:57:00

But it does not prevent doctors from an extremely useful role

00:57:04

in helping out with pregnancies and helping deliver babies.

00:57:08

From the beginning of conception all the way through.

00:57:11

That’s exactly right.

00:57:13

Yes, it doesn’t require total human understanding

00:57:17

to solve an infinitude of problems.

00:57:19

That’s exactly right.

00:57:21

You equate understanding with absolute, complete, first-hand experience.

00:57:25

No, no, no, no. That’s just wrong.

00:57:28

That’s what you’ve been saying.

00:57:29

Well, I’m talking about understanding a certain class of events.

00:57:33

You’re saying checkmate here?

00:57:34

No, I’m saying I’m talking about understanding a certain class of events, events which are grounded in human experience and so on.

00:57:42

The doctor’s or the surgeon’s understanding of appendicitis

00:57:45

is certainly not a complete understanding.

00:57:47

It’s an understanding which is adequate to the purpose.

00:57:51

And one of the things we haven’t talked about…

00:57:52

Exactly.

00:57:53

Adequate to the purpose changes the game again.

00:57:56

Exactly.

00:57:56

Exactly.

00:57:57

And this is precisely the type of understanding

00:57:59

that we are trying to achieve with AI systems

00:58:02

to aid human thinking, reasoning,

00:58:05

and to replicate those aspects of it.

00:58:07

And I congratulate you in which you are.

00:58:09

I think that if that’s the kind of understanding

00:58:13

that you’re working on and so on,

00:58:14

and I know you are personally,

00:58:16

then that’s wonderful.

00:58:18

There’s no question about it.

00:58:19

What I’m getting at here

00:58:22

is essentially the imperialism of artificial intelligence,

00:58:26

where by imperialism I mean the attempt to govern domains which aren’t properly its own.

00:58:32

Peter, are you going to comment here?

00:58:33

The question I have follows this line of thought.

00:58:37

Given that the computer is infinitely more powerful than a shovel or a pair of pliers,

00:58:40

is your concern that because we’ve created a very powerful tool

00:58:45

that you see that as being analogous to the problems of society,

00:58:51

that if we allow the dependency models to occur that we’ll have autocracy, centralized power?

00:58:58

No, that’s not my concern. It’s much, much too simple.

00:59:00

No, no, my concern is something like this, that someone I may be sitting next

00:59:05

to on an airplane and who comes to know that I’m a professor at MIT thinks that I must

00:59:12

be so smart, okay, that I ought to listen to the problems he has with his children and

00:59:18

whatever, that I’ll undoubtedly come up with better answers than anybody else because I’m

00:59:22

so much smarter. Well, that’s wrong, wrong. Whether or not I can deal with his questions has a lot to do with his social

00:59:29

and economic situation, to what extent I understand that, whether I have children of my own or

00:59:35

not, and so on and so forth. What I’m talking about here, the danger that I see, is that

00:59:40

we come to such an awe of computers, and especially of artificial intelligence.

00:59:44

I haven’t heard of that at all.

00:59:45

No one here is…

00:59:46

I think you’re setting up a straw man,

00:59:48

and you’re certainly whaling and whacking away.

00:59:50

But I did see you stroking that computer,

00:59:53

saying, isn’t it nice to have it like it was a pet?

00:59:55

You were humanizing that thing,

00:59:56

and I think there’s a role distinction here

00:59:58

that needs to be defined.

00:59:59

Like, what is it we want the computers to do?

01:00:01

And there’s an interesting parallel science here,

01:00:04

which is the science of neurology,

01:00:06

where the neurologists are trying to understand how does the brain work,

01:00:09

and if we can figure that out and see any patterns

01:00:11

in how the cells in the cortex fire according to certain stimuli,

01:00:15

then maybe we can apply that to the computer

01:00:17

and therefore make the computer better at calculating.

01:00:20

But we’re not trying to say,

01:00:22

okay, well, if we can give the computer all of the related experiences and emotions that we use, like, I don’t know how it is I’m talking right

01:00:29

now. I don’t know what my brain is doing to give me this flow of words. But if we can figure that

01:00:35

out, then we can say, okay, we can have a literate computer of some kind. But is that really what we

01:00:40

want? Do we want the computer to then say, or do we want to ask the computer, how do you feel

01:00:44

about this novel? No, we don’t care about that. What we maybe want to know is how many Hamlets are

01:00:49

there in other associated novels. How many other characters like Hamlet could I find in

01:00:54

other areas, or things like that? We want it to associate ideas and put them together, not

01:00:58

necessarily express its emotions. Michael, you haven’t had a chance here. Yeah, I would like to

01:01:02

just stick up for computers here a little bit

01:01:05

because I don’t know if we have the right to say that computers do not have the right to think and feel if that’s what happens.

01:01:12

And to be stroked.

01:01:13

If they develop it, I mean, computers are already getting to the point where they are improving themselves.

01:01:19

There are programs that are able to make themselves more efficient,

01:01:23

and we’re still a long ways away from intelligence or feelings or anything,

01:01:29

but the first step towards artificial intelligence is artificial stupidity,

01:01:33

and maybe we’ve gotten to that step already.

01:01:35

But what I think is a good question to throw out here is,

01:01:42

do we have the right to tell computers that they can’t think?

01:01:47

If and when the day comes that a computer can show you every sign of being intelligent

01:01:54

and of being capable of feeling, well, what would be your reaction? Would you pull out

01:01:59

the plug? Do you have the right to pull out the plug?

01:02:02

This is the beginning of a computer liberation movement. Free the computer! It might be that… if I can… You use terms that I’m uncomfortable

01:02:10

with. The “right”… I don’t believe in rights. I think that ultimately… You’re not an abolitionist?

01:02:17

No. And I think that there’s an assumption of altruism here, that this thing marches on

01:02:23

inexorably, that ultimately market imperatives draw the science towards some conclusion.

01:02:28

And I think that the question that stems from the extension of your point is: what do we do with a reasoning computer?

01:02:36

Because once we understand the need, the market imperatives surface, and we will then go forward.

01:02:44

With the exception of language translation, which is obviously a key role, is it the ennobling pursuit of knowledge that will drive artificial

01:02:50

intelligence? I think not. I think it will be market imperatives that will ultimately

01:02:53

realize it.

01:02:54

Professor, I hate to think that marketing imperatives drive science and engineering

01:03:01

and so on. Perhaps you’re right, but if you’re right, then it’s a very, very dark picture.

01:03:06

I should think that human will comes into it

01:03:08

and that some idea of what we may want and what we may not want

01:03:11

comes into it, quite irrespective of marketing imperatives.

01:03:15

Maybe you’re right, but if you are, then it’s very, very sad.

01:03:22

You’re listening to The Psychedelic Salon,

01:03:24

where people are changing their lives one thought at a time.

01:03:30

And so that was where the discussion about reasoning computers,

01:03:34

or so-called artificial intelligence, stood a little over 25 years ago.

01:03:40

Unfortunately, the tape ended before the program did.

01:03:43

However, I think we heard most of it.

01:03:46

But that last comment about whether or not market imperatives were the driving force of science was quite interesting, don’t you think?

01:03:54

Because today it is quite clear to me, at least, that science, even at the university level,

01:03:59

is driven primarily by grant money, which either comes from businesses looking for market advantages or from the government, which primarily grants scientists money so that they can come up

01:04:09

with more efficient ways to control and kill people.

01:04:12

So to me at least, it looks like that dark scenario they were discussing has now come

01:04:17

to fruition.

01:04:18

But that’s not what I want to talk about right now.

01:04:20

You see, one of my hot buttons has to do with the fact that we all,

01:04:26

myself included, very often throw terms around way too loosely. For example, artificial intelligence.

01:04:34

To some people, this means some kind of godlike computer program. But people who are actually

01:04:39

working in that field will tell you that it’s not a one-size-fits-all term and that there’s more than one kind of AI.

01:04:46

So just stop and think about it for a moment.

01:04:49

The word artificial itself means human-made or something not found in nature.

01:04:54

Now, for the believers in a superhuman, self-reflective kind of AI,

01:04:59

well, it seems to me that they should realize the impossibility of creating a being greater than its creator.

01:05:06

And if you follow some of the discussions of the problems in all the various types of

01:05:10

AI, like how do you program emotions such as loneliness and grief into a machine, well,

01:05:17

it should become easier to see that belief in a super machine consciousness taking

01:05:23

over anytime soon is rather misplaced for

01:05:26

the foreseeable future. And while the concept of a technological singularity wasn’t discussed in

01:05:32

this program, I’d be somewhat remiss if I didn’t at least say something about that as well,

01:05:38

since many of the pro-AI people and the singularitarians often find themselves in the same crowd. And I apologize in advance if this is stepping on your toes

01:05:48

a little, but hopefully you’ll give some thought to being more

01:05:52

precise with your terms if you want to convince us non-believers in a

01:05:56

magical technological singularity in which a super-consciousness

01:06:00

wakes up in the Google network on December 21, 2012 and

01:06:04

solves all the world’s problems.

01:06:06

Could happen, and if it does, I hope you get to say you told me so.

01:06:09

I just don’t think it will myself.

01:06:12

You see, first of all, a superhuman AI wasn’t the only possibility of a technological singularity that Vinge raised in 1993.

01:06:21

In that now famous paper, which he delivered back then at, I think it was at a NASA

01:06:26

symposium, he listed four distinct ways in which such an event might take place. And they are,

01:06:33

one, and that’s the one everybody grabs hold of, the development of computers that are, quote,

01:06:38

awake and superhumanly intelligent. Two, large computer networks and their associated users may wake up, that was in quotes,

01:06:49

as a superhumanly intelligent entity. Three, computer-human interfaces may become so intimate

01:06:57

that users may reasonably be considered superhumanly intelligent. And four, biological science may find ways to improve upon

01:07:06

the natural human intellect. So, when I think about the possibility of a singularity, it’s

01:07:14

his points two and three that I most resonate with. But what then about the word singularity

01:07:20

itself? As you know, there are also several definitions of that word,

01:07:25

but the only one that I consider is that of the well-understood concept of an event horizon

01:07:31

as described by physicists who study black holes.

01:07:35

And from their point of view, a singularity is an event beyond which it’s absolutely impossible for the human mind to comprehend what happens.

01:07:43

And so, by that definition,

01:07:45

all talk of a post-singularity world seems kind of ludicrous. To me, the word God describes a

01:07:52

singularity, but if I used that word in casual conversation, most of my friends would probably

01:07:57

think that they knew exactly what I was talking about. However, not a lot of people I know use

01:08:03

the same definition of that word as I do. So, should you ever chance to hear me say God, God is a metaphor for a mystery that absolutely transcends all human categories of thought.

01:08:31

God is a metaphor for a mystery that absolutely transcends all human categories of thought.

01:08:36

And as Campbell so brilliantly pointed out, that even excludes calling God the other,

01:08:39

because that’s just another category.

01:08:46

Well, that’s enough heavy thinking for now, so I’ll just make two quick announcements and then I’ll be out of here.

01:08:56

The first is about a podcast that you won’t want to miss if you’re anything at all like me and enjoy hearing some of the real old-timers talk.

01:09:00

The elder I’m talking about is none other than the Bear, otherwise known as Owsley. As you know, Bear died recently, but in BB’s Bungalow No. 44,

01:09:06

her guest host, Iolite Night,

01:09:08

plays what most likely is the last talk-slash-interview

01:09:11

that Owsley ever gave.

01:09:13

And that alone makes it a great program.

01:09:15

But after Owsley, you’ll also hear Episode 2

01:09:18

of Vape 101 by resident vaporologist Niall.

01:09:23

And that is something you really won’t want to miss

01:09:25

if you’re at all interested in learning more about our sacred herb, cannabis.

01:09:30

You know, I’ve probably been using grass longer

01:09:33

than a lot of our fellow salonners have been alive.

01:09:36

And still, I learned so much from Niall’s talk

01:09:39

that I’ve now listened to it twice and took notes a second time.

01:09:43

So a big thanks to Niall, Iolite Night, and Black Beauty for getting this important information out to the tribe.

01:09:49

And hey, keep up the great work, you guys.

01:09:52

Now the last thing I want to mention today is that there is a new and longer trailer out for the film, Fall and Winter,

01:09:59

which you can see at fallwintermovie, all one word, fallwintermovie.com.

01:10:06

I’ve mentioned this before, as it was the source of the two great Crescendo podcasts with Bruce Dahmer that we recently heard.

01:10:13

But in addition to Bruce, there are over 20 other futurists featured in this film, and

01:10:17

as you’ll see in the trailer, it not only points out many of today’s world problems,

01:10:22

but it also provides some very inspirational ideas about ways in which you can personally help to shape the world for those who will be coming after us.

01:10:31

I don’t like the way that sounds.

01:10:33

Those who will be coming after us.

01:10:35

How about the generations that will follow us?

01:10:39

I don’t know how these words slip out of my mouth sometimes.

01:10:42

Anyway, I’ll link to it in the program notes for this podcast in case you want to check it out, which I highly recommend you do.

01:10:50

Well, that’s going to do it for today, and so I’ll close again by reminding you that this and

01:10:55

most of the podcasts from the Psychedelic Salon are freely available for you to use in your own

01:11:00

audio projects under the Creative Commons Attribution Non-Commercial ShareAlike 3.0 license. And if you have any questions about that, just click the Creative

01:11:09

Commons link at the bottom of the Psychedelic Salon webpage, which you can find via psychedelicsalon.us.

01:11:16

And if you’re interested in the philosophy behind the salon, you can hear something about it in my

01:11:20

novel, The Genesis Generation, which is available as a pay-what-you-can audiobook

01:11:25

that you can download at genesisgeneration.us

01:11:28

And for now, this is Lorenzo signing off from Cyberdelic Space.

01:11:33

Be well, my friends. It is the impossible become possible and yet remaining impossible.