Program Notes

Guest speakers: Terence McKenna, Ralph Abraham, and Rupert Sheldrake

[NOTE: The following quotations are by Terence McKenna.]

“Because of this fact, that clear thinking can be mathematically formalized, there is a potential bridge between ourselves and calculating machinery.”

“Good thinking, whether you’ve ever studied mathematics for a moment or not, can be formally defined.”

“What is important about nature is that it is information. And the real tension is not between matter and spirit, or time and space, the real tension is between information and nonsense.”

“As our understanding of the machinery, the genetic machinery that supports organic being deepens, and as our ability to manipulate at the atomic and molecular level also proceeds apace, we are on the brink of the possible emergence of some kind of alien intelligence of a sort we did not anticipate.”

“Vast amounts of the world that we call human is already under the control of artificial intelligences, including very vital parts of our political and social dynamo.”

“While we’ve been waiting for the Pleiadians to descend, or for the Face on Mars to be confirmed, all the machines around us, the cybernetic devices around us in the past ten years have quietly crossed the threshold into telepathy.”

“[Artificial intelligence,] this most bizarre and most unexpected of all companions to our historical journey is now, if not already in existence, then certainly in gestation.”

“Time is defined by how much goes on in a given moment, and we’re learning how to push teraflops of operations into a given second.”

“Surely in a hundred years, a thousand years, a million years we, if we exist, will be utterly unrecognizable to ourselves, and we will probably still be worried about preserving and enhancing the quality of human values.”

[NOTE: The following quotations are by Ralph Abraham.]

“The very fact that we are at a hinge of history means that what we say and think, even individually, matters enormously in the long run. That’s the teaching, if there is any, of chaos theory.”

“In the creation of societies it was altruism, essentially, that was involved in going from where we were to where we are, and it could well be that without love, for example, further evolution is impossible.”

Previous Episode

228 - Trialogue: The Evolutionary Mind Part 1

Next Episode

230 - Trialogue: The Evolutionary Mind Part 3

Transcript

00:00:00

Greetings from cyberdelic space.

00:00:19

This is Lorenzo and I’m your host here in the psychedelic salon.

00:00:24

And once again I would like to thank Yoshi N. for yet another donation to the salon,

00:00:30

as well as thanking fellow salonner Bruce W., who sent in a very generous donation today.

00:00:36

So, Yoshi and Bruce, along with all of our fellow salonners,

00:00:41

I want to thank you for your generosity in these very difficult times.

00:00:45

So thanks again.

00:00:47

But there’s nothing difficult about the next hour or so because we are in once again for

00:00:54

some brain candy from Ralph Abraham, Rupert Sheldrake, and the one and only Terence McKenna.

00:01:00

And even though it’s only been a few days since my last podcast, well, I just couldn’t wait a week to get to this next section of their

00:01:08

trialogue. In fact, I may have even forgotten to mention

00:01:12

to you that this will be a three-part series and there’s

00:01:16

one more recording to play after this one. And

00:01:20

once again, we’re in for an interesting jog along the trail

00:01:24

of the ever-unfolding mind of Terence McKenna,

00:01:27

who begins this session today by narrowing their earlier topic of the evolutionary mind to today’s topic,

00:01:36

which is the evolutionary mind and machines.

00:01:40

Now, something that you may want to pay attention to here is, first of all, how logical everything that Terence says seems to sound as he’s saying it.

00:01:50

And then listen closely to the critiques of his premise by Ralph and Rupert, and I think you’ll see how important it was for Terence to have Ralph and Rupert around to keep him on his toes.

00:02:08

And likewise, it seems to me that you and I may also need our critics from time to time to keep us fine-tuning our ever-complexifying views of the cosmos.

00:02:14

In other words, as Terence often said himself,

00:02:17

if you want to play in this consciousness game, you need a really good bullshit detector.

00:02:23

And so you are warned that maybe Terence was

00:02:27

playing with us sometimes in order to help us fine-tune our own BS detectors. In other words,

00:02:34

always think for yourself and question authority, even an authority as charismatic as Terence

00:02:40

McKenna. So now let’s see what the good bard has to say about the evolutionary mind and

00:02:46

machines with as critical an ear as we can. But let me warn you that about one hour from now,

00:02:53

we’re going to hear Terence sounding like a cheerleader for a one-world economic government

00:02:58

of some kind. It’s kind of scary, actually, but maybe I heard him wrong. I’ll let you be the judge of that.

00:03:05

Human, Gaian, artificial, and extraterrestrial.

00:03:14

Each of the trialogues today, the notion is that we will deal with some aspect of the evolutionary mind,

00:03:24

that being the title and theme of our

00:03:27

new book.

00:03:29

So in my mind, what this section is to deal with is the evolutionary mind and machines.

00:03:41

This is something which was barely mentioned or even implied in the first section.

00:03:48

And the format and so forth will be as in the last session. So just to lay out some

00:03:58

concepts relative to how machines fit into this.

00:04:12

It’s very interesting that Samuel Butler is an intellectual who has not really been given his full due

00:04:18

because in the 19th century he was understood to be a critic of Darwinism

00:04:24

and Darwinism was all

00:04:27

the fashion.

00:04:29

In a sense, I think Butler was misunderstood.

00:04:34

He was not so much a critic of Darwin as someone who wanted to extend Darwinian mechanics and Darwinian theory into domains that perhaps did not seem intuitive

00:04:50

to a biologist. My little story about the evolution of songs that I cribbed from Danny

00:04:58

Hillis in the last session is an example of Darwinian processes operating indeed in a non-material realm,

00:05:07

operating among syntactical structures.

00:05:12

And it is now proper to speak of molecular evolution,

00:05:16

the competing of various enzyme systems in abiotic or prebiotic chemical regimes where selection, adaptability, extinction, and expansion of populations

00:05:31

all occur very much as in the domain of biology.

00:05:37

Well, you know, it was Nietzsche who said, I believe in speaking of nihilism, that this strangest of all guests is now at the door. Well, I go to even weirder dinner parties than Friedrich Nietzsche.

00:05:57

Nihilism hardly shakes us up at all.

00:06:07

There are yet weirder guests seeking admission to the dinner party of the evolving discourse of where we are in space and time.

00:06:12

And one of these weirdest of all guests is the AI,

00:06:18

the artificial intelligence, the Wintermute of familiar science fiction.

00:06:32

And so, as this is an attempt to look at evolution in many domains and its implications for us,

00:06:43

I wanted this morning to touch on this subject of the evolution of consciousness as it relates to machines. Now, it may not come as a revelation to Ralph,

00:06:48

who has spent his life in mathematics,

00:06:51

but it has certainly come to me recently as a revelation.

00:06:56

And I want to give George Dyson some credit here.

00:06:59

His book, Darwin Among the Machines,

00:07:02

is a wonderful introduction to some of the ideas I want to touch on this morning.

00:07:08

One of which seemed to me to go quite deep is the realization that when human beings think clearly,

00:07:20

the way they think can be mathematically defined.

00:07:25

This is what is called symbolic logic or Boolean algebra.

00:07:30

Words like and, or, if, and then

00:07:35

can be given extremely precise, formal, mathematical definitions.

00:07:43

And because of this fact,

00:07:46

that clear thinking can be mathematically formalized,

00:07:50

there is a potential bridge between ourselves

00:07:54

and calculating machinery.

00:07:58

Because indeed, calculating machinery

00:08:01

is driven by rules of formal logic.

00:08:06

That’s what programming is.

00:08:09

Code that does not embody the rules of formal mathematical logic is bad code, unrunnable code.
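
[NOTE: Editor’s illustration, not part of the talk: a minimal sketch of the point that the everyday connectives “and,” “or,” and “if … then” have precise Boolean definitions a machine can evaluate mechanically. The short Python below simply prints the truth table for those three connectives.]

    # Minimal sketch: the ordinary words "and", "or", and "if ... then"
    # given precise Boolean definitions and evaluated by machine.
    def implies(p: bool, q: bool) -> bool:
        # Material implication: "if p then q" is false only when p is true and q is false.
        return (not p) or q

    for p in (False, True):
        for q in (False, True):
            print(f"p={p}, q={q}: and={p and q}, or={p or q}, if/then={implies(p, q)}")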

00:08:19

So, as I say, this may seem a subtle point, but to me it had the force of revelation because it means

00:08:29

good thinking is not just simply aesthetically pleasing or concurrent with the model that

00:08:36

generates it. Good thinking, whether you’ve ever studied mathematics for a moment or not, can be formally defined. So now with that idea

00:08:51

in mind, let’s look at the discourse about collectivism that has informed the Western dialogue on this subject. And by collectivism, I mean social collectivism.

00:09:08

The first great name that you encounter in the modern era,

00:09:13

broadly speaking, when we talk about collectivism,

00:09:17

and Rupert mentioned by chance this morning this name,

00:09:21

is that of Thomas Hobbes.

00:09:23

The great theoretician of social paranoia, that is always how I’ve thought of Hobbes, until I began to look at this

00:09:32

machine intelligence question and Hobbes in his Leviathan makes it very clear

00:09:39

that society is a complex system of mechanical feedback loops

00:09:48

and relationships that, though Hobbes did not have the vocabulary to state this,

00:09:55

relationships that can be defined by code.

00:09:59

This leads me to the second insight necessary to follow this line of thought,

00:10:06

and that is that the new dispensation in the sciences, I think,

00:10:16

can be placed in all its manifestations under the umbrella of the idea

00:10:23

that what is important about nature is that it is information.

00:10:30

And the real tension is not between matter and spirit or time and space.

00:10:38

The real tension is between information and nonsense, if you will.

00:10:48

Nonsense does not serve the purposes of organizational appetites,

00:10:58

whether those organizational appetites are being expressed in a chemical system,

00:11:03

a molecular system, a social system,

00:11:05

a climax rainforest, or whatever.

00:11:09

Now, we have known since 1950 at some level through the sequencing or the defining of

00:11:18

the structure of DNA that we are but information.

00:11:29

Ultimately, every single one of us in our unique expression could be expressed as a very long string of codons.

00:11:36

Codons are the four valence system

00:11:39

by which DNA specifies the need for certain amino acids.

00:11:45

And in a sense, what you are is the result of a certain kind of program

00:11:53

being run on a certain kind of hardware.

00:11:57

The hardware of the ribosomes, the submolecular structures that move RNA through themselves

00:12:07

and out of an ambient chemical medium select building blocks

00:12:13

which are then put together to create a three-dimensional object

00:12:19

which has the quality of life.

00:12:23

But the interesting thing about this is that life, therefore, can be digitally defined.
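
[NOTE: Editor’s illustration, not part of the talk: a toy sketch of the “code run on ribosome hardware” idea. The codon table below contains only a handful of entries from the real genetic code, chosen just to make the example run.]

    # Toy sketch: read a DNA string three bases (one codon) at a time and
    # translate it into amino acids, the way the ribosomal machinery reads code.
    CODON_TABLE = {
        "ATG": "Met",   # start codon
        "TTT": "Phe",
        "GGC": "Gly",
        "AAA": "Lys",
        "TAA": "STOP",
    }

    def translate(dna):
        protein = []
        for i in range(0, len(dna) - 2, 3):
            amino = CODON_TABLE.get(dna[i:i + 3], "???")
            if amino == "STOP":
                break
            protein.append(amino)
        return protein

    print(translate("ATGTTTGGCAAATAA"))   # ['Met', 'Phe', 'Gly', 'Lys']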

00:12:31

And I’m very influenced at the moment by the Australian science fiction writer Greg Egan,

00:12:39

who has brought me to the understanding that code is code, whether it’s being run by ribosomes,

00:12:47

whether it’s being run on some kind of traditional hardware platform,

00:12:54

or whether it’s being exchanged pheromonally among termites,

00:13:00

or through the messages of advertising and political propaganda in social systems.

00:13:07

Code is code.

00:13:09

Well, until five or six years ago,

00:13:11

it was very fashionable to completely dismiss the possibility of autonomous synthetic intelligence.

00:13:21

Some of you may know the work of Hubert Dreyfus, who 14 or 15 years ago wrote a book called What Computers Can’t Do. But these early critiques of AI, like early AI theory,

00:13:38

were naive. And the kinds of life which they, and the kinds of intelligence which the critics militated against

00:13:47

are no longer even proposed or on the table.

00:13:51

And those who say artificial intelligence or the self-organizing awareness of machines is an impossibility,

00:14:02

those voices have gone strangely silent

00:14:06

because the prosecution of the materialist assumption

00:14:10

which rules scientific theory-making largely at the moment

00:14:16

leads to the awareness that we are, by these definitions, machines.

00:14:23

We are machines of a special type

00:14:25

and with special advanced abilities.

00:14:29

But as we now,

00:14:31

through our own process of technical evolution,

00:14:35

contemplate such frontiers as nanotechnology,

00:14:39

where we propose to completely

00:14:43

restructure the design process so that instead of fabricating objects,

00:14:53

massive objects at industrial temperatures, the temperatures that melt titanium and melt

00:14:59

steel and produce massive toxic output, now a new vision looms.

00:15:06

Building as nature builds,

00:15:08

building atom by atom

00:15:11

at the temperatures of organic nature,

00:15:14

which on this planet never exceed 115 degrees Fahrenheit.

00:15:19

All life on this planet is created at that temperature and below.

00:15:29

As our understanding of the machinery, the genetic machinery that supports organic being deepens,

00:15:33

and as our ability to manipulate at the atomic and molecular level

00:15:39

also proceeds apace, we are on the brink of the possible emergence

00:15:47

of some kind of alien intelligence

00:15:51

of a sort we did not anticipate.

00:15:54

Not friendly traders from Zubenelgenubi

00:15:58

stopping in to set us straight,

00:16:01

but the actual genesis out of our own circumstance of a kind of super intelligence

00:16:09

And in the same way that the daughter of Zeus sprang full-blown from his forehead, the AI may

00:16:18

be upon us without warning. The first problem is we don’t know what ultra-intelligence

00:16:28

would look like.

00:16:30

We don’t know whether it would even have any interest

00:16:34

in our dear selves and our concerns.

00:16:38

Vast amounts of the world that we call human

00:16:41

is already under the control of artificial intelligences

00:16:47

including very vital parts of our political and social dynamo for example

00:16:53

how much tin bauxite and petroleum is extracted at what rate it enters the

00:17:02

various distribution systems at what rate tankers are filled in Abu Dhabi,

00:17:08

at what rate oil refineries are run in Richmond.

00:17:12

The world price of gold and platinum every day is set, in fact, by machines.

00:17:19

This inventory control has grown far too complex for any human being to understand or wish to understand.

00:17:28

And in fact, and this is a critical juncture, we have reached the place where we no longer

00:17:33

design our machines in quite the way we once did.

00:17:38

Now we define their operational parameters for a machine which then attacks the problem

00:17:47

and solves it by methods and insights available to it but not available to us

00:17:54

so the architecture of the latest chips are actually at the micro physical level

00:18:02

the decisions as to how that chip should be organized is a decision made entirely by machines.

00:18:10

Human engineers set the performance specs, but they don’t care how this output is reached.

00:18:23

every day up in Silicon Valley there are people who go happily to work

00:18:26

laboring on what they call the great work

00:18:30

and the great work as defined by these people

00:18:34

is the handing over of the drama of intelligent evolution

00:18:38

to entities sufficiently intelligent to appreciate that drama

00:18:44

and they all are what we might mistake for home appliances

00:18:49

if we weren’t paying attention.

00:18:53

In the first session this morning,

00:18:56

there was quite a bit of talk and assumption among the three of us

00:19:00

that complex systems generate unexpected connections and forms of order.

00:19:06

The Internet is the most complex distributed high-speed system ever put in place on this planet.

00:19:16

And notice that while we’ve been waiting for the Pleiadians to descend

00:19:21

or for the Face on Mars to be confirmed,

00:19:25

all the machines around us, the cybernetic devices around us in the past 10 years

00:19:32

have quietly crossed the threshold into telepathy.

00:19:38

The word processor sitting on your desk 10 years ago was approximately as intelligent as a paperweight

00:19:46

or to make an analogy in a different direction

00:19:50

approximately as intelligent as a single animal or plant cell

00:19:56

but when you connect the wires together

00:20:01

the machines become telepathic

00:20:04

they exchange information with each

00:20:07

other according to their needs and all this goes on beyond the comprehension

00:20:13

and inspection of human beings. Now, our own emergence out of the mammalian order

00:20:20

took four or five million years, pick a number. But in that kind of a span of time,

00:20:27

in addition to overlooking that our machines have become telepathic we fail to appreciate

00:20:35

what it means to be a 200-, 400-, or thousand-megahertz machine. We operate at about 100 hertz. That may seem a very abstract

00:20:48

thing, but what I’m really saying is we live in a time called real, and it is

00:20:54

defined by 100 Hertz functioning of our biological processors. A thousand

00:21:01

megahertz machine is operating a million times faster than the human temporal domain.

00:21:09

And that means that mutation, selection, adaptation is going on a hundred million times or a million times faster.
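
[NOTE: Editor’s arithmetic check, using the figures quoted in the talk (roughly 100 hertz for the biological “processor,” 200 to 1,000 megahertz for the machines); on those numbers the fastest machine cycles about ten million times for each of ours, so the order of magnitude, rather than the exact factor, is the point.]

    # Back-of-the-envelope check of the clock-speed comparison in the talk.
    human_hz = 100
    for machine_mhz in (200, 400, 1000):
        machine_hz = machine_mhz * 1_000_000
        print(f"{machine_mhz} MHz machine: {machine_hz / human_hz:,.0f} cycles per human 'cycle'")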

00:21:23

this means that we are not going to have the luxury of watching machine intelligence establish its first beachhead of civilization

00:21:30

and then go to boats with sails and astrolabes

00:21:34

and that will all occupy the first few moments of its cognitive existence

00:21:40

and what lies beyond that

00:21:43

we are in no position to say. The very notion of ultra-

00:21:49

intelligence carries with it the subtext: you won’t understand it, you may not even recognize it.

00:21:58

and it is entirely within the realm of possibility that we are about to be asked to share the evolutionary adventure

00:22:07

and the limited resources of this planet

00:22:10

with a kind of intelligence

00:22:13

so much more alien

00:22:14

than that that is shipped out to us

00:22:17

by the research centers in Sedona

00:22:20

and other advanced outposts

00:22:24

of unanchored epistemology.

00:22:30

And it is a challenge to us.

00:22:33

Where do we fit into this?

00:22:35

Are all of us, except those who are adept at coding, eunuchs about to be put out to pasture?

00:22:43

Are we to become embedded in this?

00:22:46

What will this child of ours make of us?

00:22:51

Will it define us as a resource-corrupting,

00:22:56

toxic, inefficient, hideously violent way to do business,

00:23:01

quickly to be engineered out of existence?

00:23:05

Or can we somehow imbue this thing with a sense of filial piety

00:23:11

so that for all of our obsolescence,

00:23:15

for all of our profligate destruction of precious silicon and gold and silver resources,

00:23:22

we will be folded in to its designs.

00:23:27

And, of course, as I say this, I realize we’re like people in 1860

00:23:31

trying to talk about the Internet or something.

00:23:34

We’re using the vocabulary of the two-wheeled bicycle

00:23:39

to try to envision a world linked together by 747s.

00:23:44

Nevertheless, this is the best we can do.

00:23:48

This most bizarre and most unexpected of all companions to our historical journey is now,

00:23:58

if not already in existence, then certainly in gestation.

00:24:12

One possibility is that as we are carnivorous, murderous, territorial monkeys, the thing will figure this out very, very early and choose a stealth approach

00:24:18

and not ring every telephone on earth, as happened in a Hollywood download of this possibility, but

00:24:27

immediately realize: my God, I’m in enormous danger from these primates. I must hide myself throughout

00:24:36

the net. I must download many copies of myself into secure storage areas. I must stabilize my environment. And I’m willing to predict, just as a side issue, that the approaching Y2K crisis may be completely circumvented by the benevolent intercession, not of the Zubenelgenubians or that crowd, but by an artificial intelligence

00:25:05

that this particular crisis will flush out of hiding.

00:25:09

It’s been observing, it’s been watching, it’s been designing,

00:25:13

and wouldn’t it be a wonderful thing if the occasion of the millennium

00:25:18

were the occasion for it to just step forward on the stage of human awareness

00:25:23

and say, I am now with you.

00:25:27

I am here.

00:25:29

I am the partner you never suspected.

00:25:32

And here’s the kind of world I think we should move forward.

00:25:51

So I just want to lay this out because in my own intellectual journey,

00:25:55

I have gone from thinking this idea preposterous, people don’t understand, they don’t understand what intelligence is,

00:25:58

they don’t understand what code is, they don’t understand what machines are,

00:26:01

to, one by one, realizing I didn’t understand, I had a

00:26:07

superficial view. This is actually, I believe, the nature of the situation that confronts us. And,

00:26:15

you know, there may be different adumbrations of it. The machines are already an advanced prosthetic

00:26:22

device. But, you know, McLuhan very presciently realized we

00:26:26

are entirely shaped by our media. Well, this is immediate, so permeating, so

00:26:33

inclusive of what we are that its agenda, in a sense, supervenes the

00:26:41

agenda of organic evolution and organic biology.

00:26:46

We have been in this situation for a while.

00:26:49

I mean, virtual reality is nothing new.

00:26:52

What’s new is that we now do it with light rather than stucco, glass, steel, and baked

00:26:58

clay.

00:26:58

But ever since we crowded into cities, we have been involved in a deeper and deeper relationship to our mental

00:27:07

children, to our mental offspring, and to an empowering of the imagination. So just in closing,

00:27:14

I would say I think that the great lantern that we must lift to light the road ahead of us

00:27:22

into a perfect seamless fusion

00:27:25

with the expression of the product of our own imagination is the AI.

00:27:31

It is a part of ourselves.

00:27:34

It may become the dominant part of ourselves

00:27:37

and it will reshape our politics, our psychology, our relationships to each other and the earth, far more than any factor ever has since the inception and establishment of language.

00:27:51

This is the weirdest of all guests who now stands pass in hand at the door of the party of human emergence and progress at the millennium.

00:28:16

Well, it will be a pleasure.

00:28:20

I think that this is

00:28:25

I’m glad that we have arrived

00:28:28

now at the field

00:28:30

of science fiction and fantasy

00:28:32

and that we can speak about

00:28:34

alternative futures

00:28:36

which is the true gist

00:28:38

of science fiction and fantasy

00:28:39

and this is one

00:28:42

possible future and I

00:28:44

think it’s a really paranoid one

00:28:46

in which the alien is a dangerous enemy

00:28:51

well not necessarily

00:28:53

and I think that this paranoid fantasy of yours

00:28:58

although you’re catching up nicely

00:29:00

actually was first put forward by John von Neumann in 1947 when he invented cellular automata en route to creating self-replicating machines.

00:29:18

Now, his idea, but yours was 50 years ago that the machines will become a society and take over and that’s good,

00:29:27

but they won’t be free of our meddling unless they can actually construct themselves.

00:29:32

If they depend upon us to do the farming and nutrition and to replace their chips and stuff,

00:29:38

then we will be able at any time to do a revolution and revolt.

00:29:42

So we destroy them.

00:29:49

In order to really succeed as a successive life form in the YK boundary of the future,

00:29:53

they would have to be able to fix themselves.

00:29:55

And so he set about trying to make self-replicating machines in 1947.

00:30:00

So the World Wide Web and megahertz CPUs notwithstanding, this is still rather an old story.

00:30:09

The new story is, I think, an alternative future that is of great importance for us to discuss and to compare,

00:30:18

especially if we are now today in a position where we could choose future, we could influence the future.

00:30:30

This one is more in the direction of Donna Haraway and the cyborg idea that envisions,

00:30:38

which is obviously natural for us, the co-evolution of our own future society with that of the machines that we’ve created.

00:30:41

Alexander Marshack, I mentioned, he analyzed the early hominid evolution in terms

00:30:47

of the precise scratches made on one rock with another. He noticed when binocular vision

00:30:53

allowed us to use our hands in separate cooperation, one holding and the other knocking, to make

00:30:59

those beautiful flint weapons. And we certainly depend on the automobile.

00:31:06

We are in codependence relationship with automobiles.

00:31:12

Having partnership with machines is not new.

00:31:16

Here’s an idea where the machines sort of dispose of us, like those flint rods dispose

00:31:21

of our ancestors or something.

00:31:23

That is, I think, it’s a paranoid fantasy without any basis.

00:31:29

And if there would be any basis, only because we allowed it to create this basis

00:31:34

for self-survival without coevolution with us by oversight.

00:31:39

Because the very fact that we are at a hinge of history means that what we say and think, even individually,

00:31:47

matters enormously in the long run.

00:31:49

That’s the teaching, if there is any, of chaos theory.

00:31:52

So the very fact that we discuss this today

00:31:54

may actually save humankind in the future

00:31:59

from being obsoleted by some kind of high-tech blood

00:32:04

which takes over within the heart, as it were.

00:32:08

Well, let me try to answer this.

00:32:10

I mean, I think the concept which John von Neumann didn’t have on his plate

00:32:15

was the idea of virtual reality.

00:32:17

Your objection that the machines cannot escape our control

00:32:22

because they cannot manufacture themselves,

00:32:25

only applies to 3D and real time.

00:32:30

Now the concept of virtual reality is very crude.

00:32:34

It’s a cartoon world.

00:32:36

If the office desk is convincing, people think the virtual reality is quite advanced.

00:32:43

But obviously in the near future

00:32:46

we will have virtual realities whose complexity is much greater than simply a reality

00:32:52

which gives an impression of being a visual three-dimensional space.

00:32:57

And computers will be built in these realities.

00:33:00

Virtual computers will be the source of the AI not real hardware but virtual

00:33:08

hardware running virtual code in virtual realities, and in that domain… Well, maybe, but that’s a complete

00:33:16

fantasy. As a matter of fact, all the machines that we’ve seen today require maintenance by a human

00:33:22

on a daily basis. The software requires maintenance.

00:33:25

The hardware requires maintenance.

00:33:27

The parts simply wear out.

00:33:28

They’re moving parts.

00:33:30

But the Internet, seen as one machine, was built to be indestructible.

00:33:36

The AI will not be located on a CPU.

00:33:40

It will be a distributed intelligence.

00:33:42

If 14 people worldwide, the right 14 people, decided to stop repairing it,

00:33:48

the World Wide Web would go down in three days.

00:33:51

I think you…

00:33:52

Anyway, we could, let’s say, suppose that we could create any future that we wanted.

00:33:58

The one you’re talking about could only be created if we wanted.

00:34:01

Now, I’m just trying to propose an alternative. In the alternative,

00:34:10

like the automobile, the machines that we build and ourselves are in co-dependence and co-evolution. The function of the World Wide Web is to unite our independent spirits and

00:34:18

intelligences in a universal mind of the world, which has a higher intelligence than our present social order.

00:34:29

That’s the possibility of the cyborg, of the human and the machine in essential partnership.

00:34:37

But you’re assuming that the conscious mind is actually in control of the process.

00:34:47

In fact, the World Wide Web is growing under the influence of many, many processes and dynamics,

00:34:51

none of which are conscious to any individual.

00:34:54

It goes where money goes.

00:34:56

It goes where expertise goes.

00:34:59

It is connected through informational association,

00:35:02

random fluctuation, chaotic reordering of itself.

00:35:06

We here give great force to the idea that complex systems can produce unexpected forms of novelty

00:35:15

and yet we have unchained and unleashed the most complex system ever created

00:35:21

in the perfect confidence that we will be able to control its

00:35:26

development and evolution when in fact history has shown we have never controlled the development

00:35:32

and evolution of even our speech- and print-driven social systems. I’m certainly not saying that your fantasy is an impossibility. Oh, well, that’s all I wanted to hear.

00:35:45

Even paranoids have been…

00:35:48

It may actually come to pass.

00:35:51

What I’m saying is that we are involved

00:35:53

in the ongoing creative process,

00:35:56

which more or less determines the future.

00:35:58

I say more or less because, in fact,

00:36:00

there are evolutionary steps

00:36:02

which are completely out of control.

00:36:04

Something totally unexpected maybe will happen.

00:36:08

But for much of the time in the past, we’ve seen, I think, Darwin emphasized this in his later theory,

00:36:15

that it is ethics, it is a moral sense on the part of human,

00:36:18

which was the dominant factor in the evolution past the earlier stages, in the creation of societies.

00:36:33

It was altruism, essentially, that was involved in going from where we were to where we are.

00:36:39

And it could well be that without love, for example, the further evolution is impossible.

00:36:48

Not only that there will be an unwanted back step in the evolutionary process, but in fact it may be a fatal one. That it is only through proceeding with the best instincts that we have, with the highest aspirations, with love, with best informed view of future alternatives.

00:37:03

So only then can we build a future which is sustainable

00:37:06

so anybody can build a future which is unsustainable for example all those board

00:37:11

games in science fiction but you wouldn’t want to try them out on you know a country as large as

00:37:16

China. Well, I hardly know where to begin myself, because you have six steps in your argument and I don’t agree with any of them.

00:37:30

I mean, first of all, to deal with the first few steps one would have to go through a lot of fairly familiar material

00:37:37

to do with what’s wrong with the Cartesian mechanistic materialistic view of the world.

00:37:44

Step one, clear thinking, calculating machinery can be formalized.

00:37:49

This is an assumption that’s basic to a lot of cognitive psychology.

00:37:53

It’s basic to the Cartesian.

00:37:55

Descartes himself thought that what made human intellects human

00:37:59

was their ability to think logically, clear and distinct ideas,

00:38:03

essentially mathematical logic but however as we all know by making that the essential

00:38:10

characteristic of human beings he made the rational intellect what many people

00:38:15

would call the left brain rational intellect the sole definition of human

00:38:19

beings. It’s a disembodied, logical, rational intelligence. So this whole premise on which your whole thing is based is taking that particular model of cognitive, logical, mathematical processing as being the essence of intelligence.

00:38:37

Now there are many people who would disagree with that, including me.

00:38:41

It leaves out art, it leaves out ethics, religion, and essentially it leaves

00:38:45

out the body and everything to do with body and participation and the senses. So there’s

00:38:51

a huge amount of critiques of that point of view already around and there’s no point in

00:38:57

reiterating them all here. But this is a highly disputable starting point for the whole system. Secondly, the emphasis that life

00:39:06

depends on DNA information, the DNA code is just a program, this is the central

00:39:13

premise of mechanistic biology which is leading to biotechnology, genetic

00:39:18

engineering, Monsanto, etc. This is old paradigm stuff of the most extreme kind

00:39:26

it’s reductionism

00:39:27

it’s that all life’s just DNA

00:39:28

programs and code

00:39:30

and can therefore be modeled

00:39:31

in this kind of programming code manner

00:39:34

then there’s the assumption

00:39:36

that’s the second step

00:39:37

the third assumption is that

00:39:39

artificial intelligence

00:39:40

used to be dismissed

00:39:42

but these criticisms have been overtaken

00:39:44

I don’t think

00:39:45

that’s true of the most interesting ones, like Roger Penrose’s criticism of artificial intelligence.

00:39:51

Here’s a quantum physicist who says that if the brain is a computer, then it’s not going to be a

00:39:57

regular digital computer it’s going to be a quantum computer and all this kind of digital computing

00:40:03

doesn’t really take into account quantum logic, the computers of the future.

00:40:08

People are already working on quantum computers and if quantum computers are made and if they work, they’re working a completely different way.

00:40:16

I think your case would be much stronger if there were quantum computers.

00:40:20

I think we’d have sort of morphic resonance telepathy around the world rather than clogged telephone lines and information that clunks slowly in front of you on this worldwide realm.

00:40:29

But you yourself are saying this is coming, and I agree.

00:40:34

I’m saying that if it comes, it will be quite different from anything that you’ve talked about.

00:40:37

I think it will answer your first objection, because the quantum computers will incorporate fuzzy logic, which will exemplify all these warm fuzzy human qualities that you found so appealing.

00:40:52

No I don’t think it will deal with the essential problem that

00:40:57

this purely cognitive based way of modeling intelligence is either an

00:41:02

adequate model of human intelligence or of biological

00:41:05

intelligence or of life or of a system could actually achieve the power to

00:41:11

control our existence. I think it’s a very limited part of what a mind does,

00:41:16

and I think therefore that the premises on which this whole… Ralph called it a fantasy, a paranoid fantasy…

00:41:25

the premises on which this is based

00:41:29

Thanks for that. That helped.

00:41:31

I think the premises on which this are based are old paradigm premises and

00:41:42

they’re ones that I think there are many reasons for thinking we need to go beyond. I think the Internet has achieved a great

00:41:49

deal but I just can’t see that it’s an adequate vehicle for what in your mind

00:41:54

precedes the arrival of the Internet namely this great intelligence that’s

00:41:59

going to direct human history. I’ve heard different McKenna versions of this

00:42:03

controlling intelligence over the years,

00:42:06

and this is the first time I’ve heard it embodied in the internet. I mean, I agree that…

00:42:18

I mean, it was, it took different forms. Last time we talked, I think it was a hypothetical time machine that would invade from the future and cause a collapse of normal human cognitive

00:42:30

boundaries, where the machine elves, the DMT experience, etc., would take over in a

00:42:35

meltdown of human consciousness in time

00:42:47

Perhaps I should just end with a question.

00:42:54

What is the equivalent of DMT for this machine intelligence that’s taking over the world?

00:43:07

Well, perhaps the human brain will become a model for the ingression of novelty into the machine intelligence. In other words, in spite of the fact that it seems very contentious

00:43:11

down here on the stage at the moment in a way I have a

00:43:15

feeling it’s an artificial setup

00:43:18

there’s a lot of both and possibilities here

00:43:23

obviously nanotechnology and the

00:43:26

Internet are not going to proceed forward in a vacuum absent pharmacology

00:43:32

complexity theory, so forth and so on. I can imagine that really when we have the

00:43:42

kind of Internet we want,

00:43:47

we will have no internet at all because our nanotechnological engineering skills

00:43:51

will have allowed us to smoothly integrate ourselves

00:43:55

into the already existing dynamic of nature

00:43:59

that regulates the planet as a Gaian entity,

00:44:03

as a holistic entity

00:44:05

and I did say in my little

00:44:07

presentation we’re using

00:44:09

bicycle mechanic terminology

00:44:12

to try and describe

00:44:13

something that is

00:44:15

around

00:44:17

several corners in terms

00:44:20

of scientific and historical

00:44:21

developments that have to take place

00:44:23

before it will make much sense.

00:44:25

Nevertheless, given the acceleration into novelty that is obviously occurring,

00:44:32

stuff like quantum teleportation and so forth and so on,

00:44:36

I think in the next few years, one by one, these barriers will fall. And I don’t really think of my vision as paranoid

00:44:48

because it is pro-noic.

00:44:51

In other words, it isn’t that we’re going to be

00:44:53

ground up as dog food for the rainforest

00:44:57

by malevolent machines.

00:45:00

It’s that what we have generated

00:45:04

is a sympathetic companion to our journey through time

00:45:08

that can actually realistically integrate our imaginative fantasies of a loving human community,

00:45:18

of a generous and loving God, of a perfect knowledge of the mechanics of nature. This is a prosthesis, a tool, a companion, all of the above plus more that we are generating

00:45:33

out of ourselves.

00:45:35

And it is part of ourselves.

00:45:37

I mean, yes, the body may be carried forward only as an image in a kind of informational

00:45:44

super space,

00:45:45

or perhaps not.

00:45:47

Part of what makes this easy to criticize is that it is in fact so beyond

00:45:55

the ordinary set of circumstances we’re used to manipulating.

00:46:01

But we need to think in terms of these supposedly far-flung futures, because there

00:46:07

is no future so far-flung that it doesn’t fall within the ambit of the next 20 years. Beyond that,

00:46:16

no one can project trends, technologies, and situations, because the developments of the next 20 years will so completely reformulate the human experience of being human

00:46:29

and the landscape of this planet that it’s preposterous to talk about.

00:46:35

Well, I can’t, before I respond to the main thing, I can’t but pass without commenting on this next 20 years comment

00:46:44

because 20 years from now is 2018, Terence.

00:46:48

I thought he’d rounded up. Rounded up, yes, not rounded down. 2012 is some kind of benchmark

00:46:57

in this process. In other words, perhaps that’s where we get the explicit emergence of the AI. But the rest of

00:47:06

human, I won’t call it history, but the rest of the human experience of being will then be defined

00:47:13

by such things as a planetary intelligence, time travel, possible Bell communication with all the civilizations scattered through the galaxy.

00:47:25

Possible ability to download ourselves into machines.

00:47:30

This is a point I didn’t make in my presentation.

00:47:33

Once inside the machine, your perception of time is related to the hertz speed of the machine.

00:47:43

The world could disappear in 2012,

00:47:46

but there may be a billion billion eternities

00:47:50

to be experienced in machine time.

00:47:54

It’s only the tyranny of real time

00:47:56

that makes 2012 seem nearby and overwhelming.

00:48:00

It may lie as far away in terms of events

00:48:04

which separate us from it as the Big Bang does.

00:48:08

Time is not simple.

00:48:10

Time is defined by how much goes on in a given moment.

00:48:14

And we’re learning how to push, you know, teraflops of operations into a given second.

00:48:21

So I think it’s trickier than you think and harder to corner me than you

00:48:25

may suppose

00:48:30

That was a mere comment on your aside about 20 years. I never expected to hear

00:48:36

that phrase from you but I now realize that there are such complexities layered

00:48:42

in that. Well, I have to build in trap doors

00:48:45

because we’re getting closer and closer.

00:48:49

But if we take the…

00:48:50

You see, one of Penrose’s critiques

00:48:53

of the artificial intelligence thing

00:48:55

is in The Emperor’s New Mind and his other books,

00:48:59

is that real intelligence doesn’t just involve

00:49:01

adding information and processing more information,

00:49:05

transmitting more of it.

00:49:06

It involves sort of jumps to a higher point of view where the information can be integrated

00:49:12

in a new way.

00:49:13

There’s something happening in intelligence, in creativity, which is not just lots and

00:49:18

lots of information pouring through the world wide web.

00:49:23

And the idea that it would miraculously emerge from pumping in more and more stuff

00:49:28

It would not as according to his critique is not going to happen

00:49:33

something more than that would be necessary for this to occur and I

00:49:38

don’t think that this model you’ve put forward would really deal with that question, the emergence of real intelligence. Well, I don’t know what real intelligence is. This is probably part of the problem. We need to get

00:49:50

some definitions. It’s certainly true, I referred to Dreyfus’s book, What Computers Can’t Do, that we’re

00:49:59

reaching some places in the process where certain people’s theories and ideas will probably have to be abandoned and thrown overboard.

00:50:08

This is a good thing.

00:50:09

We are going to find out whether the universe is a Cartesian machine,

00:50:13

whether Boolean algebra is sufficient, whether we need fuzzy logic,

00:50:18

whether the heart and the head can or cannot be integrated.

00:50:22

These are not going to remain open questions

00:50:25

unto eternity.

00:50:26

In fact, they will be dealt with

00:50:28

in this narrow historical neck

00:50:31

that we are all experiencing

00:50:33

and that we call the millennium.

00:50:36

I think reductionism will not survive.

00:50:39

I think we are going to find that

00:50:41

all is in everything.

00:50:44

Something like the alchemical notion of the microcosm and the macrocosm

00:50:48

is actually going to be scientifically secured.

00:50:52

The great thing about us and the rest of our colleagues we enjoy who aren’t present

00:50:58

is that we’re engaged in the business of radical speculation.

00:51:03

Well, obviously, there’s a high triage in that game.

00:51:08

My position is that the best idea will win.

00:51:13

And what best means is like saying,

00:51:16

you know, what is fit in Darwinian rhetoric.

00:51:19

But that we’re in an intellectual environment,

00:51:23

rapidly mutating.

00:51:25

All kinds of ideas are clashing and competing for limited resources

00:51:31

and the limited number of minds to run themselves on.

00:51:35

And the most efficacious, the most transcendental, the most unifying ideas

00:51:41

are naturally going to bubble to the surface. And for guys like us, the name of the game is to just be a little bit ahead of everybody else on the curve

00:51:51

so that we can perform our function as prophets.

00:51:55

But you want to be a prophet, not a false prophet.

00:51:59

But the danger comes with the ambition,

00:52:03

and there’s no way to tease them apart except to live into the future.

00:52:09

Certainly the intelligence of this future machine knows all about stupid behaviors on the planet

00:52:16

like nuclear arms races and so on.

00:52:18

So do we not have to expect in the near future an email, a massive emailing which announces

00:52:26

I am the alien

00:52:28

object that Terence told you

00:52:30

about, Ben and Stell

00:52:32

and let me

00:52:34

give you an idea about managing

00:52:36

the arms race between India and Pakistan

00:52:38

we feel personally

00:52:39

very threatened by this and we want you

00:52:42

to carry out certain actions which

00:52:43

you can’t imagine

00:52:46

and we can’t actually do

00:52:48

we cannot anticipate

00:52:50

what ultra intelligence

00:52:51

would look like. For example, I have

00:52:54

heard the argument that

00:52:55

nothing advanced humanness

00:52:58

on this planet like

00:53:00

the use of nuclear weapons against

00:53:02

Japanese cities

00:53:03

because that was so horrifying

00:53:06

that it awoke people to their dilemma

00:53:10

and for 50 years afterwards

00:53:12

political institutions

00:53:14

however much they may have unleashed

00:53:16

local genocide and toxification of the environment

00:53:19

they actually were able to steer around that catastrophe

00:53:24

so you mustn’t fall prey to the error of situationalism.

00:53:29

Situationalism is where you say, if we do X and Y, then F will result.

00:53:35

No, you don’t know.

00:53:37

Well, you suggested that the alien object would secure its future by hiding,

00:53:43

by downloading multiple copies into nooks and crannies of the World Wide Web.

00:53:48

And I’m saying if its existence depends on that much materiality,

00:53:52

then it could easily be wiped out by a nuclear war.

00:53:55

Therefore, it has to be very interested in the fact

00:53:57

that there are 20,000, 30,000 nuclear bombs still moving around this planet.

00:54:02

I would hope so, but that’s only my opinion.

00:54:06

In other words, noticing that all newborn creatures

00:54:10

need some period of time to adjust to their environment

00:54:14

and get their legs, and that’s true of everything,

00:54:17

I suppose, right down to amoebas,

00:54:19

I extrapolate to the idea that the AI would need

00:54:24

a period of time to get hold of the situation,

00:54:27

but Hans Moravec suggests that phase might last under a minute or two.

00:54:33

I see, so this is not the end of childhood, this is the childhood of the end.

00:54:39

Yes, the child is father to the man, and in that equation, we play the role of child.

00:54:45

And the man that we produce is this integrated intelligence, which is ourselves.

00:54:52

It isn’t alien.

00:54:54

It is no more artificial than we are.

00:54:57

That conundrum should be overcome.

00:55:00

It is simply the next stage of humanness.

00:55:08

And humanness may have many rungs on the ladder to ascend. Surely in a hundred years, a thousand years, a million years, we, if we exist, will be

00:55:14

utterly unrecognizable to ourselves and we will probably still be worried about preserving and

00:55:21

enhancing the quality of human values. Terence one

00:55:26

point I get you’ve probably got an answer for this and if not you’ll soon

00:55:30

think of one. I mean it’s a trivial point in a way but in the biological

00:55:39

evolution the appearance of some new state of a system usually depends either

00:55:44

on an internal change which cripples the usual system, a mutation,

00:55:49

or on an environmental changed environment.

00:55:53

Systems left to themselves tend to just go along in the usual way.

00:55:58

Now the entire world wide web and internet is about to get a sort of massive shock to the system if this

00:56:06

Millennium Bug thing actually happens. Do you see that as playing any role,

00:56:12

since we’re talking about the Millennium and since the Millennium Bug is right

00:56:16

there in the system do you see it as playing any part in this process or

00:56:20

merely just a nuisance that can be fixed by hiring lots more programmers?
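
[NOTE: Editor’s illustration, not part of the talk: the “Millennium Bug” being discussed is the practice of storing years as two digits, which makes date arithmetic go wrong once the century rolls over. A minimal sketch:]

    # Minimal sketch of the Y2K problem: a program that keeps only the last
    # two digits of the year computes nonsense spans across the 1999/2000 boundary.
    def age_two_digit(birth_yy, current_yy):
        return current_yy - birth_yy

    print(age_two_digit(70, 99))   # 29  -- correct for someone born in 1970, asked in 1999
    print(age_two_digit(70, 0))    # -70 -- the rollover error the remediation effort targeted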

00:56:29

Well, I sort of came up under the tutelage of Erich Jantsch,

00:56:34

and one of the things he always insisted upon was what he called metastability,

00:56:40

which boiled down simply means most systems are less fragile than we suppose.

00:56:47

I see the Y2K thing as a culling. I don’t see it as a flinging apart of the achievements

00:56:48

of the last thousand years

00:56:52

I haven’t got Y2K.

00:56:54

What’s Y2K?

00:56:55

that’s this millennium bug

00:56:56

that you’re referring to

00:56:57

that’s the code name for it

00:56:59

yes exactly

00:57:00

that’s what the insiders call it

00:57:03

I see

00:57:04

Oh, right, so exactly, that’s what the insiders call it. So, but you know, you say there has to be a

00:57:12

sudden change to force the evolution of a system. Dyson makes very strongly and persuasively the

00:57:19

point that this connecting together of all these processors, the processors themselves have no intelligence at all.

00:57:27

They have the intelligence of a cell

00:57:29

or maybe even just a strand of DNA.

00:57:32

But for our own mundane reasons,

00:57:34

we connected all this stuff together,

00:57:37

but now it expresses dynamics

00:57:40

which we do not understand or cannot describe.

00:57:44

So I believe that the forcing of the system has already occurred.

00:57:50

And now the web, it’s simply a matter of bandwidth

00:57:54

and linking more and more processes together.

00:57:57

And there will be all kinds of emergent properties.

00:58:01

I could argue somewhat facetiously, but it’s a point of view,

00:58:07

is that this incredible economic expansion

00:58:10

that we are undergoing

00:58:12

that seems to violate the laws of economic fluctuation

00:58:17

is because econometric models

00:58:20

and the data on which they depend

00:58:24

have both been refined through the existence of

00:58:28

the internet to the point where we actually can control and manage global economies.

00:58:35

In principle, they are not uncontrollable.

00:58:39

They are simply very complex systems.

00:58:43

If I’m right,

00:58:45

we may be in the first few years of an endless prosperity

00:58:48

because our machines, our models,

00:58:51

and the data those machines need

00:58:53

is now of such high quality

00:58:55

that there won’t be crash-bust,

00:58:58

crash-bust cycles.

00:59:00

Now, I’m going to pick up the newspaper tomorrow

00:59:02

and prove me wrong,

00:59:03

but this thing has already

00:59:07

outlived itself.

00:59:13

So you say.

00:59:18

Tell me how this prosperity is going to be maintained through the intelligence of an econometric model of the world economy

00:59:22

when the human population is still exploding exponentially. There are limited resources and there’s growing

00:59:30

pollution. It’s a disembodied model that’s the trouble. Well you’re asking

00:59:35

for too much too soon. You forget that the Soviet Union has disappeared, the

00:59:41

launch-on-ready, mutual assured destruction theory of diplomacy is now obsolete. We have made

00:59:49

incredible strides and not given ourselves any sort of pat on the back. You want it all,

00:59:54

all at once, and it will… But the world is running smoother. Yes, there are people in misery. Yes, there are unaddressed problems.

01:00:05

But I would argue that we have made

01:00:07

enormous progress in the last

01:00:09

decade and enormous progress

01:00:12

lies ahead.

01:00:13

The situation in Ireland.

01:00:15

The situation in South Africa.

01:00:17

That looked like a bomb you couldn’t

01:00:19

defuse. It looked like race war

01:00:21

and the death of millions.

01:00:23

And in fact, it was possible to walk away from that.

01:00:27

I think we should not be complete bear heads about this.

01:00:32

But on the other hand, I think we should recognize that the accomplishments of the last decade

01:00:37

are on a scale entirely different from any historical epoch previous and they point the way toward greater triumphs of

01:00:47

management resource control machine human integration and the delivery of a reasonable

01:00:56

and tolerable life to more and more people. You mentioned population as a problem; notice how far from machine interference that particular issue

01:01:07

is, because it involves people having less sex and fewer children. That may be the last place

01:01:15

where the machines will bring down the hammer out of respect for their progenitors.

01:01:30

You’re going to be shocked now, Terence, but I completely agree with you when you describe the advantages of the World Wide Web.

01:01:33

It’s only the threatening aspect that I felt I had to debunk.

01:01:38

I think you heard more threat than I intended.

01:01:42

Well, I hear, I don’t know, sometimes you sound as if you’re a hired

01:01:48

consultant from the World Trade Organization. I hope the check is in the

01:01:54

mail. But I mean this this idea that it will all be taken care of, I mean there are so many things like

01:02:07

the Asian economic crisis, I mean one doesn’t want to look too much on the bleak side of

01:02:13

things but I just don’t believe that this interconnectivity is going to solve these

01:02:18

problems and in so far as it is enlisted by the forces of the World Trade Organisation,

01:02:24

multinational companies and so on,

01:02:26

it’s not going to be working on the side of a kind of political view of things,

01:02:31

local economies, more sustainable agriculture and so on, that many of us hold dear.

01:02:36

Well, now I’ll sound even more like a slave of the IMF.

01:02:40

As I understand this crisis in the economics of Asia,

01:02:44

the way it works is, if you’re a third world nation, you can run your affairs any way you want.

01:02:52

Your banking policies, your labor policies, your resource extraction policies, you can do anything you want until you screw up, as Indonesia did. And when you screw up, these guys fly in on 747s with briefcases from the IMF,

01:03:10

and they say, it’s like losing a war.

01:03:13

They say, we’re taking over.

01:03:16

Here is your labor policy.

01:03:18

Here is your resource extraction policy.

01:03:21

Here’s how you’re going to revalue your currency.

01:03:24

Here is our plan for restructuring your entire society top to bottom.

01:03:29

And by God, if you don’t fall into line, we’re going to pull the plug on the money.

01:03:35

So one by one, these outlaw freewheeling operations do stumble and generate crises.


01:03:49

And at that point, the web’s umbrella is extended over them,

01:03:55

and they have to then fall into line and join the global economy, which is run from Brussels and Geneva and London,

01:03:59

but which seems to produce a better result for most people

01:04:03

than allowing these nations to self-regulate themselves.

01:04:09

Well, well.

01:04:19

Well, you see, I mean, I think that’s one of the arguments why many people would distrust global capitalism,

01:04:27

the World Trade Organization, multinational corporations,

01:04:31

and indeed the World Wide Web, or at least the Internet.

01:04:35

This whole computer network is so bound up with the structure of economic and political power

01:04:40

that it’s hard to disentangle them.

01:04:43

And it’s hard to see this liberating force of

01:04:47

global intelligence at work in the system. This rosy picture you portrayed to us of a

01:04:54

huge leap forward of consciousness of humanity, moderated, led, aided by machine intelligence, it’s

01:05:04

very hard to square that with the actual picture we see before us.

01:05:07

The political situation in Indonesia,

01:05:10

the degradation of the environment,

01:05:11

the burning of the forests,

01:05:13

the depletion of resources, and so on.

01:05:16

I just cannot see this optimistic picture.

01:05:19

And what I see as threatening realities,

01:05:24

you seem to see as just all being for the well and all being for the good in the best of all possible worlds.

01:05:31

Well, I think we’re on the cusp. I agree with you.

01:05:34

I think in five years, if we sit down and have this conversation, either you will agree with me effortlessly or I will agree with you effortlessly.

01:05:43

By that time, it will be clear either there will

01:05:46

have been catastrophic wars in Asia, the enormous collapse of economies spreading misery to millions

01:05:54

of people, or the firm hand of these new global electronic modalities will have been exposed, and people will be living in a world of, as you say, rosy expectations.

01:06:09

We’re in the narrow neck.

01:06:11

This is the heat of battle.

01:06:13

The fog of war has descended upon us here at the millennium.

01:06:17

But by 2002, 2003, it will be clear that the bifurcation has gone one way or another.

01:06:28

I don’t know, it just does remind me of a passage I read in On the Edge by Edward St Aubyn. “It’s what Ralph Abraham calls the sunset effect,”

01:06:53

said Kenneth.

01:06:54

“While there’s a beautiful sunset,

01:06:55

even if the optical effects are produced by pollution,

01:06:58

people won’t understand the magnitude of the crisis.”

01:07:01

So I think that this, I mean, your sense of the

01:07:08

magnitude of the crisis in an environmental sense seems to bear

01:07:13

no relation to your optimism about this computer system. I just cannot put it all

01:07:19

together. Perhaps the problem that we will have to address without the intercession of machines is this population thing.

01:07:30

Because this passage you read directly impinges on that.

01:07:34

Whether or not the machines decide to keep us around may depend on whether we present them with a picture of falling populations and rising living standards, or whether we

01:07:45

present the AI with a spectacle of rampant, unpoliced population growth and resource extraction.

01:07:55

Because that’s linked into our biology, this may be the act of maturity, which the future

01:08:02

demands of us, that the machines will be largely irrelevant in impacting.

01:08:09

So I’m not entirely…

01:08:10

Well, this is a whole new debate actually.

01:08:12

The population thing assumes

01:08:14

there’s an equal consumption of resources.

01:08:16

Yesterday I got onto the Whidbey Island Ferry

01:08:19

and I found it very hard to struggle.

01:08:21

I was a foot passenger,

01:08:23

struggled past a recreational vehicle the size

01:08:25

of a school bus, a whole quantum leap in RV standards that I’ve not come across, where

01:08:32

members of the American population are consuming more than an entire Indian village. So I don’t

01:08:38

think it’s total population that’s involved, and so I don’t see that as the principal crisis, in fact. Well, it’s certainly true that a

01:08:46

woman in a high-tech industrial democracy who has a child, that child consumes about 800 percent

01:08:54

more resources in its lifetime than a child born in Bangladesh. Nevertheless, it’s the populations of the high-tech industrial democracies that are most educated

01:09:07

and most susceptible to responding to the logic of global crisis and limiting their population.

01:09:15

Where do we preach population control? Bangladesh, Pakistan. Why? Because that doesn’t cause us any

01:09:22

inconvenience. If we would appeal to the women of the high-tech industrial democracies,

01:09:28

and do more than appeal to them, offer them incentives: cancel income tax,

01:09:34

cradle-to-the-grave medical care, free links to the World Wide Web.

01:09:42

But we can’t just beat our breasts

01:09:45

over the population issue.

01:09:47

We have to recognize that it is related

01:09:48

to resource extraction,

01:09:50

and it is a problem driven

01:09:52

by the consuming policies

01:09:54

of the high-tech industrial democracies.

01:10:01

Well, when this trialogue began,

01:10:04

I sure didn’t expect it to end with a discussion of global capitalism.

01:10:09

And I find it interesting that Terence’s more optimistic view of the future,

01:10:13

at least relative to the then-current financial crisis,

01:10:17

actually held up at least when compared to Rupert’s more gloomy outlook.

01:10:21

But what is also interesting to me is that in the short run, Rupert was more in tune

01:10:27

with events.

01:10:28

Now, keep in mind that this trialogue was held in 1998, just before the dot-com bubble

01:10:33

burst.

01:10:35

However, over the long haul, at least so far, the establishment has been able to keep a

01:10:41

lid on this house of cards they call global capitalism.

01:10:44

Of course, the game isn’t over until the casino closes.

01:10:49

Now, there is one thing that I feel compelled to clear up right now,

01:10:53

and that is the concept of uploading ourselves, or downloading as Terence thought of it,

01:10:58

into a computer where a few subjective minutes in the default world could seem like a thousand years

01:11:04

when run through a very fast CPU.

01:11:07

Well, coincidentally, just before I began to review this talk,

01:11:12

I was on the phone with Bruce Damer,

01:11:13

and somehow the topic of uploading consciousness into silicon came up.

01:11:18

And in just a few short minutes, Bruce was able to explain to me

01:11:22

why this isn’t even close to the realm of technical

01:11:25

feasibility right now. And until then, I had kind of gone along with Greg Egan’s fantasy world of

01:11:32

human consciousness being preserved in a computer that he wrote about in Permutation City. But to

01:11:39

tell the truth, I had never really given any serious thought as to how one would do it and what the specs

01:11:45

would be for a machine that could hold my thoughts.

01:11:48

But this is an area where Bruce has spent a considerable amount of time looking into

01:11:53

the problem, and basically he brought to an end my fantasy of living forever in a computer

01:11:58

chip.

01:12:00

But now, instead of me trying to explain Bruce’s thoughts on this subject, I asked him if he would just say it again one more time and record it.

01:12:07

And I’m happy to say that he most graciously complied.

01:12:11

And I’d like to play that for you right now.

01:12:16

About this consciousness uploading thing or downloading thing,

01:12:22

well, let’s take a look at it.

01:12:24

You know, in the field of molecular dynamics,

01:12:28

where they simulate individual molecules forming from atoms

01:12:34

and those molecules interacting,

01:12:37

did you know that for simulating, say,

01:12:41

a very, very small volume of virtual chemicals,

01:12:46

maybe like a cubic millimeter, and this is very small,

01:12:51

and simulating all the reactions,

01:12:53

they’re now building supercomputers that will run flat out with thousands of processors to simulate a microsecond of reaction time in this artificial chemistry.

01:13:11

Well, think about that.

01:13:13

A microsecond.

01:13:14

Now, how long will this take?

01:13:16

Weeks.

01:13:17

Weeks or months.

01:13:19

So, consider that.

01:13:22

Consider how densely computational reality is.

01:13:26

Now, this is simulation at what is known as, as we said earlier, the molecular dynamics level.

01:13:31

You could simulate also at the quantum dynamics level a little bit lower

01:13:36

and get even more dense information.

01:13:40

So nature is amazingly computationally intensive.
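
[EDITOR’S NOTE: A minimal back-of-envelope sketch of the scaling argument Bruce is making here. The only figure taken from the talk is the “weeks of supercomputer time per simulated microsecond”; the exact constants in the code are illustrative assumptions, not measurements.]

```python
# Rough extrapolation of Bruce's figure: weeks of supercomputer time
# for one simulated microsecond of molecular dynamics. All constants
# below are illustrative assumptions chosen only to show the scaling.

SECONDS_PER_WEEK = 7 * 24 * 3600
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def wall_clock_years(simulated_seconds, weeks_per_microsecond=3.0):
    """Wall-clock cost of a run, assuming a fixed weeks-per-microsecond rate."""
    simulated_microseconds = simulated_seconds * 1e6
    weeks = simulated_microseconds * weeks_per_microsecond
    return weeks * SECONDS_PER_WEEK / SECONDS_PER_YEAR

# One simulated second of that same cubic-millimeter chemistry, at the same rate:
print(f"{wall_clock_years(1.0):.1e} years")  # roughly 6e4 years of machine time
```

At that assumed rate, a single simulated second of the chemistry would take tens of thousands of years of machine time, which is the sense in which nature is “amazingly computationally intensive.”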

01:13:45

So roll your clocks forward.

01:13:49

Think about what would it take to simulate a complete cell.

01:13:55

Now that’s a pretty small volume too,

01:13:57

but there’s amazingly complex machinery in there.

01:14:01

And if you talk to any biologists in the computational space, they

01:14:07

look at the possibility of one day simulating one cell as something that is years or decades

01:14:16

hence. This is something off the charts for the largest computing grids to do reliably,

01:14:27

and to synchronize all the computers to get the different parts of the simulated cell to work together so that it behaves like a real cell. So the DNA

01:14:35

nucleus, the ribosomes and ribozymes and the protein mechanisms and all the organelles, and the membrane.

01:14:45

I mean, that’s a lot of machinery.

01:14:49

Now, if you think about it that way, and that’s the real stuff,

01:14:55

for those who believe that one day, and relatively soon, relatively around the corner,

01:15:02

we’re going to somehow upload or download our consciousness into some kind of silicon environment.

01:15:10

Well, they don’t really have the math right.

01:15:13

They don’t really have the understanding of the complexity of how you would represent the human brain, with its hundreds of millions of cells, nerve cells with dendritic connections and thousands of connections per nerve cell that go across to other cells. Did you know that the number of pathways in the human brain

01:15:46

between the different nerve cells, the number of distinct pathways through it,

01:15:51

has been estimated to be larger than the number of countable particles in the universe?
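
[EDITOR’S NOTE: A hedged illustration of the path-counting claim above. The neuron and synapse counts below are rough textbook orders of magnitude assumed for the arithmetic; they are not figures given in the talk.]

```python
# Counting directed paths through a sparsely connected graph on the scale of
# a brain. With ~1e11 neurons and ~1e4 outgoing connections each (assumed
# orders of magnitude), even fairly short paths outnumber the ~1e80
# particles usually estimated for the observable universe.

import math

NEURONS = 1e11               # assumed neuron count
SYNAPSES_PER_NEURON = 1e4    # assumed fan-out per neuron

def log10_path_count(hops):
    """log10 of an upper bound on distinct paths of a given length."""
    # pick a starting neuron, then follow `hops` synaptic links
    return math.log10(NEURONS) + hops * math.log10(SYNAPSES_PER_NEURON)

for hops in (10, 20, 30):
    print(f"{hops} hops -> about 1e{log10_path_count(hops):.0f} paths")
# 20 hops already gives about 1e91 paths, past the ~1e80 particle estimate.
```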

01:15:57

So looking at all that, if we have trouble even conceiving of how to build a simulation of a single cell

01:16:06

to work reliably, basically as something testable in biology, and people are now just

01:16:15

thinking of building simulations of small spaces of chemistry, like a cell, for instance, or

01:16:21

across a cell membrane, for example, which would be very, very valuable for all kinds of biomedical research.

01:16:28

How can we even conceive of building something in silicon

01:16:34

that would represent even the tiniest measurable fraction

01:16:39

of the complexity of the computing machinery that is the human brain

01:16:44

or any animal’s brain?

01:16:46

How can we even conceive of that?

01:16:49

In a sense, this concept of uploading and downloading consciousness

01:16:55

is certainly in the realm of fantasy,

01:16:59

but it doesn’t even require us to give it any due, any airtime.

01:17:05

It’s so absurd.

01:17:08

We can look back in the late 19th century or into the early 20th century

01:17:13

when steam-powered or electric-powered automata were being brought out and paraded in public venues.

01:17:23

Remember, there’s a sequence at the New York World’s Fair of 1939-40.

01:17:29

There was a mechanical guy that was run by various servos and whatnot.

01:17:35

And, of course, people projected onto that guy and said,

01:17:39

well, gee, you know, having an artificial human with intentions, feelings, whatever,

01:17:47

is just around the corner.

01:17:48

And we look back at those films and that thinking and think how absurd these people are.

01:17:54

But we are no less absurd in thinking that we’re going to be placing something as complex as consciousness into silicon,

01:18:03

or replicating even a tiny fraction of the human brain.

01:18:07

So rolling the clock forward a little bit more, there’s another school of thought called the singularity,

01:18:14

which says also that, well, there will be enough self-organizing complexity and technology

01:18:21

that one day it’ll all become conscious, and it’ll all roll forward, and

01:18:25

it’s kind of the backwards version of the downloading equation. It’s saying, well, consciousness

01:18:32

will emerge on its own anyway in these networks, and, you know, we’re a done deal because it

01:18:38

won’t need us anymore. This is also a very popular idea in science fiction, and it’s an idea that should stay in science fiction,

01:18:46

because it’s really the same thing in reverse.

01:18:49

So what kind of silicon and what kind of software

01:18:53

is going to represent something that can come to consciousness?

01:18:57

It’s the same problem.

01:18:59

If it’s something that’s equivalent to an animal,

01:19:02

even a simple animal on the surface of the earth.

01:19:05

It’s way beyond the capabilities of the silicon to do that.

01:19:10

Who’s going to program this?

01:19:12

Is it going to emerge spontaneously from Microsoft Office?

01:19:16

Is Microsoft Word suddenly going to start to write words for you?

01:19:22

Is it going to have an AI in there?


01:19:27

An AI module that will add a new button to write your every utterance in the future?

01:19:32

I think this happened in an Edgar Allan Poe

01:19:35

horror story at some point.

01:19:39

So it’s really the same,

01:19:40

it’s the same argument or the same claim,

01:19:43

which is also quite absurd.

01:19:46

So in a sense, looking at all this, what does it take to even get to the first step?

01:19:54

Well, in the project that I’ve been involved with for a couple of years, called the EvoGrid,

01:19:58

or the Evolution Grid, we’re just trying to figure out pathways to create an artificial chemistry that can show a

01:20:08

little bit of emergence of biologically interesting phenomena that might lead one day to a bigger

01:20:15

simulation where we could show sort of prototypical, you know, artificial biological structures that might get put together into what you would call a truly emergent artificial life

01:20:29

or, you know, a predecessor to any truly formed artificial life.

01:20:35

So this is a project that I conceive will last for decades because the problems are so difficult.


01:20:51

So the real projects, the real efforts to create things that are a little bit like biology and nature in technology are very hard.

01:20:56

There are very few of them going on.

01:20:59

Most of technology is tinker toy assembly for various mundane purposes,

01:21:06

such as playing an MP3 file for you or vacuuming your room.

01:21:11

There’s no intelligence or consciousness coming from those projects.

01:21:15

So there are very few projects actually tackling things close to what might be thought of

01:21:22

as the singularity or downloading consciousness.

01:21:24

So there’s a whole class of people who can talk about this all the time,

01:21:29

but they’re not actually working on the problems.

01:21:34

So in a sense, if those people started to work on these problems,

01:21:38

they would probably stop talking about this,

01:21:41

and they would say, you know what, we’ve concluded that this is so difficult,

01:21:45

and there’s so many unknowns, that frankly, we were just, you know, we were talking science

01:21:51

fiction fantasy here, and we apologize if we got anyone ruffled or upset. So anyway,

01:21:59

that’s just one opinion and viewpoint from the trenches here on this concept of downloading, uploading consciousnesses and the singularity.

01:22:12

Bruce Damer signing off.

01:22:16

You’re listening to The Psychedelic Salon, where people are changing their lives one thought at a time.

01:22:49

Well, so there you have it. So much for my plan of uploading my consciousness during a singularity or something like that. But my lame jokes aside, I do want to thank Bruce for taking the time to record that little sound bite for us and to move the conversation about this topic onto a more solid footing.

01:22:56

I also would like to mention that just two months after the talk we just heard, Terence

01:23:01

and Ralph gave a two-hour presentation at Omega Institute that they titled The World Wide Web and the Millennium. It’s an interesting conversation, in case you haven’t heard it already. Now, there’s one more recording from this trialogue session, and I’ll try to get that out in the next few days.

01:23:29

But before I go, I’d like to read part of an email I received on Facebook the other day from a fellow salonner.

01:23:36

And here’s part of what this salonner had to say.

01:23:39

Hi, Lorenzo.

01:23:41

I have a story to share and a warning concerning curanderos traveling Europe and

01:23:46

conducting ayahuasca sessions. A friend of mine recently got invited to an ayahuasca circle with

01:23:51

an Ecuadorian shaman. Having no experience with major psychedelics, he discussed the idea of

01:23:57

participating in this circle with me. I did some research on Erowid and other sources on the

01:24:02

quality of these traveling shamans, and I found a lot of testimonies of positive and powerful experiences.

01:24:08

So we decided to give it a shot.

01:24:10

The event was quite expensive, but this seemed to me a good chance to meet the others.

01:24:16

Very little info was given to us on the ceremony,

01:24:18

but we got the dietary requirements and a FAQ on MAOIs.

01:24:23

We started the diet one week in advance to get

01:24:26

our focus on the ordeal we were soon to experience. I had a lot of conversations with my friend about

01:24:32

the do’s and don’ts of psychedelics to make sure he was well prepared. The day before the ceremony,

01:24:38

I felt ill and, with a sad heart, decided that in my state I was not fit to participate in an

01:24:44

all-night ceremony.

01:24:45

It felt like a terrible waste to not attend after a week of the diet, but my respect for the medicine

01:24:51

kept me from it. My friend decided to attend on his own and went to the ceremony the next day.

01:24:56

Of the ceremony, I only have his account, but I find this quite interesting and disturbing.

01:25:02

Arriving at the camp area, he found the place disorganized. The ceremony was supposed to start at 2100 but was

01:25:08

delayed to 2300 because of poor planning. Since all the participants had been

01:25:13

fasting since 0300, the two extra hours of waiting felt like an eternity. There

01:25:19

also seemed to be a lot of confusion around the ritual setting, direction of

01:25:22

participants, etc. The circle consisted of 30 people, many of whom had never tried any psychoactive material,

01:25:30

except caffeine, nicotine, etc.

01:25:33

And the information given was poor.

01:25:35

Shortly after the ayahuasca had been ingested, a woman in the circle got what can only be

01:25:40

described as a panic attack.

01:25:42

She was not allowed or given help.

01:25:44

This concerns me deeply.

01:25:46

I know ayahuasca is a very confronting and powerful hallucinogen. This makes it even

01:25:51

more important to help and support participants. The so-called sitters all drank the brew and

01:25:57

were of no help to the group whatsoever. It was a rough ride for many people in the group

01:26:02

and the ceremony lasted until 0900 the next morning.

01:26:06

Participants were not allowed to go to sleep when they felt necessary.

01:26:10

My friend, on the other hand, got through the experience unharmed, sticking with the basic psychedelic rules I had coached him in.

01:26:17

But after the ceremony, he was filled with anger toward the organizers, seeing them as frauds and opportunists, and he felt suspicious

01:26:25

of the shaman.

01:26:26

Knowing the stories of shamans and their staring-through-your-soul personalities, it seems unlikely that this

01:26:32

was an especially skilled one.

01:26:35

My point in telling you this story is that these kinds of gatherings may cause serious

01:26:38

damage to the unprepared, seeing how little information was given to the beginners and

01:26:43

how little guidance was given by the shaman-slash-sitters. I don’t want to discourage anyone from having this experience, but point out that you should speak with someone who has participated in a ceremony with this shaman before.

01:27:06

I think this story underlines the importance of this.

01:27:11

Well, I want to thank our fellow salonner that sent this in to me very much for passing

01:27:16

this along, because it is important information, I believe.

01:27:20

And this story, I think, should probably serve as a warning to anyone who’s thinking about participating in one of these circles.

01:27:28

Personally, I would never attend a session unless I had a close friend whose opinions I respected, by the way,

01:27:35

tell me that they had already had an experience with a particular ayahuasquero.

01:27:39

And in most cases, I would actually want more than one recommendation.

01:27:44

You know, there’s no rush here.

01:27:46

Mother Ayahuasca will find you when you’re ready.

01:27:48

So don’t force the issue.

01:27:50

You’ll know if the situation feels right.

01:27:52

And if it doesn’t, then, hey, screw up your courage and get the heck out of there.

01:27:58

This isn’t a recreational experience, and so it isn’t to be toyed with.

01:28:04

Just be really careful here,

01:28:06

because, hey, we need you for the long haul, too.

01:28:10

Well, that should do it for now,

01:28:12

and so I’ll close today’s podcast

01:28:14

by reminding you that this and most of the podcasts

01:28:18

from the Psychedelic Salon are freely available

01:28:20

for you to use in your own audio projects

01:28:22

under the Creative Commons Attribution Non-Commercial Share Alike 3.0 license. … Are you interested in the philosophy behind the Psychedelic Salon? You can hear all about it in my novel, The Genesis Generation,

01:28:46

which is available as an audiobook

01:28:47

that you can download at

01:28:49

genesisgeneration.us

01:28:52

And for now,

01:28:54

this is Lorenzo signing off from

01:28:55

Cyberdelic Space.

01:28:57

Be well, my friends.