Program Notes

Guest speaker: Earth and Fire Erowid

Earth & Fire Erowid return to the Psychedelic Salon with a talk they gave at the 2002 Mind States conference that was held in Jamaica. In this presentation, Fire and Earth begin what has become an ongoing discussion among psychonauts regarding ways to judge the validity of information in an age when information overload has become an everyday experience for many people.

Previous Episode

040 - Psychedelogy_ A Novel Paradigm of Self

Next Episode

042 - Using Psychedelics for Rational Work

Similar Episodes

Transcript

00:00:00

3D Transforming Musical Linguistic Objects

00:00:10

Greetings from cyberdelic space, this is Lorenzo and I’m your host here in the psychedelic salon.

00:00:22

So, how have you all been since we last got together?

00:00:27

Anything interesting happening in your part of the world?

00:00:30

Here in the Psychedelic Salon, we’ve been doing a little spring cleaning.

00:00:35

You know, I had to get it done before the 21st of this month, you know,

00:00:39

otherwise it would be summer cleaning.

00:00:41

I wish I wasn’t like this, but sometimes it seems like I never do today

00:00:46

what I can put off until tomorrow. Now that it’s tomorrow, I seem to have more things

00:00:52

to do here than there are hours in the day. Of course, it’s all about priorities, and

00:00:57

my first priority is to do what I can to enjoy life a little bit, so that means some things like spring cleaning just have to wait for a while.

00:01:07

Anyway, while I was putting a few things away,

00:01:10

I came across some more talks from the Mind States conference

00:01:14

that was held in Jamaica in 2002.

00:01:17

A while back, I podcasted a few talks from that conference,

00:01:20

and one that’s still getting a lot of downloads was our podcast number

00:01:25

26 by Earth and Fire Erowid, which they titled Drug Geeks.

00:01:30

So I thought that today it might be interesting to listen to another one of the talks by Earth

00:01:35

and Fire at that conference, and this one is titled A Proposal for Grassroots Peer Reviews

00:01:43

of Important Knowledge.

00:01:44

A Proposal for Grassroots Peer Reviews of Important Knowledge.

00:01:52

Actually, the title Earth and Fire originally gave to this talk was simply Grassroots Peer Reviews,

00:02:00

but I added a few words to put it in a little better context now that it’ll be heard outside of the framework of the Mind States Conference where the shorter title was all they needed.

00:02:10

So Earth and Fire, if you’re listening out there, I hope you don’t mind me making that little editorial change.

00:02:18

Those of you who have heard some of the other talks from that conference will probably remember that these recordings were not made with professional equipment.

00:02:26

In fact, if it wasn’t for the foresight of our friend Kevin Whitesides, the talks at this conference would have been lost forever.

00:02:30

Kevin, fortunately, recorded these talks for his own use and then was good enough to pass them along to us.

00:02:34

So the fact that there’s some background noise and that the room had a big echo

00:02:38

wasn’t really important at the time they were making the recordings,

00:02:42

and I hope the sound quality doesn’t distract you from the ideas that Earth and Fire are proposing in this talk. In my humble opinion, there simply is no better place in the world

00:03:06

to find reliable, up-to-date information about psychoactive substances than at Erowid.

00:03:12

And a lot of other people think so, too,

00:03:14

because on average, over 50,000 people come to Erowid’s site every day.

00:03:20

Just think about that number for a minute,

00:03:22

particularly if you’re sitting out there thinking you must be one of the only people left who are thinking about these things.

00:03:29

Our numbers are huge, my friends, and in case you haven’t noticed, the psychedelic community isn’t just some marginalized, fringe group of whacked-out drug users.

00:03:40

We are, in fact, I believe, the leading edge of the wave of consciousness that is actually creating a sustainable human culture.

00:03:49

In fact, I also think we are the best hope for our species to make it through the difficulties ahead that we’ve brought upon ourselves since the beginning of the industrial age, at least.

00:04:01

But I digress again.

00:04:09

After we listen to today’s talk, I’ll pass along some more information about Erowid.org and tell you how you can get involved in discussing the ideas they present here, which I believe

00:04:16

go to the heart of all our information gathering, namely, how do you know what information to

00:04:22

trust and what information not to trust?

00:04:28

You know, we all have our own processes for doing this, of course.

00:04:32

Even though we may not realize it, we have some sort of a system.

00:04:35

I know my system starts with one single absolute.

00:04:42

That is, never, as in never, trust or believe anything that the U.S. government says without first personally verifying the information.

00:04:46

When it comes to what members of the Bush crime family are saying, it seems to me that

00:04:50

it’s best not to believe anything you hear or read and only believe half the things you

00:04:55

see, if that.

00:04:57

Well, there I go again.

00:04:58

Sorry about that.

00:05:00

So, without any further political comments, here are Earth and Fire Erowid at Mind States 2002, inaugurating a discussion about possible ways to build a network of trust around vital information.

00:05:22

The problem we talked about in our first talk about how the accumulation of knowledge that humans have access to

00:05:26

is just growing at a rate that is difficult to really even conceive of.

00:05:30

The access to knowledge is getting out of control.

00:05:33

And our ability to manage that information

00:05:36

and to understand what it means and make sense of it

00:05:38

and put it in the proper perspectives is sort of falling behind for most.

00:05:42

The tools for keeping track of that aren’t really keeping up with the rate

00:05:47

at which the access to information is expanding.

00:05:51

And partially, for me, in thinking about that,

00:05:54

it’s not only because the overall quantity of information

00:05:58

is becoming so great,

00:06:01

but that as the quantity of information becomes so great, each particular piece that

00:06:07

gets published, or each piece of information that’s sort of thrown out there into the collection

00:06:13

of information, is the context, the other pieces of information around it that are related

00:06:19

to it, have such a strong impact on what that piece of information means.

00:06:31

Two hundred years ago, things which were printed had a particular authority to them, which things which are printed or published on the internet today don’t necessarily

00:06:35

have.

00:06:36

You can be a ninth grader writing about physics, and as an audience, you wouldn’t necessarily

00:06:42

know that, and it’s very difficult to know what level of trust to put in any particular piece of information you read,

00:06:48

which is more difficult than perhaps pre-internet.

00:06:52

And so with the levels of knowledge that are available

00:06:56

in so many different fields,

00:06:57

even in micro-micro-niches you have the problem

00:07:01

where one single human is, for many places, for many types of knowledge,

00:07:06

no one human can keep all of the information in their head at any given time in order to

00:07:10

make sense of it.

00:07:11

And so the tools in order to make sense of the information, even for one person, let

00:07:15

alone a population, are sort of being developed at this point to try to facilitate better

00:07:21

understanding so we can really move ahead.

00:07:26

The other

00:07:27

major piece

00:07:30

of the problem is that

00:07:31

traditionally

00:07:32

the expense and investment necessary

00:07:35

to publish was so large that it was

00:07:38

restricted to a very small number of people.

00:07:40

And now the internet

00:07:42

makes it possible for a ninth grader

00:07:44

or something to publish their physics paper that they’ve written.

00:07:45

And so the costs of publishing on the internet

00:07:50

have gone way down.

00:07:56

One of the problems… The reason that it’s a problem that traditionally there has been

00:07:59

this restriction on who can publish is that the people who are restricting what is published

00:08:05

are exerting an editorial control, which is good.

00:08:10

You don’t want to necessarily read everything that everyone writes on every topic.

00:08:13

But what happens to that is that the people who are exerting editorial control are often

00:08:19

making choices on the basis of values and decisions which are not directly caused by the things that you want them to be caused by.

00:08:29

So you have the definition of science, which is sort of, you know, one way to look at it.

00:08:36

So, for instance, the definition of science, one definition of science might be that you’d like to see

00:08:42

a long-term development of valid, factual

00:08:46

data that can be useful to

00:08:48

create models of the world that we can use

00:08:50

to make predictions about things that will happen

00:08:51

and then build further models off of those.

00:08:54

And in the publication of science

00:08:56

that happens, there’s this peer-review

00:08:58

system that has been traditionally

00:09:00

the last, I don’t know how long that’s been going on.

00:09:03

There was a foundation of the Royal Society

00:09:04

by Newton in the late 1600s.

00:09:07

Late 1600s.

00:09:07

Early 1700s.

00:09:09

The system is basically that there are people who are the publishers,

00:09:12

who own the system of publishing, and there is a top set of editors,

00:09:18

and they choose a series of what they call peers,

00:09:20

who are people who are supposedly knowledgeable in the area.

00:09:22

And when papers are submitted, they take the paper,

00:09:26

and the top editorial staff decides which ones to send to the reviewers,

00:09:30

the peer group.

00:09:31

Then of the ones they send to the peer group,

00:09:33

the peer group gives feedback about how good the papers are,

00:09:36

and they make critiques and edits and things like that.

00:09:38

And then usually that goes back, I think that goes back to the author,

00:09:40

and then it comes back and gets published in the journal,

00:09:44

if the peers don’t hate it.

00:09:46

The trouble is that inside that peer review system

00:09:49

that we have now, and it’s been this way for a long time,

00:09:52

there are a lot of forces besides fact

00:09:55

and validity which go into whether something is published.

00:09:57

So money, what can be funded to produce the research,

00:10:02

to get somebody to publish it,

00:10:07

to advertise that it’s published enough so that the journal generates interest

00:10:10

in order to make themselves known,

00:10:13

to be considered respected.

00:10:16

And there’s also just sort of general politics,

00:10:18

like you’d expect.

00:10:19

The viewpoint of the publishers and of the reviewers

00:10:21

play a strong role in what they’re willing to publish.

00:10:26

And just the

00:10:28

general politics, like we were talking

00:10:30

about the other day, that NIDA funds

00:10:31

whatever they claim to fund, 85% of the

00:10:33

research into

00:10:35

drug abuse. And so

00:10:37

they are not going to fund things

00:10:39

and they are not going to fund researchers

00:10:41

to do other research who have

00:10:44

published things which they don’t agree with the viewpoint of.

00:10:46

This is what I’m told by researchers.

00:10:49

So that’s sort of a look at what the problem is.

00:10:53

In the concept of the memes,

00:10:55

you’re creating an evolutionary environment for ideas

00:10:57

where ideas survive on the basis of forces

00:11:01

which you would not necessarily choose

00:11:03

and are not explicit.

00:11:06

Science, when they publish an article,

00:11:08

doesn’t say, oh, and we chose this one

00:11:10

because we got a lot of money.

00:11:14

Nothing like that is ever going to show up.

00:11:16

It’s pretty complicated.

00:11:18

I’m not meaning to be saying that science is publishing

00:11:21

on the basis of dollars.

00:11:23

It certainly has an impact.

00:11:26

So one of the examples that is somewhat illustrative of the problem is that there is this recent

00:11:36

Ricaurte study.

00:11:37

George Ricaurte is an MDMA researcher who published in the journal Science, which is

00:11:41

one of the most respected and popular peer-reviewed science

00:11:45

journals in the world.

00:11:46

And he published, I think it came out eight days ago, whatever it was, it was formally

00:11:53

published, this article which claimed to show severe dopamine system damage from taking

00:12:00

MDMA, where previously we knew that MDMA caused serotonin damage at high doses in rats and non-human primates.

00:12:07

We now, this research claims now, sort of what the quote was, was common recreational

00:12:13

doses in humans would cause severe dopamine damage in rhesus monkeys and baboons.

00:12:22

And that further, the author suggests that this may cause

00:12:26

Parkinson’s disease further down the road,

00:12:29

even though the monkeys that they gave this to

00:12:31

didn’t get Parkinson’s or Parkinson’s symptoms,

00:12:35

they sort of thought that maybe in the future,

00:12:38

humans would get, and there are reasons

00:12:40

to support that view.

00:12:44

So this is published.

00:12:46

It’s not only published in Science, which is a peer-reviewed journal,

00:12:50

but it’s published in a summary in Science,

00:12:54

which is a much shorter version of the article,

00:12:57

which is not written by the authors, but written by someone else.

00:13:00

And that thing in Science was, again,

00:13:04

sort of another level of kind of rhetoric

00:13:06

of sort of scariness of how

00:13:08

frightening, oh my god, this is

00:13:09

I can’t really quote, but this is

00:13:12

really bad for ecstasy users.

00:13:13

I think it’s not the

00:13:15

tragedy of ecstasy users. Anyway,

00:13:18

published on CNN, New York

00:13:19

Times, all of the

00:13:21

popular media covered this

00:13:23

thing. They covered it to varying degrees,

00:13:26

and I think that Brandy has some things that she brought up there to talk about it.

00:13:28

The trouble is that within that system of publication,

00:13:32

within the system of the peer-reviewed journals,

00:13:35

the feedback about that publication,

00:13:38

like if I have a critique of that publication,

00:13:40

or if someone who is an expert in the area

00:13:42

has a critique of a publication,

00:13:44

the only way to get that sort of attached in the system to that publication is to get a response

00:13:50

letter published in science itself, sort of formally.

00:13:54

And that, if science chooses not to publish your response to that, it sort of doesn’t

00:13:59

really get attached in the system, they don’t really ever interact. Yeah, and in terms of letters of response,

00:14:07

an interesting situation that we ran into last year, I guess,

00:14:11

was that in the New England Journal of Medicine,

00:14:13

there was an article published about drug information websites on the Internet.

00:14:17

And they named a bunch of websites.

00:14:19

We were one of them.

00:14:20

They named probably 10 maybe.

00:14:24

And went through and gave sort of very brief critiques of the information sources,

00:14:30

primarily pointing out problems with them, inaccuracies, you know, dangers,

00:14:34

sort of focused on dangers of the information that they were presenting

00:14:38

and how, you know, it was reasonably negative.

00:14:41

And so we, you know, spent a reasonable amount of time and wrote up a letter of response, and sent it in to them,

00:14:49

and they refused to publish that. And we’ve actually been told since then that it’s relatively

00:14:53

rare that you would have an article published specifically critiquing an individual or an

00:14:57

organization, and they would refuse to publish the response from that organization that was

00:15:01

being criticized. But so in a situation like that, there’s no…

00:15:05

They actually chose not to post any responses

00:15:07

from any of the organizations that had been criticized.

00:15:10

And so there’s no

00:15:11

response within the system. There’s no

00:15:14

reaction able to be

00:15:16

presented to the public in terms

00:15:18

of within the original publication.

00:15:20

So as a weird

00:15:22

side note, I talked to the person who wrote

00:15:24

the article and sent him our response,

00:15:27

and he said, oh, this is a great response.

00:15:29

I really think this should be published.

00:15:32

He said that he talked to the head editor of the New England Journal of Medicine

00:15:36

and recommended to them that they publish the response,

00:15:38

and he was told that no, they weren’t going to publish the response

00:15:40

because they couldn’t publish anything under the names of Fire and Earth Erowid.

00:15:45

So that’s

00:15:46

sort of another example of the type of

00:15:48

an evolutionary force on a

00:15:50

meme or a pressure on an idea

00:15:52

that is sort of orthogonal

00:15:53

or unrelated to the idea

00:15:55

itself. That they didn’t like our names

00:15:57

so they didn’t publish it.

00:16:00

They can have that reasoning

00:16:02

and we chose to publish with the names

00:16:04

earth and fire because that’s how we choose to publish.

00:16:06

We could have made a different choice about that and put Dan Johnson and Jill Anderson on there,

00:16:10

and they probably would have published the thing.

00:16:11

Maybe not. Who knows?

00:16:15

So, good.

00:16:17

So that’s sort of a picture of kind of the problem.

00:16:21

Does that communicate it at all?

00:16:24

Okay.

00:16:26

So, one of the

00:16:27

an idea for a solution

00:16:30

and a view of what can

00:16:31

change in the current

00:16:33

internet publishing world

00:16:35

that was unavailable

00:16:37

because of the expense of

00:16:39

the previous publishing type is a

00:16:41

type of

00:16:42

feedback system into the system of publishing, which

00:16:47

is going on right now. This isn’t a totally novel idea. It isn’t intended to be suggested

00:16:51

as a completely novel system. It’s a natural process where, I mean, already in a much less

00:16:58

formal way, we took this response we wrote to the New England Journal of Medicine

00:17:01

and we put it on our site and we published it in our newsletter and we told people about it

00:17:05

and we sent it to a couple of email lists.

00:17:07

And so within

00:17:08

a community of people who read the stuff that we

00:17:11

write, some

00:17:13

portion of those people saw the response.

00:17:15

So there is a little bit of a feedback

00:17:17

mechanism there, but it is not

00:17:20

there’s sort of a fundamental problem

00:17:22

when the original article itself

00:17:23

is not attached to that opinion.

00:17:25

There’s no way to trace a line from one to the other.

00:17:29

And so the solution, one solution, is to come up with systems of publication which attach feedback back into the articles themselves.

00:17:44

And simple examples of this are there’s a site called ePinions,

00:17:50

where articles can be sort of rated by individuals who visit the site.

00:17:52

You visit the site, you can give this a one to five.

00:17:54

There’s a lot of systems like this.

00:17:56

There’s Slashdot.

00:17:58

I don’t know if people are familiar with Slashdot.

00:18:01

It’s sort of a web board.

00:18:02

People can post whatever they want.

00:18:06

But based on, I think, the rating of the author and individual reviewers,

00:18:09

there’s sort of a rotating team of reviewers

00:18:10

who give articles particular ratings.

00:18:13

And so that’s sort of a system

00:18:16

that’s a little bit like what we’re proposing.

00:18:19

And eBay is another one.

00:18:20

If people have used eBay,

00:18:21

when you buy something,

00:18:22

you can give a seller a number, I think

00:18:25

a 1 to 5, 1 to 3?

00:18:28

5 stars, something like that?

00:18:30

Rating. And there

00:18:32

are reasons why those don’t really, those have a lot of

00:18:33

problems with them. Well, so

00:18:35

the problem, like for instance on eBay,

00:18:38

one of the problems that we experienced

00:18:40

was, I give,

00:18:42

so after you buy

00:18:44

something on eBay,

00:18:47

you have the opportunity to,

00:18:49

both the seller and the buyer have the opportunity to give each other

00:18:50

between one and five stars.

00:18:53

And one of the examples,

00:18:55

one of the things that happened to me

00:18:56

was that I chose to buy something,

00:18:58

I won a bid on something on eBay,

00:19:01

and the woman had listed that she took Visa,

00:19:04

MasterCard, PayPal, whatever.

00:19:06

And after I bought it, she said,

00:19:07

oh, I don’t take PayPal or Visa anymore.

00:19:09

If you’d like to pay for this,

00:19:11

you have to send a check three weeks ahead of time,

00:19:13

and then I’ll send it to you.

00:19:14

And I said, you know,

00:19:15

I’m not interested.

00:19:16

Like me.

00:19:17

And she said, oh, well, I’ll destroy your rating then.

00:19:21

I’ll give you a one on this system,

00:19:23

and I’ll write that you’re the worst person ever and you lied to me

00:19:26

and no one should ever sell to you.

00:19:27

And so

00:19:28

there’s a

00:19:30

lot of ways to game systems.

00:19:34

If you don’t design systems that are set

00:19:36

up to have some way of

00:19:37

limiting the way that you can

00:19:39

hack them, then

00:19:41

that kind of stuff is pretty easy.

00:19:43

A lot of people don’t see the curve, right?

00:19:46

Sure, sure, sure. But that requires that somebody go to the trouble of checking out the person

00:19:54

who’s giving you a bad rating to make sure to see what their ratings are and how, you know…

00:19:58

And that isn’t explicit in the system.

00:20:00

Right, it’s not explicit.

00:20:01

So there are problems with a number of those types of systems. The solution that I have in mind for sort of a picture of a solution or an idea of a solution

00:20:09

is to try to create a system in which the evolutionary forces which cause the shift of ideas through time

00:20:17

and the forces that are causing them to change will be what you want them to be.

00:20:25

You are choosing the forces that are

00:20:27

so if you want it to be science

00:20:29

if you want it to be fact and validity

00:20:31

the system by which

00:20:33

fact and valid facts

00:20:35

sort of bubble up to the top

00:20:36

is

00:21:39

fact- and validity-based.

00:20:43

And I think it’s also important to say that

00:20:45

there’s two sides to it. One is which

00:20:47

the forces that are

00:20:49

causing ideas to bubble to the top are the ones that you

00:20:52

want them to be, or at least

00:20:53

you’re able to tell what those forces are.

00:20:56

Even if you can’t specifically say, I want them to be

00:20:58

you know, in validity and

00:20:59

you know, that you can

00:21:02

at least identify which forces are causing

00:21:04

the particular ideas or documents or pieces of knowledge to head to the top.

00:21:10

So what are the key concepts?

00:21:12

So here’s sort of a sketch of the solution.

00:21:15

One of the key concepts is trust.

00:21:18

One of the things that’s really palpable while we do Erowid, when we interact with people,

00:21:23

is that for whatever reason, people a lot of times

00:21:26

trust what we say, sometimes

00:21:28

to a scary degree.

00:21:29

People accept things that are on our site

00:21:32

as factual when maybe that

00:21:34

wouldn’t be exactly how I would tell you

00:21:36

to take it.

00:21:39

And so,

00:21:40

there is this idea of trust

00:21:41

that people have, in particular authors

00:21:44

and particular publishers.

00:21:49

Imagine a system where each author has a unique login.

00:21:53

Most people are familiar with the idea of a login.

00:21:55

Each author is assigned a trust rating of some kind.

00:22:01

This is sort of a simple case.

00:22:04

Let’s say Erowid assigns each author

00:22:06

a trust rating. So I’m going to put up a document by

00:22:07

John, and I’m going to say that

00:22:09

John gets a trust rating of

00:22:11

Let’s say 1 to 10, with 10 being good.

00:22:15

And we would actually

00:22:17

categorize those. I would say that I trust

00:22:19

John on chemistry.

00:22:21

I give him an 8 in his knowledge of chemistry.

00:22:24

I give him a 2 in his knowledge of

00:22:26

cooking. If he writes articles

00:22:28

about cooking, you’ll probably take

00:22:30

those with a grain of salt. But for chemistry,

00:22:32

I’m saying that I trust him

00:22:33

in his chemistry knowledge.
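
The per-author, per-category trust ratings described here can be mocked up in a few lines. This is purely a hypothetical sketch, not anything Erowid actually built: the dictionary layout and the neutral default of 5 are my assumptions; only the 1-to-10 scale and the John/chemistry/cooking example come from the talk.

```python
# Hypothetical sketch of per-author, per-category trust ratings.
trust = {}  # author -> {category -> rating on a 1-10 scale, 10 = most trusted}

def rate_author(author, category, rating):
    """Record how much we trust an author within one category."""
    if not 1 <= rating <= 10:
        raise ValueError("ratings run from 1 (low) to 10 (high)")
    trust.setdefault(author, {})[category] = rating

def trust_in(author, category):
    """Look up trust; unknown authors or categories get a neutral 5 (assumed)."""
    return trust.get(author, {}).get(category, 5)

rate_author("John", "chemistry", 8)  # trust John on chemistry
rate_author("John", "cooking", 2)    # but not on cooking

print(trust_in("John", "chemistry"))  # 8
print(trust_in("John", "cooking"))    # 2
```

Keeping the ratings per category is the whole point of the example: the same author can be highly trusted in one domain and barely trusted in another.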

00:22:36

So, in a very simple

00:22:38

context, I might

00:22:39

imagine you’re using a search engine. You have all

00:22:42

the documents, all of the authors of all of the

00:22:44

documents are rated in this model.

00:22:47

And you do a search.

00:22:47

And you’ve decided that you…

00:22:48

Erowid has decided that we rate Sasha a 10 on chemistry.

00:22:53

And so when you do a search, you can sort the results of the documents.

00:22:57

You were looking for phenethylamines or something.

00:22:59

And you sorted the documents by how much to trust them or how much Erowid trusted them.

00:23:04

And the ones with Sasha as the author would go to the top.

00:23:07

And maybe the ones written by Jimmy, Bobo Jimmy or whatever,

00:23:11

would be kind of at the bottom

00:23:13

because we don’t think too much of the science.

00:23:17

And then we go to the next step.

00:23:19

We go to the next step to where

00:23:21

individual people are able to rate documents.

00:23:25

So the trust level is for authors.

00:23:27

The trust level is for people.

00:23:29

You have a trust level in a category for a person.

00:23:32

Documents have a separate rating system where an individual can say,

00:23:36

this particular document that I read, maybe I don’t know the author,

00:23:39

maybe I do know the author, but I read this and I say,

00:23:42

I think this is great, this is right on, I give this

00:23:45

a 10.

00:23:46

Now you’ve got a document that has a 10 rating by Erowid, say, by an author who has a 10

00:23:53

rating, and that combination, in say a search results example, can cause documents, I do

00:23:59

a search for documents about phenethylamines, papers by an author with a 10 that have a

00:24:03

rating of a 10,

00:24:07

they’re right at the top when you do a search,

00:24:09

you know, given your search terms.
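
The search-ranking step just described, where author trust and the document's own rating together push results to the top, could look something like this minimal sketch. The document records and the simple tuple sort are my assumptions; a real system would weight and normalize the scores.

```python
# Hypothetical search results, each carrying the author's trust rating
# and the document's own rating (both on the talk's 1-10 scale).
documents = [
    {"title": "Phenethylamine synthesis notes", "author_trust": 10, "doc_rating": 10},
    {"title": "Forum rumor",                    "author_trust": 2,  "doc_rating": 3},
    {"title": "Decent overview",                "author_trust": 7,  "doc_rating": 6},
]

def ranked(results):
    # Simplest possible combination: sort on the (trust, rating) pair,
    # highest first. A real ranking function would be more careful.
    return sorted(results,
                  key=lambda d: (d["author_trust"], d["doc_rating"]),
                  reverse=True)

for doc in ranked(documents):
    print(doc["title"])
# The 10/10 paper prints first; the low-trust rumor lands at the bottom.
```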

00:24:14

So imagine when you’re viewing a document,

00:24:15

so this goes a little bit back to the New England Journal of Medicine article.

00:24:17

Imagine when every person

00:24:19

who was reading the New England Journal of Medicine article

00:24:21

had direct access to all of the critiques

00:24:23

written by anybody who ever chose to write a critique.

00:24:26

And in

00:24:28

Erowid,

00:24:29

you’re on Erowid,

00:24:31

you’re on some website that you are interested

00:24:34

in what their opinion is.

00:24:36

We might rate

00:24:37

the reviews which have been written.

00:24:40

Each are associated

00:24:42

with the trust rating of the author

00:24:44

as well as potentially being rated themselves.

00:24:47

So you might be able to…

00:24:48

So someone writes a response.

00:24:50

We write a response in the New England Journal of Medicine.

00:24:53

And the author of the original article

00:24:55

critiques our response.

00:24:57

And so each document that you’re looking at,

00:24:59

you can see all of the associated responses

00:25:02

to those documents.

00:25:03

And each of those responses can have responses to them.

00:25:06

Every response can have a rating.

00:25:08

Every person who writes a response has a trust level.

00:25:12

So it’s all interacting all the way down the line.
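
The response-threading idea, where every critique is itself a document that can be rated and responded to in turn, might be sketched like this. The `Document` class and its fields are hypothetical illustrations of the structure, not an actual design from the talk.

```python
# Hypothetical sketch: responses are documents too, so critiques stay
# attached to the original article all the way down the thread.
class Document:
    def __init__(self, title, author, rating=None):
        self.title = title
        self.author = author
        self.rating = rating      # optional 1-10 document rating
        self.responses = []       # each response is itself a Document

    def respond(self, response):
        self.responses.append(response)
        return response

def thread_size(doc):
    """Count a document plus every response beneath it, recursively."""
    return 1 + sum(thread_size(r) for r in doc.responses)

# Modeled loosely on the New England Journal of Medicine example:
article = Document("Drug-website critique", "Original Author")
critique = article.respond(Document("Erowid response", "Earth & Fire"))
critique.respond(Document("Author's rebuttal", "Original Author"))

print(thread_size(article))  # 3
```

Because a response is the same type as an article, every node in the tree can carry its own rating and its author's trust level, which is what lets the whole thread "interact all the way down the line."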

00:25:16

I think one point that I would throw in at this point is that sometimes this sounds a

00:25:21

little bit like you’re voting for the accuracy of a document.

00:25:28

That’s not what the system is.

00:25:29

The system is not intended to be a…

00:25:33

Popular.

00:25:34

It’s not a popular vote for the veracity of a document.

00:25:37

Because 10,000 people think a particular fact is true doesn’t make the fact true.

00:25:41

The myths about various psychoactives are an extremely good example. Just because everyone

00:25:45

thinks that mescaline comes in microdots

00:25:47

doesn’t mean it’s true.

00:25:49

Mescaline doesn’t come in microdots, right?

00:25:51

No matter how many people think it does, it doesn’t.

00:25:54

So the idea is to create

00:25:55

a system by which you can have

00:25:57

basically

00:25:58

what you’re creating is a trust tree

00:26:02

in some way. You’re creating a

00:26:03

trust network.

00:26:06

Now imagine that

00:26:07

so the idea of

00:26:09

the title of this design is called

00:26:11

Grassroots Peer Review. The idea is

00:26:13

that in the simplest system

00:26:15

or a simpler system, Erowid would

00:26:17

like let’s say we design the system and Erowid would

00:26:19

assign the trust to all of the authors.

00:26:21

So we decided Sasha is the smart guy in

00:26:23

chemistry and Mark’s the smart guy in Mark stuff. VRML.

00:26:29

Yeah, we have 10 there.

00:26:32

Excellent. And we go through, as documents are submitted, we sort of choose to be able

00:26:40

to rate both the author and the document. And the reason that this is particularly useful for us

00:26:46

is that as sort of, and this has become really palpably

00:26:51

necessary for us.

00:26:52

When we first started Erowid, we

00:26:54

were able to read every single document we published.

00:26:58

Every single document was either read by one or a lot of times

00:27:00

both of us.

00:27:01

And we would have discussions about that.

00:27:02

And we would have discussions with our friends

00:27:04

as those were published.

00:27:06

The size of the site has gotten out of control.

00:27:08

And it’s just like the Internet in general or information in general.

00:27:13

All of the information that’s presented isn’t going through any kind of a really formal review process.

00:27:18

We have an internal informal review process by which documents come in,

00:27:23

we send them out to a group of reviewers that we think will be useful for this.

00:27:27

But a lot of documents either get lost in that system and you can’t keep track of them,

00:27:32

or for instance, like experience reports or a lot of other things, we might let TJ answer

00:27:40

a question or someone answer a question on the site in Ask Erowid.

00:27:48

The more people that are involved in the approval of information that gets published on the site, the harder it is to track

00:27:50

as an organization the reliability

00:27:54

of the information. And another aspect of that is

00:27:57

as time passes, the site’s been up for six years now,

00:28:00

information that was true six years ago might not be true now.

00:28:03

The legal status of some particular chemical might have actually changed between when we first published it and now, and

00:28:08

nobody might have noticed that that piece of information needs to be fixed.

00:28:12

So one of the queries we want to run as editors of a library is,

00:28:18

what’s the document which was reviewed the longest ago?

00:28:22

Or what’s the document which was reviewed

00:28:25

by the least trusted person?

00:28:28

Or which document was rated lowest

00:28:32

by the most trusted person?

00:28:34

There’s a series of sort of interlocking questions

00:28:36

that you want to be able to ask about a database.
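The editor queries just listed can be sketched as simple selections over a review table. A minimal illustration in Python; the field layout and sample data here are hypothetical, not Erowid’s actual schema:

```python
from datetime import date

# Each review: (document, reviewer, reviewer_trust, rating, review_date)
reviews = [
    ("dxm-faq",      "alice", 10, 3, date(1999, 5, 1)),
    ("law-page",     "bob",    4, 8, date(2001, 2, 9)),
    ("mescaline-qa", "alice", 10, 9, date(2002, 1, 3)),
]

# "What's the document which was reviewed the longest ago?"
oldest = min(reviews, key=lambda r: r[4])[0]

# "What's the document which was reviewed by the least trusted person?"
least_trusted = min(reviews, key=lambda r: r[2])[0]

# "Which document was rated lowest by the most trusted person?"
top_trust = max(r[2] for r in reviews)
lowest_by_top = min((r for r in reviews if r[2] == top_trust),
                    key=lambda r: r[3])[0]

print(oldest, least_trusted, lowest_by_top)  # dxm-faq law-page dxm-faq
```

Each question becomes a different sort key over the same data, which is why the speakers describe them as interlocking queries against one database.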

00:28:39

Because you don’t know, like a lot of times,

00:28:41

you might go to Erowid and you’re reading a document,

00:28:43

the question that I would have when I go to other places is,

00:28:46

did the head editor of this website, whom I trust,

00:28:51

did they read this thing, and what did they think of it?

00:28:55

Maybe he didn’t read it.

00:28:56

Knowing that the head editor of a book hadn’t actually read the article

00:29:00

would be an important thing to know.

00:29:03

Also, there’s a lot of reasons why things are published on Erowid,

00:29:05

and a lot of us, we publish things as historical archives.

00:29:09

You know, it’s a document which someone

00:29:10

wrote 50 years ago, which was

00:29:12

extremely popular, extremely well-known, and it’s something

00:29:14

we’re publishing for that reason, but I wouldn’t necessarily

00:29:16

recommend that you use that for your dosage

00:29:18

guide, you know, for some particular

00:29:20

substance that you’re taking. And so,

00:29:22

without

00:29:22

some ability to express to people who are

00:29:26

reading them that this is

00:29:27

a document that we think you should have access

00:29:29

to, but as far as sort of

00:29:31

trust level or the

00:29:33

rating of the information in it, that’s

00:29:35

about a five.

00:29:37

And you just don’t want to,

00:29:39

I mean, you really don’t want to.

00:29:41

We’re trying to simplify it into sort of numbers,

00:29:43

but the fact is that most of the time

00:29:45

when you’re interested in a document

00:29:46

enough to pay attention to its reliability,

00:29:49

you don’t want just a number.

00:29:50

That might be a way to kind of key into

00:29:52

what that document kind of looks like.

00:29:55

But the simplest possible representation

00:29:58

has to be something like minimum, maximum, average,

00:30:01

you know, kind of thing.

00:30:02

But then immediately you need to be able to look

00:30:04

at all the actual reviews. You need to be able to look

00:30:06

at the comments that have been made and the links that people

00:30:08

have included to sort of refute or

00:30:09

support a document.

00:30:12

And so I don’t want to

00:30:14

just see that Jim gave it a 5. I need to see

00:30:16

that Jim gave it a 5 and

00:30:18

he gave it a 5 for these reasons.

00:30:20

And so what this is

00:30:22

the goal of this thing

00:30:24

is to have a process by which the general public or groups like Erowid or other groups can publish information which can go through a type of rigorous peer review, where you as a person who’s reading that document can see how much it has been reviewed explicitly, and you can read the reviews and check the data for yourself.
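The min/max/average summary plus drill-down into individual reviews described above can be sketched roughly as follows; the field names and sample reviews are illustrative only:

```python
# Each review keeps the rating and the reviewer's stated reasons,
# so the numeric summary can always be expanded into the full reviews.
reviews = [
    {"reviewer": "Jim", "rating": 5,
     "reason": "historical archive; not a dosage guide"},
    {"reviewer": "Ann", "rating": 9,
     "reason": "well sourced and recently re-checked"},
]

ratings = [r["rating"] for r in reviews]
summary = {"min": min(ratings), "max": max(ratings),
           "avg": sum(ratings) / len(ratings)}
print(summary)  # {'min': 5, 'max': 9, 'avg': 7.0}

# The number alone isn't enough; a reader can drill into the reviews:
for r in reviews:
    print(f"{r['reviewer']} gave it a {r['rating']}: {r['reason']}")
```

The point made in the talk is that the summary numbers are only a key into the underlying reviews, which is why each entry carries its reasons alongside the rating.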

00:30:46

There needs to be a connection, more and more

00:30:47

connection between the data that

00:30:50

we have access to, the summaries we have access to

00:30:52

and the

00:30:53

things which support or refute

00:30:55

that information.

00:30:58

And documents age.

00:30:59

That’s a huge deal, the aging

00:31:01

of information. And when you’re presented

00:31:03

with, like, I might read,

00:31:05

The Dancing Wu Li Masters is a book that was one of the early popularized

00:31:11

introductions to some of the quantum mechanics stuff.

00:31:15

But I believe it’s now sort of somewhat out of date.

00:31:17

There are elements of it which are out of date.

00:31:19

When you pick up that book, there’s no connection between that book

00:31:21

and the documents which have sort of changed

00:31:26

the state of knowledge.

00:31:27

And this

00:31:28

adds to the amount of information

00:31:31

we have access to, and the decisions that we have to make

00:31:34

are dependent on

00:31:36

knowing things which only

00:31:37

experts can know, in a lot of ways.

00:31:40

It becomes

00:31:41

more and more important that we can get access

00:31:43

to

00:31:44

all of the fundamentals of how this game works.

00:31:48

All of the fundamentals of how that’s being written.

00:31:51

All of the fundamentals of how those decisions are being made,

00:31:53

what an expert opinion consists of.

00:31:56

So the next step that I would explain in terms of the system is,

00:32:01

so now you have this concept that you’ve got authors who have ratings, who have trust levels, and documents that have ratings.

00:32:04

But you don’t actually agree with Erowid’s

00:32:06

trust levels. Erowid says that they

00:32:07

trust Sasha a 10, but

00:32:09

everything you’ve ever read about Sasha, that Sasha’s

00:32:11

written, you don’t like, and you don’t want

00:32:13

his rating to be a 10. And so

00:32:15

the system would be such that you could

00:32:17

go in and say, and you could change,

00:32:20

you could set your own trust levels

00:32:21

for authors. So you can go in and say,

00:32:24

I give Sasha just a 5.

00:32:25

And so now when you do a search,

00:32:27

you can use your own trust level,

00:32:29

your own trust system,

00:32:31

or you can use Erowid’s trust system,

00:32:32

or maybe you can use the ACLU’s trust system,

00:32:34

or you could use Sasha’s trust system.

00:32:36

You could pick somebody

00:32:37

who had published their trust system

00:32:38

and use that as the basis

00:32:40

for what it is that you’re going to trust.

00:32:42

What documents are going to come to the top,

00:32:44

what order things are going to appear in,

00:32:46

what ratings they’re going to get at the top of your page.

00:32:49

And so, by virtue of being able to have your own network of trust,

00:32:54

as you sort of develop an interest in an area,

00:32:58

you’ll get to know individual authors or individuals.

00:33:01

Like you have a friend who really knows something.

00:33:03

Look at documents.

00:33:04

If you just wanted to show the stories.

00:33:05

You know, I trust them entirely.

00:33:07

I just, you know, in the example he was giving,

00:33:09

let’s say someone comes into this world

00:33:12

of psychoactive plants and chemicals, and they’re looking up information,

00:33:14

but they know nothing about it. They don’t know any of the authors,

00:33:16

they don’t know any of the publishers, they don’t know anything

00:33:18

about it. But they’ve got a friend who’s the person

00:33:20

who’s sort of introducing them to this world.

00:33:21

They trust their friend. So you can use

00:33:24

your friend’s trust system.

00:33:25

Your friend has already gone through

00:33:26

and tweaked all of their settings

00:33:27

and said, oh, Sasha’s trustworthy,

00:33:29

Joe’s not, this guy is, this guy isn’t.

00:33:32

You can just adopt their trust settings

00:33:34

in order to be able to use that

00:33:36

for how you’re viewing documents.
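The adoptable trust settings being described might look something like this in code. A minimal sketch; the class and variable names are hypothetical, not a real Erowid API:

```python
DEFAULT_TRUST = 5  # authors you haven't rated sit at a neutral default


class TrustSystem:
    def __init__(self, adopted=None):
        # Optionally start from someone else's published trust system.
        self.levels = dict(adopted.levels) if adopted else {}

    def set_trust(self, author, level):
        self.levels[author] = level  # your personal override

    def trust(self, author):
        return self.levels.get(author, DEFAULT_TRUST)


erowid = TrustSystem()
erowid.set_trust("Sasha", 10)

# A newcomer adopts Erowid's settings wholesale...
reader = TrustSystem(adopted=erowid)
# ...then overrides the one entry they disagree with.
reader.set_trust("Sasha", 5)

print(erowid.trust("Sasha"), reader.trust("Sasha"))  # 10 5
```

Copying the adopted levels rather than referencing them means the newcomer’s later tweaks don’t alter the friend’s published trust system, which matches the "adopt, then diverge" behavior described in the talk.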

00:33:38

One of the examples that Mark suggested

00:33:40

when we were discussing this the other day

00:33:41

was the idea of a parent:

00:33:44

00:33:45

when your child is born

00:33:47

you present them with your

00:33:49

trust network

00:33:51

so that the child would see the world like you

00:33:54

do as they’re sort of viewing the documents

00:33:55

they accept

00:33:57

you would give your child this trust

00:34:00

tree or trust network

00:34:01

so that when they go and look at sites

00:34:03

or documents, they

00:34:06

would be seeing them in the same way that you would be seeing them.

00:34:08

As they develop their own opinion about it,

00:34:10

of course they’re likely to start changing their

00:34:12

trust systems.

00:34:14

So the next step is the transferable, the idea of

00:34:16

transferable trust.

00:34:17

What has become

00:34:20

really palpably present

00:34:22

for people who are interested, it’s very

00:34:24

obvious in the areas of science that there are these problems,

00:34:27

that we have this problem where science is being driven,

00:34:31

not entirely, certainly by no means entirely,

00:34:35

but there is a substantial component of it, which are politics and money.

00:34:40

And the population at large is being sold these things

00:34:44

on the basis of economic forces,

00:34:46

which are outside the control of the individuals who consume the information.

00:34:54

And so the design for a system, what needs to happen, I think,

00:34:58

what is extremely important for kind of the intellectual development of the population at large,

00:35:04

is that there are systems developed

00:35:06

which become sort of grassroots level

00:35:09

where individuals can choose to reply to

00:35:12

and rate documents.

00:35:16

You know, like music reviews on places like CNN.

00:35:18

Right.

00:35:19

It’s already happening.

00:35:20

Yeah, so there are a lot of sort of

00:35:22

nascent versions of things like this,

00:35:26

but they tend to be very sort of isolated in their functionality

00:35:29

and not implemented around the sciences.

00:35:33

Well, one of the ways we think about this is just imagine if the entire,

00:35:37

everything on the Internet had one system like this that was all hooked in together

00:35:42

so that everything you read was

00:35:45

linked into everything else that

00:35:47

related to it in a way

00:35:50

so that when you read the original document

00:35:52

you don’t have to wonder if anything else

00:35:53

relates to it. You can find that from there.

00:35:56

It’s like one of the ways that’s sort of

00:35:58

fairly easy to visualize this is

00:35:59

if you were able to go to Google

00:36:02

and put in the URL of any

00:36:04

page and find out all of the reviews and critiques which have been written about that document,

00:36:09

it’s a fairly complicated thing to try and do now.

00:36:12

In fact, what I do when I go and read documents is go and look for people who reference it.

00:36:17

But generally speaking, that’s a sort of fruitless task.

00:36:20

That doesn’t work very well.

00:36:22

task is not a lot. That doesn’t work very well.

00:36:24

Well, in a lot of cases, articles which are written, it is actually to

00:36:26

the author’s great benefit to

00:36:27

not have you find the

00:36:29

reviews and critiques that have been written about it.

00:36:32

And so, they’re going to do everything they can

00:36:33

to not connect, well, in

00:36:35

certain instances.

00:36:37

But they’re going to tell you something, too.

00:36:39

When in cases where they’re more harshly

00:36:41

criticized or less believed by people,

00:36:44

there’s

00:36:44

no reason why they would be helping you

00:36:48

to do that. And so it needs to be a system

00:36:50

outside of the original publisher, because the

00:36:52

original publisher can’t, generally speaking,

00:36:54

be trusted to

00:36:55

automatically

00:36:56

provide you with that feedback.

00:37:00

Although my little,

00:37:02

our little

00:37:03

model for this is unique-ish, I think,

00:37:07

I’ve talked to a lot of people about these sorts of things,

00:37:10

this movement towards a democratization of systems of knowledge management,

00:37:16

as the phrase often goes, is pretty common in open source internet systems development work. I work with an open source group,

00:37:27

which was started by Doug Engelbart,

00:37:32

who was the inventor of the mouse.

00:37:34

He had this vision in the late 50s and early 60s

00:37:38

of computers as human intellect augmentation devices,

00:37:42

as ways of creating communication systems between people

00:37:45

and within a database of basically sort of, you know, he called it Augment and he called

00:37:55

it the Open Hyperdocument System, and he called it a number of other things that sort of were

00:38:00

very suggestive of this design that he had, which didn’t ever really get built very well.

00:38:05

He’s more of an idea guy than he is an implementation guy.

00:38:09

But when he invented the mouse, he also,

00:38:12

I don’t know if he came up with it

00:38:14

or if it was already something that existed,

00:38:15

a chording keyboard.

00:38:16

So you use a mouse with one hand

00:38:17

and a chording keyboard with the other

00:38:19

so that you don’t ever move your hands.

00:38:21

And to watch him use the system that they created

00:38:24

is really pretty impressive, whatever,

00:38:26

that he’s able to navigate through documents

00:38:29

in a way that is sort of not really possible,

00:38:31

when you have to move your hands to move the cursor.

00:38:33

Because he had this vision of a document system

00:38:37

where there was no separation between

00:38:41

the way that you view a document

00:38:42

and the content of the document itself,

00:38:44

and it’s sort of kind of complicated to describe.

00:38:46

One of the things that is a critique

00:38:50

of this particular idea that several people have mentioned

00:38:53

is it sounds really complex.

00:38:55

Are people actually going to use it?

00:38:57

Because if nobody uses it, then it’s totally useless.

00:39:01

So obviously the system needs to be set up in such a way

00:39:03

that there isn’t any active work required

00:39:05

on the part of the average viewer.

00:39:09

That, you know, within our particular example,

00:39:12

as you’re viewing Erowid,

00:39:14

we’ve done the work of rating authors

00:39:16

and of giving trust levels to authors

00:39:18

and of rating documents.

00:39:20

And when you’re viewing it,

00:39:21

you’re viewing it through our trust system,

00:39:23

and so it doesn’t matter.

00:39:24

You don’t have to do anything special

00:39:25

documents are in fact being rated as you view them

00:39:29

without you doing anything

00:39:30

the feedback mechanism is that

00:39:34

let’s say this person writes this article and they give this abstract

00:39:37

and either on the article or the abstract

00:39:39

you could write a response to that

00:39:43

which would say the fact that the results are not statistically significant

00:39:48

eliminates any value to your judgment

00:39:51

about how important this is scientifically

00:39:53

and you would rate this on the

00:39:56

let’s say it’s about chemistry or something

00:39:59

you’d rate it very low

00:40:00

and the people who trusted your opinion

00:40:02

would be more likely to see your response to that.

00:40:07

People who said that they wanted to see

00:40:10

your opinion about things

00:40:12

would see your opinion sort of near the top

00:40:14

of the reviews that happen.

00:40:16

And from the document itself,

00:40:18

if it’s published within this system

00:40:20

or is linked to within this system,

00:40:24

people who go to that document can see your response to it

00:40:27

as an integral part of the system.

00:40:29

And also, the other step as far as your comment that

00:40:34

but you have your own biases as well,

00:40:36

is that part of the system would be that people who are viewing it,

00:40:40

people who view your response to that article, say,

00:40:42

can go in and could very likely actually see your trust settings,

00:40:47

so they could actually see, and your rating,

00:40:50

so that they can see that you rate this particular author as a very low trust,

00:40:53

so that it’s more open, it’s more apparent what your bias is.

00:40:57

Your biases are more explicit in this system than they are without a system like this.

00:41:02

So,

00:41:05

like,

00:41:07

imagine that this system was an open source piece of software

00:41:10

you could go and simply just plug into your web server

00:41:12

and use. You could

00:41:13

choose whether or not the

00:41:15

ratings, the rating trees and things like that were published

00:41:18

or not. And so if you were a person, an editor,

00:41:19

who chose not to display

00:41:22

the relationships or the trust

00:41:23

between the people, or didn’t

00:41:25

allow

00:41:26

authors to be trusted, but only

00:41:29

allowed papers to be

00:41:32

trusted, you could make that

00:41:33

choice. I think that

00:41:35

there’s a lot of reasons

00:41:37

why you don’t necessarily always want to be

00:41:39

explicit about trust between individuals,

00:41:41

which can be extremely contentious

00:41:43

and sort of cause a lot of problems.

00:41:46

But it is sort of like the kind of, I don’t know, California thing or whatever sort of

00:41:53

psychotherapy thing.

00:41:54

Don’t criticize the person, criticize the behavior.

00:41:57

Don’t say, I hate you.

00:41:58

You say, I hate the way you smash my stuff.

00:42:03

And so I think that there is…

00:42:05

I hate the way you talk about

00:42:06

MDMA neurotoxicity.

00:42:09

But I love you guys.

00:42:11

I love you.

00:42:12

I love you.

00:42:15

Another sort of potential

00:42:17

sort of problem with…

00:42:20

One of the sort of

00:42:21

theoretical problems,

00:42:23

which I’m not sure

00:42:23

that this system might address,

00:42:25

is the problem that people talk about in terms of whirlpools or eddies or cloistering of opinion,

00:42:33

where even though we’re exposed, like in the Internet, we’re exposed to more and more of the world,

00:42:39

you have the potential to be exposed to more and more of the world.

00:42:42

There is sort of a series of articles I’ve read

00:42:46

discussing the fact that it might be possible

00:42:49

that the Internet as it is now

00:42:50

would actually cause people to be exposed to less

00:42:53

than they normally are,

00:42:55

because, like, for instance,

00:42:56

I’m some weird, crazy mushroom psychedelic head guy,

00:43:00

and before in my world,

00:43:02

there weren’t enough of those around in my local community.

00:43:05

So I had to interact with people who did not share my opinion of mushrooms.

00:43:10

It’s possible now on the Internet, this is the claim that’s made,

00:43:13

that I’m the greatest psychedelic mushroom head.

00:43:15

And now I can find enough people in the world who share all of my exact opinions.

00:43:19

You don’t talk to anyone else.

00:43:20

I don’t have to talk to anybody else.

00:43:21

I don’t ever have to go outside of my comfort zone.

00:43:24

And so there is the potential that a system of trust networking

00:43:29

could actually sort of exacerbate that problem.

00:43:32

The reason that I think that that’s not true

00:43:33

is that I think that what we have now

00:43:38

is that people don’t get exposed to the ideas

00:43:41

of the people that they do, in fact, trust.

00:43:48

Like, I’m happy to share with my mother my views of mushrooms.

00:43:51

I tried and failed.

00:43:54

But if she were going to go and search my site,

00:44:00

or look at a series of documents, or look at the web,

00:44:02

and said, well, I wonder how my son thinks of that,

00:44:05

and goes and selects me from her list of people

00:44:07

that she wants to be able to look at the world through their trust eyes,

00:44:10

and then goes and searches the documents,

00:44:12

and finds the things that I believe in,

00:44:13

and finds documents which I’ve rated high through the review system.

00:44:18

She can’t do that right now.


00:44:20

It is extremely difficult for her to find out

00:44:22

which documents on the net, or even on my site, I think are really true and accurate and reliable and which documents

00:44:29

are just sort of weird noise that are kind of interesting for a variety of other reasons.

00:44:34

And so I think that it’s quite possible that this type of system could actually break open

00:44:39

some of the eddying that can happen.

00:44:41

So it doesn’t really seem like a problem with your system,

00:44:45

but it seems like if you were an author

00:44:47

and you wanted yourself to be treated fairly,

00:44:50

you really would have to go find

00:44:52

every single specific instance

00:44:54

where someone used your thing

00:44:56

and make a comment on each one.

00:45:00

Right, so there’s definitely the problem

00:45:02

that single documents can be published

00:45:04

in multiple places.

00:45:05

Is that sort of the problem you’re presenting?

00:45:08

Like one paper could show up a thousand different places on the net.

00:45:11

Is that sort of where you’re at?

00:45:12

Yeah.

00:45:12

Or they could have refuted it a thousand times.

00:45:14

Yeah.

00:45:15

That’s what he’s talking about.

00:45:16

Right, so within…

00:45:17

Right, so that is potentially an implementation problem.

00:45:22

That you have…

00:45:24

Assuming that you don’t have

00:45:26

just one rule database,

00:45:27

you can have a document.

00:45:31

I mean, there’s a lot of documents

00:45:32

in the network

00:45:32

or a thousand different places.

00:45:34

And I’m not exactly sure

00:45:36

what sort of…

00:45:37

So that’s an issue, obviously,

00:45:39

within the current system as well.

00:45:41

You write something

00:45:42

and hundreds of people

00:45:44

write critiques of it

00:45:44

in various places

00:45:45

that you don’t even know of.

00:45:48

Within the system, at least theoretically,

00:45:49

it would be easier to find those responses,

00:45:51

so it would be easier to provide

00:45:53

secondary responses to people who

00:45:55

critique your original work. Whether or not you want to

00:45:57

do that is obviously

00:45:58

how much you want to be arguing that point with everybody

00:46:01

who cares. I think that’s an interesting problem.

00:46:04

Obviously in an ideal

00:46:05

worldwide system, that would be the case.

00:46:08

Duplications of documents

00:46:09

would somehow have the same identifier

00:46:11

and you’d be able to find out where they are.

00:46:12

There’s a thing called an MD5 you can do on

00:46:15

anything.

00:46:18

Any data.

00:46:19

There’s an algorithm which is believed

00:46:22

I don’t think it’s proven, I’m not sure

00:46:23

whether it’s been proven or not,

00:46:26

but is believed to, when you run

00:46:28

this algorithm on a piece of information,

00:46:30

either binary or text,

00:46:32

it creates a

00:46:33

very short, unique

00:46:36

key,

00:46:37

which is

00:46:39

guaranteed by this algorithm to

00:46:42

never…

00:46:43

create the same key for a different set of bits in

00:46:48

a row.

00:46:49

So that you can use that to identify a document uniquely.

00:46:53

The main problem with that system is that if you change one comma, it’s a totally different

00:46:57

document.

00:46:57

So there’s no sort of smart brains about this thing, so if one space gets changed or something, it’s totally…

00:47:06

Well, in the world of the web, obviously,

00:47:08

you change the background

00:47:09

color or the formatting of the page

00:47:11

and you’re done.
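The hashing idea being described can be shown in a few lines of Python. MD5 is not actually proven collision-free (collisions have since been demonstrated), but the sketch illustrates both the unique-fingerprint idea and the one-character brittleness the speakers point out:

```python
import hashlib


def fingerprint(text: str) -> str:
    """Return the MD5 hex digest of a document's text."""
    return hashlib.md5(text.encode("utf-8")).hexdigest()


doc = "Mescaline does not come in microdots."
dup = "Mescaline does not come in microdots."
edited = "Mescaline does not come in microdots. "  # one trailing space added

# Byte-identical copies share a fingerprint, so their reviews can be pooled...
assert fingerprint(doc) == fingerprint(dup)

# ...but even a one-character change yields a completely different key,
# which is the brittleness discussed above.
assert fingerprint(doc) != fingerprint(edited)
```

Identifying copies of a document this way works only for byte-for-byte duplicates; reformatting a page defeats it, exactly as noted in the conversation.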

00:47:14

Let’s imagine

00:47:15

just in a simple example,

00:47:17

what exactly the ratings look like

00:47:19

could vary greatly, but

00:47:21

imagine that everything defaulted

00:47:23

to a 5.

00:47:26

You could rate something a 10, and that meant that you trusted

00:47:28

it, or you could rate it a 1, which meant you didn’t trust

00:47:30

it, and

00:47:32

a 0 means you haven’t read it, you haven’t

00:47:34

rated it. So you don’t have to have read everything,

00:47:36

only the things that you have read and

00:47:38

rated affect

00:47:39

the system.

00:47:44

I’m looking at this as sort of just a complete win

00:47:46

in terms of my management of Erowid.

00:47:48

The difficulty I have with even remembering

00:47:51

the last time I read this document.

00:47:53

Did I check that law page recently?

00:47:55

It’s been edited recently, but what’s the…

00:47:58

Who edited it?

00:47:59

Who edited it and why?

00:48:01

There’s a lot of simple things that can be done on our system

00:48:05

that aren’t grassroots peer review which can improve this,

00:48:09

but this system, as a way for us to manage documents,

00:48:13

is extremely good, and it becomes good immediately.

00:48:15

Basically, it’s the first document that this happens with.

00:48:17

An example of a document that’s currently going through our informal system

00:48:22

is a refutation of an article written by William White about DXM neurotoxicity. There’s this thing called

00:48:32

Olney’s Lesions, and Bill White took a lot of DXM and wrote the DXM FAQ, which is a very

00:48:39

popular document on the internet. And it’s a very exhaustive document. It’s very long. It’s a well-researched,

00:48:45

well-written article.

00:48:47

Somewhere after Bill stopped

00:48:49

taking DXM, I think he

00:48:51

felt like he had wondered if he had

00:48:53

damaged himself and started looking for

00:48:55

articles that would support, you know, looking

00:48:57

at the issue of neurotoxicity of DXM.

00:48:59

And he found some concerning stuff. And he published it

00:49:01

under 0.1.

00:49:03

The bad news is finally in, or something like that. 0.1

00:49:08

is the version number he gives it on the document, which suggests it’s extremely preliminary.

00:49:14

But because it was the only document available on the net for a long time about that subject,

00:49:19

many, many people took that document to be sort of a fact.

00:49:23

And so there was this huge long thread,

00:49:25

hundreds and hundreds and hundreds of discussions

00:49:28

on web boards and on Usenet and all over,

00:49:30

and in person about this issue

00:49:32

of whether DXM causes all these lesions.

00:49:35

We’re currently trying to, I read that document,

00:49:38

I don’t really know the science.

00:49:39

I can’t really evaluate whether the things

00:49:41

that he’s saying or referring to are relevant to DXM use. There’s an author who’s written a refutation of that thing. He’s talked to

00:49:50

Dr. Olney, whom it’s named after. Sounds like a lovely thing to be named after. It could be your

00:49:56

namesake. And it’s so easy to lose a document, lose a refutation,

00:50:06

even if it gets published on Erowid,

00:50:07

there isn’t any sort of natural connection

00:50:11

between a document and a refutation

00:50:14

and the response that’s been written.

00:50:16

And so, like, in the simplest form,

00:50:20

the next time when you do a search

00:50:22

on brain damage in DXM,

00:50:24

when you pull up this document,

00:50:26

let’s say we have this very simplest implementation of the grassroots peer review system.

00:50:30

The next time you go to Erowid and you do a search and you pull up, you know, you search

00:50:34

on DXM and brain damage, because you’ve heard of that, you pull up this document by Bill

00:50:38

White that says 0.1 on it.

00:50:39

And over in the corner it says, you know, it shows you that two people have rated that

00:50:44

document very

00:50:45

low and that there are three responses to it.

00:50:47

And you go and you look at the responses and one of them is an extremely detailed critique

00:50:52

of it which is rated very high, which is given a very high rating.

00:50:55

And so now you have this very quick view by which you can see that the original document

00:51:00

that you’re looking at that claims the brain damage is not trusted by Erowid, because at this point you haven’t set up

00:51:06

your own trust ratings,

00:51:07

and that the refutation of it is rated very high.

00:51:11

Did I say that right?

00:51:12

Low or high?

00:51:13

Yeah.

00:51:14

Good.

00:51:15

Does that make sense?

00:51:16

I think that maybe part of the confusion,

00:51:18

let me reword it one way,

00:51:20

is that you’re not expecting

00:51:23

that Sasha has read all thousand documents on a particular

00:51:26

topic and rated them all from one to ten so that you can see what he thinks is the most

00:51:31

important paper on that topic. You’re looking at what twenty Sashas have rated these thousand

00:51:39

documents, and what their trust ratings are, and let’s say there’s a multiplier. So Sasha has a 10 trust rating,

00:51:47

and he gives a document a 10 or a 5.

00:51:49

If he gives a paper a 10,

00:51:51

let’s say that his impact on that document is 100.

00:51:55

He’s multiplied his 10 rating by a 10 rating of the document

00:51:57

and given it a 100.

00:52:00

So he rates something a 5,

00:52:02

his 10 rating multiplies by the 5 rating of the document,

00:52:04

and that gets a 50.

00:52:07

Now, the second reviewer that only has a rating of an 8 rates that first document a 5.

00:52:15

They get a 40 added on to his 100, so you get 140.

00:52:19

And so you’re combining multiple people’s trust ratings and their ratings of documents to give, in a simple example, an overall score to the document.

00:52:28

Does that sort of explain it a little better?

00:52:31

I’m not sure that…

00:52:32

You divide it by the number of people, so you get…

00:52:35

Right, I’m not sure that particular algorithm is going to work.

00:52:38

Well, sure, I’m not saying it is. That’s just a really simple example of how that could work.
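The arithmetic just walked through (each reviewer’s trust multiplied by their rating, summed, then divided by the number of reviewers) can be written out directly. As the speakers stress, this is only one naive candidate algorithm:

```python
def document_score(reviews):
    """Combine (reviewer_trust, rating) pairs into one score by
    multiplying each rating by the reviewer's trust level and
    averaging over the number of reviewers."""
    if not reviews:
        return 0.0  # unrated documents contribute nothing
    return sum(trust * rating for trust, rating in reviews) / len(reviews)


# The example from the talk: a trust-10 reviewer rates the document a 10
# (contributing 100), and a trust-8 reviewer rates it a 5 (contributing 40).
reviews = [(10, 10), (8, 5)]
print(document_score(reviews))  # 140 / 2 = 70.0
```

Dividing by the reviewer count keeps heavily reviewed documents comparable to lightly reviewed ones, though, as acknowledged in the discussion, a production system would likely need a more careful weighting.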

00:52:42

So, as a trivial example of a system that I was really enamored of, and it’s sort of

00:52:48

a little bit worse than it used to be, there was a programming language called PHP, and

00:52:54

they were one of the first people who did the documentation for their programming language

00:52:59

where you had reader comments attached to the documentation of the programming language

00:53:03

itself.

00:53:04

And the idea with it, as I understood it, was that you could put your comment about,

00:53:09

so you read the documentation, you’re like, I don’t understand this,

00:53:12

and you weren’t supposed to ask questions there,

00:53:15

but if you figured out what it meant and you had a better way to explain it,

00:53:18

you could type, you could put in your explanation there.

00:53:21

And the next time they did a revision of that page,

00:53:24

they could go through and incorporate all of

00:53:26

the better suggestions for that document.

00:53:28

So you can imagine a document revision

00:53:30

where you’re the author

00:53:32

of this, like Bill White’s the author of this

00:53:33

DXM thing,

00:53:34

this only legions thing. The next

00:53:37

time he wants to do a revision of that, he can go and

00:53:39

look at that document and see all of the

00:53:42

reputation for that, and rewrite

00:53:44

a new document to say this is version

00:53:45

2.0 or whatever.

00:53:47

And address the commentary.

00:53:50

And even have links to the

00:53:51

commentary and say here’s what I think of this and here’s what I think

00:53:54

of that as part

00:53:55

of that document.

00:53:57

That’s truly interactive.

00:54:00

I sort of got two questions.

00:54:01

I hope you didn’t already address them while I was gone

00:54:03

and if so, just tell me you already addressed them

00:54:06

earlier.

00:54:08

The first is related to the integrity of data.

00:54:11

If there is a version 1, a version 2, a version 3,

00:54:14

are the older versions archived

00:54:17

so that people can access those,

00:54:19

or are we sort of Orwellianly rewriting history

00:54:22

and just providing the newest and the latest?

00:54:24

Yeah, basically we eliminate all old knowledge.

00:54:27

No, no, that’s absolutely not.

00:54:29

You need to be able to look at every single published revision of a document.

00:54:34

Because every, in my opinion, this isn’t my own novel opinion, this is the opinion of

00:54:40

a fairly substantial number of people. Every document and every document view

00:54:46

should be addressable by URL

00:54:47

and it should be a static URL that never, ever, ever

00:54:50

changes. So that when, if someone

00:54:52

writes a page

00:54:54

on a particular place

00:54:55

and they give a URL, that thing

00:54:58

is assigned some kind of a unique identifier

00:54:59

for which a URL is currently used.

00:55:02

That 30 years from now

00:55:04

you’re going to type that URL in

00:55:06

and you can find that goddamn document.

00:55:08

Because what’s happening now on the internet is that most people,

00:55:12

even archive managers, pay no attention to this whatsoever.

00:55:15

And every time, oh my god, we spend so much time updating links, it’s just insane.

00:55:20

You know, large sites that we’ve got a thousand links to change their systems.

00:55:24

And every link breaks,

00:55:25

and you have to go in and research

00:55:27

to try to find them.

00:55:28

They make no effort at all

00:55:29

to forward you to the new location.

00:55:31

And so this is an example

00:55:32

of how the Internet is way worse than paper.

00:55:35

Without the staticness of URLs or addressability,

00:55:39

you cannot rely,

00:55:41

and you cannot really have a long-term discussion

00:55:43

about these things.

00:55:44

The DEA recently totally changed their site organization, and they broke every single link into them.

00:55:49

And so here we had been directing people to their site.

00:55:54

Here, if you want to look up this information, you can go to the DEA.

00:55:56

You can’t.

00:55:56

You go to a 404 page and you’re done.

00:55:59

It’s horrible.
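The fix the speakers are asking for, keeping every published URL resolvable forever, can be sketched as a permanent-redirect table that maps retired paths to their new locations. The paths below are hypothetical, and a real site would do this in its web server configuration rather than in application code; this only illustrates the idea.

```python
# Hedged sketch: preserving old links after a site reorganization by
# mapping every retired path to its new location with a permanent
# (301) redirect instead of letting it fall through to a 404.
# All paths here are made up for illustration.

REDIRECTS = {
    "/old/dxm-faq.html": "/library/dxm/faq/",
    "/docs/1999/mdma.shtml": "/chemicals/mdma/",
}

def resolve(path):
    """Return an (http_status, location) pair for a requested path."""
    if path in REDIRECTS:
        # 301 Moved Permanently: the old link keeps working forever.
        return (301, REDIRECTS[path])
    # Path was never moved; serve it as-is.
    return (200, path)
```

The point of the sketch is that a reorganization only has to produce a mapping once; after that, thirty-year-old links still land on the right document instead of a 404 page.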

00:56:00

So you do have version…

00:56:03

Version, yeah, version.

00:56:09

One of the issues with that, obviously, is storage space.

00:56:12

Just that as, especially if you’re doing minor revisions,

00:56:16

the question is, so if I change a comma or I fix a spelling error,

00:56:17

is that a new version?

00:56:19

Does anybody care?

00:56:20

Probably.

00:56:21

I mean, in the long term, people do care.

00:56:25

You want to know what the original publication that was referenced in this book,

00:56:26

what exactly it said.

00:56:31

And so, like, with documents or with text, with simple plain text,

00:56:33

it’s fairly trivial to do differences between things.

00:56:37

So if you only change a comma, it only takes a few bytes to store that difference.

00:56:40

With an image or some kind of a chart or something like that,

00:56:45

it’s really, really, really hard to represent the differences between those things.

00:56:48

And there isn’t any good, really simple software solution for doing that.
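For plain text, the cheap-diff claim above is easy to demonstrate with Python's standard difflib module. The sample sentences are made up; the point is that a one-character revision costs only a few short lines to archive, and either version can be reconstructed from the stored delta.

```python
# Sketch of storing a minor revision as a difference rather than a
# full copy, as described above for plain text. Uses only the
# standard library; the sample text is invented for illustration.
import difflib

v1 = ["MDMA is a ring-substituted amphetamine\n"]
v2 = ["MDMA is a ring-substituted amphetamine.\n"]  # period added

# ndiff encodes the change compactly; for a one-character edit the
# delta is only a few short lines.
delta = list(difflib.ndiff(v1, v2))

# Either version can be rebuilt from the archived delta alone.
restored_v2 = list(difflib.restore(delta, 2))
```

Nothing comparable exists off the shelf for images and charts, which is exactly the gap the speaker points out.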

00:56:56

So the other question I would have related to that is, how does the rating system work into, I’ve got blah, blah, blah, 2.0, and Sasha’s rated it, and Jonathan’s rated it,

00:57:04

and some other people,

00:57:06

and now I come out with blah blah blah

00:57:07

2.5, and

00:57:10

none of those people have rated

00:57:11

the new document, or

00:57:14

maybe I do get a couple ratings on the new document

00:57:16

by different people.

00:57:18

That’s not something we’ve explicitly talked about,

00:57:20

but what I would imagine being the case with that

00:57:22

is that version 2.5

00:57:23

is linked very closely to version 2.0

00:57:26

and that you would probably be able to see

00:57:28

the ratings of previous versions

00:57:30

although they would have to be clearly labeled as having been

00:57:31

ratings of previous and responses

00:57:34

to previous versions because things may have

00:57:36

changed.

00:57:36

I haven’t thought about that before. That’s a really good

00:57:40

question.
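The idea just floated, that ratings stay attached to the version they were written against, and older ratings are shown but clearly labeled as belonging to a previous revision, can be sketched as a simple filter. The reviewer names and version numbers are illustrative, not from any real system.

```python
# Hedged sketch of version-scoped ratings: each rating records which
# document version it applies to, so a newer version can display
# earlier ratings separately, labeled as applying to a prior revision.
# All data here is hypothetical.

ratings = [
    {"version": "2.0", "reviewer": "Sasha", "score": 9},
    {"version": "2.0", "reviewer": "Jonathan", "score": 7},
    {"version": "2.5", "reviewer": "Fire", "score": 8},
]

def ratings_for_display(current_version):
    """Split ratings into (current-version, previous-version) lists."""
    current = [r for r in ratings if r["version"] == current_version]
    previous = [r for r in ratings if r["version"] != current_version]
    return current, previous

current, previous = ratings_for_display("2.5")
```

A real interface would also link each prior rating to the exact revision it reviewed, since, as noted above, the content may have changed in ways that make an old rating misleading.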

00:57:41

A lot of the issue with the use

00:57:44

of this with any software or any system,

00:57:46

the really nice thing about books

00:57:48

is that they have an extremely good user interface.

00:57:50

You can grab the book, pick it up,

00:57:53

you open the pages.

00:57:54

With a rating system that includes

00:57:57

lots of reviews and links and numbers

00:58:00

and things like that,

00:58:01

if the user interface sucks,

00:58:03

it just won’t get used.

00:58:04

And so a lot of the issue with complicated versioning stuff

00:58:09

ends up being how can you get that into an interface

00:58:13

that is sensible, obvious, and usable,

00:58:16

and isn’t just totally clever,

00:58:18

and you’re ending up spending all your time

00:58:20

looking at interface instead of reading a document.

00:58:23

Part of the reason that I wanted to give this talk,

00:58:26

wasn’t necessarily just to present

00:58:27

a software solution idea that we’re working on,

00:58:30

but is to present the general idea of knowledge management

00:58:34

as an integral part of the human evolution of knowledge.

00:58:38

That there’s this, going back to what Mark was talking about,

00:58:41

about the acceleration that is happening

00:58:45

around turning the world more and more into language

00:58:48

or manipulable symbols,

00:58:51

that the technology that we’re having available to us

00:58:53

is, I think, growing at a rate

00:58:58

which is much faster than our ability to understand it.

00:59:01

And part of that understanding

00:59:02

is an issue of getting systems of feedback,

00:59:06

so that ideas, memes, can develop and evolve

00:59:12

at a rate appropriate to the technology,

00:59:18

to the guns and science that we have,

00:59:20

and to the psychoactive drugs, which are sort of our topic.

00:59:23

There are more and more psychoactive drugs available.

00:59:26

There’s, you know, every day, you know,

00:59:28

there’s a pharmaceutical company

00:59:30

developing as quickly as possible things to sell us

00:59:32

that will change how we think and how we feel.

00:59:34

And there are things that we go to a party

00:59:36

and we hear some new substance.

00:59:39

That is happening at a rate which is beyond our ability

00:59:43

to really have a live sense

00:59:45

that we can track the information and trust the information that we get about those things.

00:59:50

Thank you.

01:00:02

Well, there certainly were some thought-provoking ideas in that presentation.

01:00:07

And I suspect that some of you probably have ideas of your own about this topic.

01:00:13

In order to keep this conversation going, I’ll soon be opening up a wiki section of our Psychedelic Salon

01:00:19

where you can go and discuss each of these podcasts online in about as user-friendly a format as I know that’s available right now.

01:00:28

So hold on to those thoughts, or better yet, make a few notes so you’ll be ready to add your ideas to our WikiChill space later this summer.

01:00:37

And for those of you who will be at the Burning Man Festival this year,

01:00:41

Earth and Fire will also be participating in a panel there

01:00:45

that John Hanna is moderating as part of the Palenque Norte lectures.

01:00:50

It will be located in Entheon Village this year,

01:00:54

and John’s panel will take place around 1.30 on Wednesday afternoon.

01:00:58

And the theme of this panel, by the way,

01:01:00

will be Harm Reduction to Bolster Hope and Banish Fear.

01:01:04

The Future Favors the Prepared.

01:01:07

One thing that I’d like to add to the talk you just heard, as a sort of a historical note,

01:01:15

concerns the MDMA article by Ricaurte that Earth referred to. You all remember Ricaurte, don’t

01:01:23

you? He’s the Bush junta’s fake scientist.

01:01:26

At least that’s what I call that lying bastard. A few months after this talk was given, there

01:01:32

was a major scandal in the scientific community regarding Ricaurte’s article. In fact, the

01:01:38

uber-peer-reviewed journal Science even had to retract that article when Ricaurte’s fraud was exposed by a scientific journal in the UK.

01:01:48

So, way to go, you Brits.

01:01:50

You know, somebody’s got to keep our scientists honest over here.

01:01:54

Apparently, none of Science magazine’s peer reviewers noticed the fact that if Ricaurte’s data was correct,

01:02:02

then there would have been dozens of MDMA deaths in London alone every weekend.

01:02:07

Yeah, that lousy little bastard Ricaurte killed those monkeys with speed, and then he claimed

01:02:11

he’d been testing them with MDMA.

01:02:14

The bottom line here, I think, is that the information about MDMA on the Arrowwood site

01:02:19

is far more accurate than the information you’re going to find in mainstream journals

01:02:24

like Science.

01:02:26

And what, you ask, has happened to the disgraced Ricaurte?

01:02:29

Well, the doctor who doctored the results of his research

01:02:33

to fit what the screwheads in Washington wanted him to find is still at it.

01:02:38

You know, you’d think he’d be pulling the feathers out of the tar he was covered with

01:02:42

when he was run out of town, but not so, bucko.

01:02:46

The Bush crime family is still giving him huge grants, propping him up, hoping to make more bogus science about MDMA.

01:02:55

So be warned the next time you hear MTV or Oprah or some other mainstream shill try to tell you that using MDMA will burn holes in your brain.

01:03:05

They’re lying and they know it.

01:03:07

They know they’re lying.

01:03:09

They just hope that you’ll be too lazy to search out the truth about these matters on your own.

01:03:14

Which is all the more reason to begin each of your inquiries about psychoactive substances at Erowid.org.

01:03:22

And if you can find a way to help the good folks at Erowid

01:03:25

to get this information out,

01:03:27

well, that would be most appreciated

01:03:29

by the entire psychedelic community, I’m sure.

01:03:32

Last year, over 2.5 million people

01:03:35

used Erowid as their trusted resource for this information.

01:03:39

And that entire project is supported by only about 1,300 members.

01:03:43

So even if you can only afford

01:03:45

to send them 25 bucks a year, you know, it’ll be greatly appreciated, I’m sure.

01:03:50

Well, I guess I’d better let you go for now. It’s good to be with you here again

01:03:56

in the Psychedelic Salon, and I hope to see you back here again next week. So

01:04:01

thanks for stopping by. I really, really appreciate it. And thanks again to

01:04:05

Earth and Fire and the rest of the team at Erowid.org for everything they’re doing

01:04:11

to keep this information flowing freely. And to John Hanna and Kevin Whitesides, hey guys,

01:04:18

thanks for making it possible to share this talk with our friends here in the Psychedelic Salon. And Chateau Hayuk, thanks again for the music.

01:04:27

For now, this is Lorenzo signing off from Cyberdelic Space.

01:04:32

Be well, my friends. Thank you.