#84 Landscapes of Mind: A Conversation with Kevin Kelly

 
Nhoj Morley
Total Posts:  5635
Joined  22-02-2005
 
 
 
01 July 2017 00:35
 

In this episode of the Waking Up podcast, Sam Harris speaks with Kevin Kelly about why it’s so hard to predict future technology, the nature of intelligence, the “singularity,” artificial consciousness, and other topics.

#84 Landscapes of Mind: A Conversation with Kevin Kelly

This thread is for listeners’ comments.

 
Saint Ralph
Total Posts:  14
Joined  16-02-2017
 
 
 
01 July 2017 10:30
 

This was a very good discussion, one that I will probably re-listen to, but . . .

You’re both fatally naive in your assumption of a homogeneous “we” that will decide what to do with AI, what direction to go, and what precautions to take.  You realize, don’t you, that when it comes to programming principles into AI entities, there will be those who will insist that they be programmed only to serve Allah and others who will insist that Puritanical “American” values dominate all aspects of an AI’s goal-seeking behavior?

I chuckled when Kevin suggested that an AI might decide to put your children out with the trash based on your persistent neglect of them, but that made me realize that we cannot use our actions or our words to teach AIs what we want.  What we want and what we should want—what is best for us—are too often two completely different things.  Practically all of us are mired in addiction, hypocrisy and delusion to one degree or another, and even that wouldn’t be insurmountable if we weren’t, each of us, mired in different addictions, hypocrisies and delusions.  What your discussion illustrated is how difficult future decisions concerning AI will be for rational educated people with the best of intentions.  Can you imagine the Trump administration tackling these questions?  All of the “decisions” made about AI will end up being made by market forces because the Market is the only “we” there is that is capable of deciding anything, intelligently or otherwise.  “We,” as a people or a civilization or a species cannot decide what we want, collectively or individually.

Kevin is way too optimistic about retraining displaced workers to do other jobs.  There is a hierarchy to traditional manufacturing where you have a few people inventing and developing, then a few more people managing and directing, and then the “boots on the ground” workforce actually making the stuff.  In a traditional manufacturing environment the workforce far outnumbers everyone else.  Trying to retrain everyone displaced by a robot to be an inventor or a manager or an entrepreneur isn’t going to work.  It’s already failing.  I think a lot of Trump’s support came from people trying to make ends meet by working two and three McJobs in lieu of their real jobs that were moved offshore somewhere.  That monolithic, sagacious “we” that I’ve never seen any evidence of in the United States is going to have to come up with an entirely different system.  Jokes that aren’t funny to us might be all we get.

[ Edited: 01 July 2017 10:35 by Saint Ralph]
 
 
ZZYZX
Total Posts:  19
Joined  04-06-2017
 
 
 
01 July 2017 17:19
 

This reminded me of watching REAL TIME live where I couldn’t fast forward past the talking panel to get to Bill Maher’s parts.
With the podcast, even though it’s recorded, I often must suffer through the boring guests so I don’t miss what Sam has to say. Oh well, it’s a small price to pay.

Sam was great in pointing out how the modern human has been defeating the will of his genes to simply multiply, yet that multiplication is the problem that underlies many of the other problems we face. Both speakers in this podcast are way out in left field when talking about a guaranteed global income for all.

Darwinism may be down but it’s not out. The herd is going to get thinned one of these days.

 
DuncanMcC
Total Posts:  2
Joined  01-07-2017
 
 
 
01 July 2017 18:39
 

You’ve probably listened to a previous podcast Sam did with Yuval Noah Harari, author of Sapiens: A Brief History of Humankind.  It’s an awesome book.  Did you know he’s put out another book, Homo Deus: A Brief History of Tomorrow?
Similar material to this podcast.  Another awesome read by Harari (I haven’t finished it yet; I’m about 3/4 of the way through).
It also covers where technology is heading in the future, where intelligence is heading in the future (AI), and whether there is, or even needs to be, a place for consciousness.

A fascinating read.

 
CharlieSC
Total Posts:  1
Joined  01-07-2017
 
 
 
01 July 2017 19:33
 

This was just reported in the Washington Post:
“By 2030, Dubai plans for robots to make up 25 percent of its police force.”

And what about military use of AI, not just by the U.S., but by everyone?

And talk about job loss, what about retail?

The dangers of AI are not in the future, but now.

 
Tempter Yortman
Total Posts:  2
Joined  25-06-2017
 
 
 
02 July 2017 14:30
 

An example of the “most powerful autistic system the universe has ever devised” might have been prophesied with HAL 9000 in 1968. Sam seems to be suggesting that we don’t have the capacity to quantify human qualities sufficiently to prevent the worst from coming to the worst, as it did in the fictional 2001: A Space Odyssey. Discretely programming all the parts of the human psyche will not create a “whole” human or person. The miracle of consciousness (until it is explained satisfactorily I dare call it a miracle) will be the Achilles’ heel of AI. We do not understand ourselves well enough to program AI into the realm of personhood, which would be the only thing that could conceivably safeguard our species against the master we might be creating.

 
Saint Ralph
Total Posts:  14
Joined  16-02-2017
 
 
 
02 July 2017 15:09
 
Tempter Yortman - 02 July 2017 02:30 PM

An example of the “most powerful autistic system the universe has ever devised” might have been prophesied with HAL 9000 in 1968. Sam seems to be suggesting that we don’t have the capacity to quantify human qualities sufficiently to prevent the worst from coming to the worst, as it did in the fictional 2001: A Space Odyssey. Discretely programming all the parts of the human psyche will not create a “whole” human or person. The miracle of consciousness (until it is explained satisfactorily I dare call it a miracle) will be the Achilles’ heel of AI. We do not understand ourselves well enough to program AI into the realm of personhood, which would be the only thing that could conceivably safeguard our species against the master we might be creating.

That’s a great point.  We could end up making AI in our image, like when we made God in our image back when what we aspired to was unbridled Bronze Age despotism.  We’ve been tap dancing around that ever since.  We could really regret making AI in our own image.  That might very well be the most dangerous damn thing we could do.

 
 
DuncanMcC
Total Posts:  2
Joined  01-07-2017
 
 
 
02 July 2017 15:43
 

AI requires “I”.  Does it require consciousness at all?

 
Twissel
Total Posts:  2332
Joined  19-01-2015
 
 
 
03 July 2017 00:09
 

I like Kevin Kelly a lot and am glad Sam Harris had him on.

That being said, I wish Harris had actually let him speak! The vast majority of the podcast was Sam explaining his position on A.I. without giving Kevin the chance to say much at all beyond “I mostly agree”: he definitely knows more about the subject than Sam but didn’t really get the time to speak freely.

A couple of points about Sam’s worries about the “alignment problem”: in a way, Sam makes the same mistake as Fundamentalists in that he demands the right to refuse progress, even if it is demonstrably beneficial. Like Kevin said (in a fashion): if a program has a record of helping us, we can trust it to probably not harm us in the future.
The biggest gain for humans will be A.I. assistants who will do for us in day-to-day life what chess programs do for human players in hybrid competitions: warn us when we are about to make mistakes. Programs that grew up with us will know us better than we know ourselves, and it would be foolish to ignore their advice, especially in emotional circumstances.

The idea that we first have to program in some values fails on two levels. First, we don’t know our own values in a way that could be stringently translated into code, which shows that morals are highly subjective and fluid. We can’t blame machines for not acting morally if we can’t express what that means.
Second, we want the machines to make us more moral than we currently are! We want them to help us lead better lives for ourselves and others. Putting them in a straitjacket of outdated morals will hinder their ability to assist our moral progress.


It is clear that early adopters of human-A.I. hybrid technology will vastly outperform their un-augmented compatriots. The persons who can afford access to the best info to train their A.I. will do better than everyone else, but even at only Wikipedia-level, A.I.s will help us make much more of our lives than we currently can.


Concerning A.I. and consciousness: in my opinion, this is an almost entirely irrelevant side issue. If we live with non-conscious machines that our Theory of Mind brain can’t call out as artificial, then they serve just as well as other humans to train our social skills and interactions: that they are not “real” doesn’t matter to our minds, and it certainly doesn’t to theirs.
If we are concerned with the morals of (partially) conscious beings, we should first stop eating most of the things we currently do before we start to worry about machines: we can always make them happy by uploading the right code.

 
 
Selfish Jeans
Total Posts:  4
Joined  26-05-2017
 
 
 
03 July 2017 10:47
 

As with most of these podcasts, I enjoyed it very much.  I did not see the curveball coming at the end.  Apparently this guy believes in God.  I am completely confused, as this does not conform with Kelly’s seemingly evidence-based approach to life.  For instance, with respect to his view on universal minimum income, he thinks we should set policy based on data acquired from previous experiences.  The ending was definitely a cliffhanger.  He most definitely needs to be invited back.

Also, it seems that the end of the podcast was cut off.  Right at the end Kelly unleashes a litany of flattering comments, without any response from Sam.  What’s up with that?

 
mjl123
Total Posts:  1
Joined  07-07-2017
 
 
 
07 July 2017 12:38
 

Interesting discussion, but Kevin Kelly appears to fall victim to the globalist, and perhaps somewhat elite, view that displaced workers will be retrained. It raises the question: retrained for what? AI will eventually be capable of doing ALL jobs. Which ones will be left? Fairly rudimentary AI is already conducting customer service, driving cars, performing surgery, summarizing data, writing reports, making complex decisions, and winning Jeopardy.
Regarding alignment, I agree with another poster.  I believe that there is, unfortunately, no cash value in ethics, and that if ethical behavior is programmed into a system it will be a marketing decision, not a decision made by society or a benevolent few.
For a cognitive view of intelligence, I suggest Robert Sternberg’s Triarchic Theory of Intelligence. He might be a good guest on the topic.

 
Nhoj Morley
Total Posts:  5635
Joined  22-02-2005
 
 
 
07 July 2017 21:09
 

How well one can imagine future technology depends on how well current technology is understood. Such discussions should include regular reality checks from factory authorized service personnel.

Any electronic technician who has worked in the repair field will tell you that if there were a way to make a frustrating and uncooperative gizmo feel pain, we would have found it and used it long ago. If there is some kind of life experience in electronics, then there always has been, and in some manner that we can never know. Bringing a program to life sounds like bringing a script to life. It needs actors. I’m not sure solenoids and motors add up to an actor. They are extensions of human actors.

It is hard to imagine a computer that cannot be shut off. Even a computer with an enclosed and self-contained nuclear power source is still vulnerable to an imposed shutdown. Block its heat exhaust until its thermal protection trips, then in with the screwdrivers.

A computer can be easily programmed to think like a human. Give it a set of parameters that define ‘happiness’ and instruct it to cease all calculating and accept any calculation in progress whenever a spike in happiness value occurs. This will fill the hard drive with malarkey until it is indistinguishable from a human mind.
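The tongue-in-cheek recipe above could be sketched in a few lines of Python. Everything here is a made-up toy, not a real algorithm: the happiness() scorer, the spike threshold, and the "calculation in progress" are all hypothetical stand-ins for the joke.

```python
import random

def happiness(candidate):
    """Hypothetical scorer: how 'happy' a candidate answer feels (0 to 1)."""
    return random.random()

def think_like_a_human(question, max_steps=1000, spike_threshold=0.95):
    """Toy sketch of the satirical recipe: keep refining candidate answers,
    but the moment a 'happiness' spike occurs, cease all calculating and
    accept whatever answer is in progress, correct or not."""
    candidate = None
    for _ in range(max_steps):
        candidate = (question, random.random())  # the calculation in progress
        if happiness(candidate) > spike_threshold:
            return candidate  # happiness spiked: accept it mid-calculation
    return candidate  # never got happy enough; return the last guess anyway
```

Run long enough, this does exactly what the post promises: fills memory with confidently accepted malarkey.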

 
dust
Total Posts:  3
Joined  06-01-2017
 
 
 
09 July 2017 05:33
 

A small point, but I just wanted to put this out there: Kelly claims that the major factor in free digital downloads of music etc. at the beginning was not the new ease of stealing music (being able to cheat music companies of profit) but the ability to ‘remix’, or to pick and mix one’s favourite tracks; that the economic freedom was just a bonus.

Without much doubt in my mind, this was not the case for most people. As a teen near the beginning of that era, and an avid Napster user (and then whatever came next: Limewire, Soulseek, torrents, etc.), the initial three major factors for me were: (1) being able to get anything I wanted for free (not just cheap bootlegs, but completely free); (2) the new ability to discover music without economic or corporate constraint (for example, radio stations and MTV mostly played record-company-approved, non-explicit, mainstream music, but we wanted uncensored rock and hip-hop, independent and mainstream artists, etc., and we didn’t want to pay for all of it); and (3) the fact that it all fit on a little hard drive. I discovered previously unimaginable amounts of music in those days, and though I did buy a lot of CDs too, it was the ability to discover and hoard that drew me and everyone I knew. The ‘remixing’ was simply a natural progression. It was not a new phenomenon, either, but the same kind of thing people did with mixtapes in the years before. In the 1970s–90s, for example, if people had been able to steal thousands upon thousands of free tapes or CDs with almost no risk of being caught and arrested, and store them in a tiny space, many would have. Especially the younger and less wealthy.

My point being: it seemed that Kelly understated just how much we like getting things for free.

 
slesslytall
Total Posts:  1
Joined  12-07-2017
 
 
 
12 July 2017 00:53
 

Perhaps I am mistaken, but I seem to recall that in this (or a similarly themed) podcast, the concept of the brain as a post hoc rationalisation mechanism was considered. Basically, this suggests that many (if not all) choices we make originate deep within parts of the brain not associated with conscious thought, and only after they are formed do they become considered by a consciousness mechanism.

What is the scientific/neurological evidence for such a theory, or is it just that—merely a theory?

Any links to reputable (or even scientific) articles on this would be greatly appreciated. Thanks!