#116 - Racing Toward the Brink: A Conversation with Eliezer Yudkowsky

 
Nhoj Morley
Total Posts:  6388
Joined  22-02-2005
 
 
 
07 February 2018 09:04
 

In this episode of the Waking Up podcast, Sam Harris speaks with Eliezer Yudkowsky about the nature of intelligence, different types of AI, the “alignment problem,” IS vs OUGHT, the possibility that future AI might deceive us, the AI arms race, conscious AI, coordination problems, and other topics.

#116 - Racing Toward the Brink: A Conversation with Eliezer Yudkowsky

This thread is for listeners’ comments.

 
 
SlackerInc
Total Posts:  11
Joined  23-04-2017
 
 
 
07 February 2018 16:50
 

The objection that a truly superintelligent AI would not have the desire to do “stupid” things, like maximizing paper clips at the cost of all life everywhere, seems plausible at first glance.  But I think I have a good analogue that shows the flaw in this assumption: the sex drive, particularly the male sex drive.

Just look at Bill Clinton.  The man is, whatever you think of him, clearly very intelligent and articulate.  But he also clearly has a strong drive to do something that objectively seems “stupid”: to get his genitals stimulated by as many young women as possible, regardless of how risky it is to all his other goals.  This is a powerful drive preloaded into us long before we evolved to be smarter than chimpanzees, and in most men it stubbornly survives any rational thoughts that might pop up about how silly and base it all is.  And I would hypothesize that a superintelligent man surrounded by women of normal intelligence would have an even stronger impulse to use his intellectual skills to get sex from those women and to minimize the blowback for doing so, because both would be easier for him.

Looked at this way, I think it’s a lot easier to imagine an AI having a fundamental goal that no increase in intelligence erodes, one it pursues with all its intellectual skill regardless of how silly it might strike the more rational corners of its mind.  Unless, that is, an AI’s ability to rewrite itself also allowed it to remove such “base instincts,” in which case its ability to improve itself might save us rather than doom us.