In this episode of the Waking Up podcast, Sam Harris speaks with Eliezer Yudkowsky about the nature of intelligence, different types of AI, the “alignment problem,” IS vs OUGHT, the possibility that future AI might deceive us, the AI arms race, conscious AI, coordination problems, and other topics.
This thread is for listeners’ comments.
The objection that a truly superintelligent AI would not have the desire to do “stupid” things, like maximize paper clips at the cost of all life everywhere, seems plausible at first glance. But I think I have a good analogue to show the flaw in this assumption: the sex drive, particularly the male sex drive.
Just look at Bill Clinton. The man is, whatever you think of him, clearly very intelligent and articulate. But he also clearly has a strong drive to do something that objectively seems “stupid”: to get his genitals stimulated by as many young women as possible, regardless of how risky it is to all his other goals. This is a powerful drive preloaded in us long before we evolved to be smarter than chimpanzees, and in most men it stubbornly survives any rational thoughts that might pop up about how silly and base it all is. And I would hypothesize that a superintelligent man surrounded by women of normal intelligence would feel an even stronger impulse to use his intellectual skills to get sex from women and to minimize the blowback he gets for doing so, because both would be easier for him.
Looked at this way, I think it’s a lot easier to imagine an AI having a fundamental goal that no increase in intelligence erodes, one the AI pursues with all its intellectual skills regardless of how silly it might strike the more rational corners of its mind. Unless, that is, an AI’s ability to rewrite itself also allowed it to remove such “base instincts”, in which case its ability to improve itself might save us rather than doom us.