In this episode of the Making Sense podcast, Sam Harris introduces John Brockman’s new anthology, “Possible Minds: 25 Ways of Looking at AI,” in conversation with three of its authors: George Dyson, Alison Gopnik, and Stuart Russell.
George Dyson is a historian of technology. He is also the author of Darwin Among the Machines and Turing’s Cathedral.
Alison Gopnik is a developmental psychologist at UC Berkeley and a leader in the field of children’s learning and development. Her books include The Philosophical Baby.
Stuart Russell is a Professor of Computer Science and Engineering at UC Berkeley. He is the author (with Peter Norvig) of Artificial Intelligence: A Modern Approach, the most widely used textbook on AI.
This thread is for listeners’ comments.
I have some doubts about the position, held by Sam and others, that once we get human-level AGI, we'll essentially immediately get trans-human-level intelligence.
Let’s take “human level” pretty literally, even though what we get first will probably(?) be something more like “idiot savant” level intelligence. It takes (natural) human-level intelligences a lot of time and effort to do things like figuring out how to access and make sense of new data sources, integrating disparate software systems, and working out how to provision the hardware they run on. At first, I figure we’re looking at ones and twos of prototype systems. Really, how quickly could a couple of human geniuses make a huge disruptive advance?
What was the reference, toward the end of the third interview, to “50 lines of code that could destroy the EU or democracy” all about? What did I miss?