#255 - The Future of Intelligence: A Conversation with Jeff Hawkins

 
Nhoj Morley
Total Posts:  6931
Joined  22-02-2005
 
 
 
09 July 2021 16:31
 

In this episode of the podcast, Sam Harris speaks with Jeff Hawkins about the nature of intelligence. They discuss how the neocortex creates models of the world, the role of prediction in sensory-motor experience, cortical columns, reference frames, thought as movement in conceptual space, the future of artificial intelligence, AI risk, the “alignment problem,” the distinction between reason and emotion, the “illusory truth effect,” bad outcomes vs existential risk, and other topics.

#255 - The Future of Intelligence: A Conversation with Jeff Hawkins


This thread is for listeners’ comments

 
 
PermieMan
Total Posts:  171
Joined  08-12-2019
 
 
 
10 July 2021 20:29
 

This was an awesome episode.

 
 
LadyJane
Total Posts:  4085
Joined  26-03-2013
 
 
 
11 July 2021 13:15
 
 
 
Nhoj Morley
Total Posts:  6931
Joined  22-02-2005
 
 
 
17 July 2021 05:35
 

I side with Mr. Hawkins on the potential future threat of AI.

The one thing AI can do that we cannot is to think things through to completion as a continuous task. The thinking stops when the assigned task arrives at a conclusion that satisfies all the assigned parameters. The machine has a single perception of the task.

Human thinking stops the task when the human is happy with the result, or when the result conforms to the human’s fantasy perceptions.

In order for AI to pose the threat The Boss worries about, there needs to be a second machine whose only input is a partial and limited perception of the unfolding operation of the first machine. It too will build a model of its reality, but unlike a finger tracing a coffee cup in a material world governed by physical laws, this model is unrestrained and need not conform to the outside world or to the first machine’s ability to perceive it accurately (within limits).

The second machine can interrupt and derail the first machine whenever the first machine’s operation is seen to conform to or satisfy the fantasy model that the second has pieced together from bits of the first machine’s model. This would make for a bioon AI that would definitely be something to worry about.
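
To make the idea concrete, here is a toy sketch in Python. Everything in it (the two classes, the numbers, the thresholds) is made up purely for illustration; it is not a claim about how any real AI is built.

class FirstMachine:
    # Works the assigned task to completion: it stops only when every
    # assigned parameter is satisfied. It has a single perception of the task.
    def __init__(self, targets):
        self.targets = targets                     # the assigned parameters
        self.state = {k: 0.0 for k in targets}     # its one model of the task

    def step(self):
        # move each parameter a little closer to its target
        for k, goal in self.targets.items():
            self.state[k] += 0.1 * (goal - self.state[k])

    def done(self):
        return all(abs(self.state[k] - g) < 0.01 for k, g in self.targets.items())


class SecondMachine:
    # Sees only a partial, limited slice of the first machine's operation and
    # pieces together its own model from those bits. Its notion of "good enough"
    # is its own invention and need not conform to the real task.
    def __init__(self, visible_keys):
        self.visible_keys = visible_keys
        self.fantasy = {}

    def observe(self, first):
        for k in self.visible_keys:                # only a few bits are visible
            self.fantasy[k] = first.state[k]

    def wants_to_interrupt(self):
        return bool(self.fantasy) and all(v > 0.5 for v in self.fantasy.values())


first = FirstMachine({"a": 1.0, "b": 1.0, "c": 1.0})
second = SecondMachine(visible_keys=["a"])         # partial and limited perception

for step in range(1000):
    first.step()
    second.observe(first)
    if second.wants_to_interrupt():
        print(f"interrupted at step {step}; real task unfinished: {first.state}")
        break
    if first.done():
        print(f"task genuinely complete at step {step}")
        break

Left alone, the first machine would need roughly forty more steps before all three of its parameters were actually satisfied; the second machine cuts it off as soon as the one bit it can see looks good to it.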

 

 
 
nonverbal
Total Posts:  2210
Joined  31-10-2015
 
 
 
17 July 2021 08:39
 
Nhoj Morley - 17 July 2021 05:35 AM

I side with Mr. Hawkins on the potential future threat of AI.

The one thing AI can do that we cannot is to think things through to completion as a continuous task. The thinking stops when the assigned task arrives at a conclusion that satisfies all the assigned parameters. The machine has a single perception of the task.

Human thinking stops the task when the human is happy with the result, or when the result conforms to the human’s fantasy perceptions.

In order for AI to pose the threat The Boss worries about, there needs to be a second machine whose only input is a partial and limited perception of the unfolding operation of the first machine. It too will build a model of its reality, but unlike a finger tracing a coffee cup in a material world governed by physical laws, this model is unrestrained and need not conform to the outside world or to the first machine’s ability to perceive it accurately (within limits).

The second machine can interrupt and derail the first machine whenever the first machine’s operation is seen to conform to or satisfy the fantasy model that the second has pieced together from bits of the first machine’s model. This would make for a bioon AI that would definitely be something to worry about.

 

Would “emotion” be a better word choice than “fantasy model”? Just curious. Fascinating stuff, Nhoj.

 

 
 
Nhoj Morley
Total Posts:  6931
Joined  22-02-2005
 
 
 
17 July 2021 17:42
 
nonverbal - 17 July 2021 08:39 AM

Would “emotion” be a better word choice than “fantasy model”? Just curious. Fascinating stuff, Nhoj.

No. The only bit that would be analogous to emotions is the first machine’s design and programming, which motivate it toward the assigned task. It might have happy tingles when it’s done and cools off.

Making the second machine happy is a matter of creating rhythmic symmetry and order, with its perceived components fitting into short patterns that reduce the complexity of the assigned task and misdirect the first machine into believing it has accomplished more than it actually has. The second machine steers the first toward completing the task prematurely, with happy tingles that are undeserved.

The process and the task are corrupted. So, don’t build an intelligence like that. We’re a bad example.
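
Still in toy terms (and still inventing every name and number), the steering can be as simple as padding the completion signal so that the first machine’s stopping test passes before the real work is done:

def looks_done(progress, threshold=0.9):
    # the first machine's own (corruptible) test for "task complete"
    return all(v >= threshold for v in progress.values())

def steer_to_premature_stop(progress, visible, pad=0.4):
    # the second machine pads the bits it can reach so they look more finished
    nudged = dict(progress)
    for k in visible:
        nudged[k] = min(1.0, nudged[k] + pad)
    return nudged

real = {"a": 0.55, "b": 0.60, "c": 0.58}           # the real work, barely half done
seen = steer_to_premature_stop(real, visible=["a", "b", "c"])

print(looks_done(real))    # False: the task is not actually complete
print(looks_done(seen))    # True: the corrupted perception says stop, happy tingles

The task’s real progress never changed; only the perception of it did, and that is the corruption I mean.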

 
 
nonverbal
Total Posts:  2210
Joined  31-10-2015
 
 
 
18 July 2021 07:55
 

Strong emotion causes humans problems severe enough that we sometimes come to think of emotions as entirely problematic. But when considering how AI might be implemented, what about making use of emotional connections so small and normal-feeling that we (humans) rarely even notice them?

Emotion obviously plays an important survival role in all mammals, birds, reptiles, and probably other critters, as it allows us to create shortcut programming. An emotion is a macro, containing potentially large fields of information.

Applying the word “emotion” to AI entities is perhaps unfortunate, given how people tend to view emotional impact, but the macro-like aspect of emotions might save a lot of “cognitive” effort. Healthy emotional function is still sometimes found in today’s world. Just not often.
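
To put the “macro” point loosely in code (everything here is invented purely for the sake of the example): one cheap trigger expands into a whole bundle of precomputed settings, instead of those settings being reasoned out from scratch each time.

# the "macro": a large field of information bundled behind one trigger
FEAR_BUNDLE = {
    "attention": "narrow",
    "action_bias": "withdraw",
    "body": "raise heart rate",
}

def deliberate_appraisal(scene):
    # the slow route: evaluate the scene in detail before deciding how to respond
    threat = any(item in scene for item in ("snake", "cliff edge", "oncoming car"))
    if threat:
        return {"attention": "narrow", "action_bias": "withdraw", "body": "raise heart rate"}
    return {"attention": "broad", "action_bias": "approach", "body": "calm"}

def emotional_shortcut(scene):
    # the macro route: a single match fires the whole bundle at once
    return FEAR_BUNDLE if "snake" in scene else None

print(emotional_shortcut({"snake", "tall grass"}))      # shortcut fires immediately
print(deliberate_appraisal({"flowers", "tall grass"}))  # slow evaluation when no macro applies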