What could be wrong with a strictly information processing view of consciousness?

 
Giulio
Total Posts:  271
Joined  26-10-2016
 
 
 
07 October 2017 01:41
 
Antisocialdarwinist - 06 October 2017 08:41 PM

That’s an interesting example. My first thought is that consciousness as I’m defining it includes not just the model of reality, but the model of “self” that inhabits or observes it. It’s the model of “self” that brings subjectivity into the model, which is both a blessing and a curse. A curse because it renders us incapable of “seeing” reality objectively; a blessing because it allows us to imagine a version of reality that is analogous—but not identical—to reality itself.

To me, this text is just saying that consciousness involves consciousness of an external world together with consciousness of self, and that consciousness involves subjectivity. I am struggling to get more out of this than that. What am I missing?

We can imagine more than what “is.” We can imagine what’s possible.

If I remove the word "imagine", or at least any form of consciousness that is necessarily implied by that term, I'd respond that machine learning algorithms can do this, e.g. algorithms that perform inference and learning in Bayesian networks.
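To make this concrete, here is a toy sketch (purely illustrative, not any specific system) of the kind of Bayesian updating such algorithms perform: maintaining a distribution over hypotheses about what might be the case, and revising it as evidence arrives. The hypothesis names and priors are invented for the example.

```python
# Toy Bayesian update: a distribution over hypotheses about a hidden
# coin's bias, revised as flips are observed.
def bayes_update(prior, likelihood, observation):
    """Return the posterior P(h | observation) over hypotheses h."""
    unnorm = {h: p * likelihood(h, observation) for h, p in prior.items()}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Three hypotheses about the probability that the coin lands heads.
prior = {"fair": 1 / 3, "biased_heads": 1 / 3, "biased_tails": 1 / 3}
p_heads = {"fair": 0.5, "biased_heads": 0.9, "biased_tails": 0.1}

def likelihood(h, obs):
    return p_heads[h] if obs == "H" else 1 - p_heads[h]

posterior = prior
for obs in "HHTHH":  # observed flips
    posterior = bayes_update(posterior, likelihood, obs)
# After four heads and one tail, "biased_heads" dominates the posterior.
```

The point is only that the algorithm entertains possibilities beyond what has actually been observed, without anything we would ordinarily call imagination.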

It does seem you are begging the question here. Would you agree? I.e., to define consciousness, you keep coming back to terms that are themselves synonymous with consciousness, or conscious experience, like 'subjectivity' or 'imagine'.

I have warmed to your 'definition' of consciousness as a process that constructs a model of (a) an external reality, (b) ourselves, (c) ourselves in relation to that external reality, and (d) ourselves as observers of this model.

My claim is that modern-day algorithms could be said to achieve (a) to (c), but I am trying to get my head around (d), i.e. to define it in a way that isn't tantamount to defining it in terms of consciousness.

Based on my understanding of the poker playing machine you described, there are two models. The first is “models of its opponents based on their current and historical actions.” That sounds to me like an objective record of past events with no room for subjective speculation on the machine’s part. When Player A is faced with scenario X, he most often does Z. The proper response by the machine to Z is z.

Not exactly. Well, I don't know what you mean by the word "subjective". Again, are you begging the question? The hypothetical algorithm is conjecturing stylised models of the strategies the other players are using, updating their structure and probabilities based on how they play, perhaps performing experiments to tease out aspects of their playing styles (so the algorithm will be trading off exploiting the current hand against exploring the behaviours of the others).

The second model involves “estimating how its opponents are seeing itself as player,” or, “a model of how other agents are creating a model of itself.” I’m having a hard time getting my head around that, but it almost sounds like the machine has a model of self? I’m not quite sure that’s what you mean, but if so, then it might qualify as consciousness.

In the same way the algorithm is forming and updating stylised models of other players, it assumes the other players are doing the same with itself.

So it is modelling how other players will see it and model it. This requires it to maintain a (stylised) model of itself from the perspective of others, from which it can make inferences like: given what they have seen so far, they may think I am a conservative player, so now may be a good time to bluff.

So in making decisions, the algorithm will have to take into account how its actions may affect the model of itself other players are maintaining and updating.
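As a minimal sketch of what this second-order model might look like (all names and thresholds here are illustrative, not a real poker engine): the algorithm tracks which of its own actions the opponents have observed, infers how aggressive they likely believe it to be, and uses that inferred self-image to decide whether a bluff is credible.

```python
# Level-2 opponent modelling: a model of the opponents' model of ME,
# built from the actions of mine that they have observed.
class SelfImageModel:
    """Tracks how aggressive the opponents likely believe I am."""

    def __init__(self):
        self.my_bets_seen = 0
        self.my_folds_seen = 0

    def observe_my_action(self, action):
        if action == "bet":
            self.my_bets_seen += 1
        elif action == "fold":
            self.my_folds_seen += 1

    def perceived_aggression(self):
        total = self.my_bets_seen + self.my_folds_seen
        return 0.5 if total == 0 else self.my_bets_seen / total

def should_bluff(self_image, threshold=0.3):
    # If opponents have mostly seen me fold, they likely model me as
    # conservative, so a bluff is more credible now.
    return self_image.perceived_aggression() < threshold

model = SelfImageModel()
for a in ["fold", "fold", "bet", "fold"]:
    model.observe_my_action(a)
# perceived_aggression() == 0.25, so a bluff looks credible here.
```

Note the key feature: the decision depends not on the cards alone, but on a model of how the algorithm itself appears to others.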

So, sure, it lacks subjectivity. But what I am looking for is a definition of the subjective aspect of the model that doesn't just come down to using terms like 'subjective'.

What is the fundamental qualitative difference between your model of self and the model you maintain of how others may see you?  And can you express this without using words that themselves entail consciousness?

If you don’t mean it has a model of self, then it seems to me that this second model would be “baked into” the first model and therefore superfluous. “A model of how other agents are creating a model of itself” sounds like a model of how other agents arrive at their strategy, without adding any more information about the strategy itself than is already known from the first model. If the player never folds when he has four of a kind, that’s all that matters. What difference does it make why he never folds on four of a kind?

The second model can be seen as an augmentation or enrichment of the first one. Like a Russian doll. Why? So it can make better predictions and so it can influence (mislead) other players to its benefit.

 

 
Speakpigeon
Total Posts:  30
Joined  01-10-2017
 
 
 
08 October 2017 01:54
 
EN - 03 October 2017 04:36 PM

The problem with descriptions of “no thought” consciousness is that if there was no thought, how are you capable of describing what you experienced?  Obviously there was memory going on, and obviously there was some registry of what was occurring - otherwise it would have been impossible to describe the experience.  Maybe there was no reason taking place, but clearly the brain was at work recording the event and forming a narrative about it, at least at some point.

I obviously memorised the episode, yes. Whether that in itself should be characterised as ‘thinking’ is debatable. Usually, we talk of thinking as a mental activity performed when we are awake. When asleep, we are supposed to dream, at best. Here, this was neither being awake nor being asleep, and I don’t think vocabulary and usage could possibly be settled when talking about that sort of mental state.

As to forming a narrative, no, not on the spot. I was able to form a narrative after the event, once I had come round and was therefore awake, and strictly on the basis of what I could recall.
EB

 
socratus
Total Posts:  203
Joined  28-05-2015
 
 
 
09 October 2017 01:59
 

  The human brain works on two levels:
a) usually, the conscious (logical) system, and
b) rarely, the unconscious system, whose results later take on logical form.
*
In his last autobiographical article, Einstein wrote:
” . . . the discovery is not the matter of logical thought,
even if the final product is connected with the logical form”

In his book The Holographic Universe (p. 160), Michael Talbot
explains the situation this way:
‘ Contrary to what everyone knows it is so, it may not be
the brain that produce consciousness, but rather consciousness
that creates the appearance of the brain ’
*
Evan Walker wrote:
“... indeed an understanding of psi phenomena and of
consciousness must provide the basis of an improved
understanding of quantum mechanics.”
===================================

 
 
Tahiti67
Total Posts:  66
Joined  13-09-2017
 
 
 
09 October 2017 10:13
 
socratus - 09 October 2017 01:59 AM

  The human brain works on two levels:
a) usually, the conscious (logical) system, and
b) rarely, the unconscious system, whose results later take on logical form.
*
In his last autobiographical article, Einstein wrote:
” . . . the discovery is not the matter of logical thought,
even if the final product is connected with the logical form”

In his book The Holographic Universe (p. 160), Michael Talbot
explains the situation this way:
‘ Contrary to what everyone knows it is so, it may not be
the brain that produce consciousness, but rather consciousness
that creates the appearance of the brain ’

*
Evan Walker wrote:
“... indeed an understanding of psi phenomena and of
consciousness must provide the basis of an improved
understanding of quantum mechanics.”
===================================

That’s interesting. Does he explain it further?

 
Antisocialdarwinist
Total Posts:  6330
Joined  08-12-2006
 
 
 
09 October 2017 21:53
 
Giulio - 07 October 2017 01:41 AM
Antisocialdarwinist - 06 October 2017 08:41 PM

That’s an interesting example. My first thought is that consciousness as I’m defining it includes not just the model of reality, but the model of “self” that inhabits or observes it. It’s the model of “self” that brings subjectivity into the model, which is both a blessing and a curse. A curse because it renders us incapable of “seeing” reality objectively; a blessing because it allows us to imagine a version of reality that is analogous—but not identical—to reality itself.

To me, this text is just saying that consciousness involves consciousness of an external world together with consciousness of self, and that consciousness involves subjectivity. I am struggling to get more out of this than that. What am I missing?

We can imagine more than what “is.” We can imagine what’s possible.

If I remove the word "imagine", or at least any form of consciousness that is necessarily implied by that term, I'd respond that machine learning algorithms can do this, e.g. algorithms that perform inference and learning in Bayesian networks.

It does seem you are begging the question here. Would you agree? I.e., to define consciousness, you keep coming back to terms that are themselves synonymous with consciousness, or conscious experience, like 'subjectivity' or 'imagine'.

I have warmed to your 'definition' of consciousness as a process that constructs a model of (a) an external reality, (b) ourselves, (c) ourselves in relation to that external reality, and (d) ourselves as observers of this model.

My claim is that modern-day algorithms could be said to achieve (a) to (c), but I am trying to get my head around (d), i.e. to define it in a way that isn't tantamount to defining it in terms of consciousness.

Based on my understanding of the poker playing machine you described, there are two models. The first is “models of its opponents based on their current and historical actions.” That sounds to me like an objective record of past events with no room for subjective speculation on the machine’s part. When Player A is faced with scenario X, he most often does Z. The proper response by the machine to Z is z.

Not exactly. Well, I don't know what you mean by the word "subjective". Again, are you begging the question? The hypothetical algorithm is conjecturing stylised models of the strategies the other players are using, updating their structure and probabilities based on how they play, perhaps performing experiments to tease out aspects of their playing styles (so the algorithm will be trading off exploiting the current hand against exploring the behaviours of the others).

The second model involves “estimating how its opponents are seeing itself as player,” or, “a model of how other agents are creating a model of itself.” I’m having a hard time getting my head around that, but it almost sounds like the machine has a model of self? I’m not quite sure that’s what you mean, but if so, then it might qualify as consciousness.

In the same way the algorithm is forming and updating stylised models of other players, it assumes the other players are doing the same with itself.

So it is modelling how other players will see it and model it. This requires it to maintain a (stylised) model of itself from the perspective of others, from which it can make inferences like: given what they have seen so far, they may think I am a conservative player, so now may be a good time to bluff.

So in making decisions, the algorithm will have to take into account how its actions may affect the model of itself other players are maintaining and updating.

So, sure, it lacks subjectivity. But what I am looking for is a definition of the subjective aspect of the model that doesn't just come down to using terms like 'subjective'.

What is the fundamental qualitative difference between your model of self and the model you maintain of how others may see you?  And can you express this without using words that themselves entail consciousness?

If you don’t mean it has a model of self, then it seems to me that this second model would be “baked into” the first model and therefore superfluous. “A model of how other agents are creating a model of itself” sounds like a model of how other agents arrive at their strategy, without adding any more information about the strategy itself than is already known from the first model. If the player never folds when he has four of a kind, that’s all that matters. What difference does it make why he never folds on four of a kind?

The second model can be seen as an augmentation or enrichment of the first one. Like a Russian doll. Why? So it can make better predictions and so it can influence (mislead) other players to its benefit.

 

Remember, consciousness according to my definition is a process, not a thing. That said, I’m still struggling with how the model of self fits in exactly. Is the model of self necessary for constructing the model? Or only for observing it? In other words, is the model of reality objective, and subjectivity introduced by the observing model of self? Or is the model of reality itself subjective? I don’t think it’s really all that important one way or the other—unless you start comparing it with machine learning. (And maybe not even then.)

Anyway, the difference I see between consciousness and the kinds of machine learning algorithms you’ve mentioned is subjectivity. Machines aren’t biased, unless they’re programmed to have a bias. They can’t develop a bias on their own. For that they’d need a model of self, which implies self-awareness.

I’m not sure that “imagine” is necessarily synonymous with consciousness. Imagination is a conscious act, like recall. But doesn’t it imply that the imaginer is deliberately adding things to the model of reality that are known to be different from reality? Imagine yourself in bed with Taylor Swift, for example. Now imagine yourself in your own bed with your wife. The former is imagination, the latter recall. Both require consciousness, but neither is synonymous with it.

I don’t think subjectivity is synonymous with consciousness, either. It describes consciousness in the same way “round” describes an apple, but you wouldn’t say that “round” is synonymous with “apple,” would you? “Well,” you say, “all consciousness is subjective, so doesn’t that make them synonymous?” All apples are round, but not all round things are apples. Are there other things (or processes) besides consciousness which are subjective?

What I mean by “no room for subjective speculation” goes back to the difference between recall and imagination. A machine is capable of recall, but not imagination. Your poker-playing machine could never imagine being in bed with Taylor Swift. It could, however, recall exactly how you played every single hand of poker you ever played against it. And thereby predict how you’re going to play this hand.

The model of self and the model (my model, I assume) of how others may see me are both illusions. They’re different illusions, but they’re both illusions. Does that make them fundamentally, qualitatively different? In the context of this discussion, probably not.

I’m at a loss as to what information “the second model” provides that’s not already included in the first. I think you’d have to provide more information on how the machine creates its “model of how other agents [me, in this case] are creating a model of itself.” Is it restricted to our poker games? That’s what I was assuming. If that’s the case, then its model of me is based 100% on our past poker engagements. Whatever information it could glean from the model could be gleaned directly, by recalling our past poker engagements. Couldn’t it?

On the other hand, if this poker playing machine is accompanying me on other activities outside of poker that might provide additional information about my personality, and which might help predict how I play poker, then that might be a different story. If I become more gullible when I get drunk, for example, the machine might want to take that into account when we’re playing poker. That might be something that could be incorporated into a model of my model of it and provide additional information not available from our past poker engagements.

 
 
Shaikh Raisuddin
Total Posts:  81
Joined  20-03-2015
 
 
 
15 October 2017 07:49
 

Nav85,

The FIRST QUESTION is, "What is information?"

 
Giulio
Total Posts:  271
Joined  26-10-2016
 
 
 
17 October 2017 12:42
 
Shaikh Raisuddin - 15 October 2017 07:49 AM

Nav85,

The FIRST QUESTION is, "What is information?"

And what are examples of information processing in the natural world other than processes related to human or animal consciousness?

 
Speakpigeon
Total Posts:  30
Joined  01-10-2017
 
 
 
18 October 2017 09:44
 
Giulio - 17 October 2017 12:42 PM
Shaikh Raisuddin - 15 October 2017 07:49 AM

Nav85,

The FIRST QUESTION is, "What is information?"

And what are examples of information processing in the natural world other than processes related to human or animal consciousness?

The genetic code. Living cells are able to process DNA (and probably other structures within the cell, I’m no specialist) to store, retrieve, and use information relevant to life.
Many people, including scientists, talk of the genetic code as information. They may think of it mostly as a metaphor, but it must be possible to assess the quantity of information in the genetic code, and how much information you can store in a piece of DNA, or indeed in any physical structure, such as a molecule.
The difficulty in the case of the genetic code, I think, is to find out exactly which part of the DNA structure is used to store it. I think scientists themselves don't yet know exactly where in the DNA (or elsewhere) the complete genetic code of an organism is stored, if there is such a thing.
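The back-of-the-envelope quantity is at least easy to bound, under standard assumptions: each base position takes one of four values (A, C, G, T), so its maximal capacity is log2(4) = 2 bits. This is an upper bound on storage capacity, not a measure of the biologically meaningful information content.

```python
import math

# Maximal Shannon capacity of a DNA sequence, assuming each of the
# four bases (A, C, G, T) is equally likely and positions are
# independent -- an upper bound, not the real information content.
BITS_PER_BASE = math.log2(4)  # = 2.0

def max_capacity_bits(n_bases):
    return n_bases * BITS_PER_BASE

# E.g. the ~3.2 billion base pairs of a human genome give an upper
# bound of ~6.4e9 bits, i.e. roughly 800 megabytes.
human_genome_bits = max_capacity_bits(3.2e9)
```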
EB

 
Giulio
Total Posts:  271
Joined  26-10-2016
 
 
 
18 October 2017 13:03
 
Speakpigeon - 18 October 2017 09:44 AM
Giulio - 17 October 2017 12:42 PM
Shaikh Raisuddin - 15 October 2017 07:49 AM

Nav85,

The FIRST QUESTION is, "What is information?"

And what are examples of information processing in the natural world other than processes related to human or animal consciousness?

The genetic code. Living cells are able to process DNA (and probably other structures within the cell, I’m no specialist) to store, retrieve, and use information relevant to life.
Many people, including scientists, talk of the genetic code as information. They may think of it mostly as a metaphor, but it must be possible to assess the quantity of information in the genetic code, and how much information you can store in a piece of DNA, or indeed in any physical structure, such as a molecule.
The difficulty in the case of the genetic code, I think, is to find out exactly which part of the DNA structure is used to store it. I think scientists themselves don't yet know exactly where in the DNA (or elsewhere) the complete genetic code of an organism is stored, if there is such a thing.
EB

Have you read Dennis Bray's book Wetware? I recommend it. It goes into the inner workings of single-celled organisms to describe how they process information so as to respond to their environment in an almost intelligent way. It is clear that more than the sequences in RNA are important: physical properties, e.g. folding structures, are critical to binding processes, and of course the whole thing is stochastic, relying on the average behaviour of millions of interacting molecules. Very different from a Turing machine, where data (a tape with symbols) and code (rules for manipulating the symbols on the tape) are separated; in the physical world (wetware), even though data is stored in DNA, the 'data' and 'code' that get operated on, that store the 'memory' of an environment and trigger responses, seem to be entwined in processes and interactions.

Anyway, when thinking about these processes, or DNA specifically, what insight do they give us about what information is? What other concepts does information presuppose? Information, I assume, needs to be information about something. What is the information in DNA about? Does information presuppose a subject and an object or environment (X has information about Y)? Can you have information without a corresponding notion of action (X has information about Y that enables it to do Z)?

 
Speakpigeon
Total Posts:  30
Joined  01-10-2017
 
 
 
20 October 2017 01:21
 
Giulio - 18 October 2017 01:03 PM
Speakpigeon - 18 October 2017 09:44 AM
Giulio - 17 October 2017 12:42 PM
Shaikh Raisuddin - 15 October 2017 07:49 AM

Nav85,

The FIRST QUESTION is, "What is information?"

And what are examples of information processing in the natural world other than processes related to human or animal consciousness?

The genetic code. Living cells are able to process DNA (and probably other structures within the cell, I’m no specialist) to store, retrieve, and use information relevant to life.
Many people, including scientists, talk of the genetic code as information. They may think of it mostly as a metaphor, but it must be possible to assess the quantity of information in the genetic code, and how much information you can store in a piece of DNA, or indeed in any physical structure, such as a molecule.
The difficulty in the case of the genetic code, I think, is to find out exactly which part of the DNA structure is used to store it. I think scientists themselves don't yet know exactly where in the DNA (or elsewhere) the complete genetic code of an organism is stored, if there is such a thing.
EB

Have you read Dennis Bray's book Wetware? I recommend it. It goes into the inner workings of single-celled organisms to describe how they process information so as to respond to their environment in an almost intelligent way. It is clear that more than the sequences in RNA are important: physical properties, e.g. folding structures, are critical to binding processes, and of course the whole thing is stochastic, relying on the average behaviour of millions of interacting molecules. Very different from a Turing machine, where data (a tape with symbols) and code (rules for manipulating the symbols on the tape) are separated; in the physical world (wetware), even though data is stored in DNA, the 'data' and 'code' that get operated on, that store the 'memory' of an environment and trigger responses, seem to be entwined in processes and interactions.

Anyway, when thinking about these processes, or DNA specifically, what insight do they give us about what information is? What other concepts does information presuppose? Information, I assume, needs to be information about something. What is the information in DNA about? Does information presuppose a subject and an object or environment (X has information about Y)? Can you have information without a corresponding notion of action (X has information about Y that enables it to do Z)?

My guess is that the scientific notion of information is a drastic reduction from the ordinary notion most people would have in mind.
People think of information as 'content', essentially because they are considering it from their own perspective, forgetting that they are actively decoding the information and giving it its content.
I think the scientific view reduces information to an abstract structure with a storage capability and a storage capacity. A structure that has two states can store one bit of information; three bits require eight possible states, and so on. There is no a priori content; it's a capability. Content is then 'created' through interaction with the structure. The cell interacts with its DNA, and computers interact with their hard drives. People interact with their brains to get memories.
So, in effect, content is given just as much by the information store itself as by the thing in the environment of the store that somehow interprets the information. The same structure could contain different contents for different interpretative systems.
The same book is differently understood by different people. Same amount of information inside the book but different contents depending on who is doing the interpretation. And, information without content doesn’t look like information to us. I think it’s a case of scientists reshaping an ordinary notion into a new one and creating in the process some confusion among us ordinary folks.
And, consequently, the scientific notion of information is reduced to a quantity, i.e. storage capacity. There is the same amount of information in a hard drive overwritten with only zeroes as in one containing the whole history of mankind.
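That capacity view can be stated numerically (a small illustrative sketch, with invented example values): n bits distinguish 2^n states, so a structure's capacity is log2 of its number of distinguishable states, whatever content those states happen to hold.

```python
import math

# Storage capacity as pure state-counting: a structure with k
# distinguishable states stores log2(k) bits, regardless of "content".
def capacity_bits(num_states):
    return math.log2(num_states)

assert capacity_bits(2) == 1.0   # two states: one bit
assert capacity_bits(8) == 3.0   # eight states: three bits

# Same capacity, different "content": an all-zero buffer and one
# holding meaningful text both occupy 8 bits per byte of capacity.
empty = bytes(4)   # b"\x00\x00\x00\x00"
full = b"1789"
assert len(empty) * 8 == len(full) * 8 == 32
```

The point the paragraph makes is exactly this: the quantity measures what the structure could distinguish, not what it means to any interpreter.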
EB

[ Edited: 20 October 2017 01:24 by Speakpigeon]
 