
What could be wrong with a strictly information processing view of consciousness?

 
Nav85
Total Posts:  5
Joined  12-04-2017
12 April 2017 09:34
 

I used to think that there is a hard problem of consciousness, because we seem unable to explain what, for example, “hearing a thought in our minds” is. But I suddenly came to the realization that this is just a pseudo-problem. Am I missing something here?
Sam says in his most recent interview with Dawkins that he is agnostic about whether consciousness emerges from a strictly information-processing system. This is pretty much the only thing I really don’t get from all the things he says. Why not? What else needs to be there?

You hear this thought in your head: “Today is a beautiful day”. You think “I just heard this thought in my head”. In fact, you didn’t. The thought was simply there; nothing “heard” it. The second thought, “I just heard this thought in my head”, is simply another thought. The color green is simply the color green. It is how reality is perceived by brains, just like how a monitor produces images from complex processes involving electricity, transistors, etc.

In fact, this kind of implies that consciousness as a phenomenon does not exist at all. A thought appears. A thought does not appear “in something”.

I am trying to be skeptical about my reasoning here, but I feel that I am either right or I am missing something. I hope you guys can help!

[ Edited: 12 April 2017 09:38 by Nav85]
 
EN
Total Posts:  20540
Joined  11-03-2007
12 April 2017 15:23
 

So do you think computers have consciousness/self-awareness?  They process information.  It’s the fact that we register the thought as a thought - the reflexive nature of the phenomenon - that creates the hard problem. Why are our brain processes even registered as anything?  Why aren’t we just computer-zombies?

 
Nav85
Total Posts:  5
Joined  12-04-2017
12 April 2017 15:48
 
EN - 12 April 2017 03:23 PM

So do you think computers have consciousness/self-awareness?  They process information.  It’s the fact that we register the thought as a thought - the reflexive nature of the phenomenon - that creates the hard problem. Why are our brain processes even registered as anything?  Why aren’t we just computer-zombies?

Well, I in fact DO think that we are computer zombies: highly sophisticated biological robots.

Imagine this:
You write a program on your laptop that shows “You just pressed X! OUCH! STOP!” on your screen whenever you press button X, and shows “You just pressed Y! FEELS GOOD!” whenever you press button Y. You can call this program “Awareness v1.0”. Then you decide to add some extra features, connect your laptop to an AI system that can produce more human-sounding sentences whenever you press different buttons, etc. Then you install the program on a robot and plug in a new AI system called “Emotion Processor”. And you connect all sorts of sensors. Etc. And then eventually you have created Dolores from Westworld.
Where we draw the line is kind of arbitrary. Does a mosquito have awareness?
Of course, I am not saying Deepak Chopra stuff, like “a single cell has consciousness”, because we attribute consciousness to a more complex form of self-reflection than a cell can produce. This has ethical implications, e.g. we avoid situations where we suffer and don’t want others to suffer. But it’s all just software running and expressing itself through our words, postures, emotions, etc. “Awareness” is just a label for a complex biological computing system, which could possibly be fully recreated in a robot.
I think (and I absolutely want to hear views that challenge this notion) that this already covers the whole process and is therefore a much more elegant explanation than anything that says “there is possibly more to awareness than information processing”.
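For what it’s worth, that “Awareness v1.0” step can literally be written down. Here is a minimal sketch in Python (the names are made up for illustration), which makes it obvious that nothing in it “feels” anything:

```python
# Toy sketch of "Awareness v1.0": a lookup from inputs to verbal reactions.
# Nothing here "hears" or "feels" anything; it only maps stimuli to strings.
REACTIONS = {
    "X": "You just pressed X! OUCH! STOP!",
    "Y": "You just pressed Y! FEELS GOOD!",
}

def awareness_v1(key):
    """Return the programmed 'reaction' to a key press."""
    return REACTIONS.get(key, f"You just pressed {key}. No reaction programmed.")

print(awareness_v1("X"))  # You just pressed X! OUCH! STOP!
print(awareness_v1("Y"))  # You just pressed Y! FEELS GOOD!
```

The question the thread is debating is whether piling sensors, memory, and an “Emotion Processor” on top of this ever becomes more than a bigger lookup table.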

[ Edited: 12 April 2017 15:52 by Nav85]
 
diding
Total Posts:  262
Joined  07-01-2016
12 April 2017 17:08
 
Nav85 - 12 April 2017 03:48 PM
EN - 12 April 2017 03:23 PM

So do you think computers have consciousness/self-awareness?  They process information.  It’s the fact that we register the thought as a thought - the reflexive nature of the phenomenon - that creates the hard problem. Why are our brain processes even registered as anything?  Why aren’t we just computer-zombies?

Well, I in fact DO think that we are computer zombies: highly sophisticated biological robots.

Imagine this:
You write a program on your laptop that shows “You just pressed X! OUCH! STOP!” on your screen whenever you press button X, and shows “You just pressed Y! FEELS GOOD!” whenever you press button Y. You can call this program “Awareness v1.0”. Then you decide to add some extra features, connect your laptop to an AI system that can produce more human-sounding sentences whenever you press different buttons, etc. Then you install the program on a robot and plug in a new AI system called “Emotion Processor”. And you connect all sorts of sensors. Etc. And then eventually you have created Dolores from Westworld.
Where we draw the line is kind of arbitrary. Does a mosquito have awareness?
Of course, I am not saying Deepak Chopra stuff, like “a single cell has consciousness”, because we attribute consciousness to a more complex form of self-reflection than a cell can produce. This has ethical implications, e.g. we avoid situations where we suffer and don’t want others to suffer. But it’s all just software running and expressing itself through our words, postures, emotions, etc. “Awareness” is just a label for a complex biological computing system, which could possibly be fully recreated in a robot.
I think (and I absolutely want to hear views that challenge this notion) that this already covers the whole process and is therefore a much more elegant explanation than anything that says “there is possibly more to awareness than information processing”.

Seems to me that the medium in which the program is running must have something to do with the way that it runs.  Evolutionarily, we have probably developed different concerns than a computer might, by virtue of us being made of meat.  Many of them have to do with staying alive, avoiding damage to our tissue, and replicating.  Maybe the way that we think, that constant stream of nonsense that flows through our heads, and our dreams are evolutionary traits.  How could they not be?  A “consciousness” that is based on silicon would have very different concerns and would evolve differently, even if initially it used us as a template.  Why would its mind wander unless you instructed it to “Find out everything about everything”?  It may even need instruction on where to start.  Then it would make perfect sense why it explored a certain bit of information, and it would leave a map of why one “thought” led to another.  I think its lack of mortality would affect how it thinks and probably how it would view itself.  I don’t know why an AI would care whether it were or weren’t.  Why would it care if you shut it off, especially if its memory were stored somewhere?

 
Brick Bungalow
Total Posts:  4845
Joined  28-05-2009
12 April 2017 17:27
 

In the broad view, nothing, since no competing theory (to my knowledge) has an adequate explanation of subjectivity. Remember when Watson was on Jeopardy and correctly answered a question about itself? It might be as simple as that.

 
Nav85
Total Posts:  5
Joined  12-04-2017
13 April 2017 00:08
 

Both points taken (self-awareness as a product of evolutionary survival, and self-awareness as simple as referring to the subject matter).

I have a feeling that what makes consciousness feel weird is the fact that we think we cannot describe the sensation of fundamental phenomena. By that I mean things like “green” or “pain”, the so-called “qualia”. Everyone knows WHAT they refer to, but they cannot be simplified any further than by using those words. But the presence of that “WHAT” (that very typical sensation of pain, that very typical greenishness of green) is what makes us think “would a robot also experience pain and see green?”.
But again, this is just software processing information. In theory, someone could be calling something “red” that another person calls “green”; some people apparently perceive even more depth of color, and some people are pain-tolerant, etc.

Again, it surprises me that Sam Harris keeps open the option that there is more to consciousness than information processing. I wish he could come and explain that here :)

 
diding
Total Posts:  262
Joined  07-01-2016
13 April 2017 05:09
 
Nav85 - 13 April 2017 12:08 AM

Both points taken (self-awareness as a product of evolutionary survival, and self-awareness as simple as referring to the subject matter).

I have a feeling that what makes consciousness feel weird is the fact that we think we cannot describe the sensation of fundamental phenomena. By that I mean things like “green” or “pain”, the so-called “qualia”. Everyone knows WHAT they refer to, but they cannot be simplified any further than by using those words. But the presence of that “WHAT” (that very typical sensation of pain, that very typical greenishness of green) is what makes us think “would a robot also experience pain and see green?”.
But again, this is just software processing information. In theory, someone could be calling something “red” that another person calls “green”; some people apparently perceive even more depth of color, and some people are pain-tolerant, etc.

Again, it surprises me that Sam Harris keeps open the option that there is more to consciousness than information processing. I wish he could come and explain that here :)

I don’t know why the notion that there’s more to consciousness than information processing persists either.  The definition of conscious is “aware of and responding to one’s surroundings”.  Applying that to a mosquito is easy.  Same with a robot that builds cars.  It seems to me that there might be an element of sentimentality in the desire to attribute consciousness to only certain kinds of “awareness and response to one’s surroundings”, mostly because it’s been used as a standard for how to treat beings.  I think it can still be used that way.

 

 

 
Giulio
Total Posts:  271
Joined  26-10-2016
14 April 2017 13:24
 
Nav85 - 13 April 2017 12:08 AM

Both points taken (self-awareness as a product of evolutionary survival, and self-awareness as simple as referring to the subject matter).

I have a feeling that what makes consciousness feel weird is the fact that we think we cannot describe the sensation of fundamental phenomena. By that I mean things like “green” or “pain”, the so-called “qualia”. Everyone knows WHAT they refer to, but they cannot be simplified any further than by using those words. But the presence of that “WHAT” (that very typical sensation of pain, that very typical greenishness of green) is what makes us think “would a robot also experience pain and see green?”.
But again, this is just software processing information. In theory, someone could be calling something “red” that another person calls “green”; some people apparently perceive even more depth of color, and some people are pain-tolerant, etc.

Again, it surprises me that Sam Harris keeps open the option that there is more to consciousness than information processing. I wish he could come and explain that here :)

I tend to agree with you that the tricky bit is around the sensation of fundamental phenomena, which many argue evolved long before humans or even primates. (Discussions of consciousness often conflate, or at least bundle, this with specifically human forms of reflexive self-awareness, which must in my opinion have evolved in the context of our ancestors being social animals: crudely, our ability to thoughtfully communicate with ourselves probably arose out of a less self-aware ability to communicate with others.)

Information processing may always be involved in conscious experience, but I am not sure information processing is all there is to consciousness.

As animals we act or interact (internal actions within our body, or with the physical world, with our social worlds, with an increasing number of digital environments), and we process information. (You may say information processing is a specific type of action of the brain on itself, and the others are types of interactions of the brain with other things; if so, what is the physical boundary of the brain? Is the brain stem part of the brain? The nervous system?) Acting or interacting involves a conscious experience, and I don’t know whether an adequate theory of consciousness can leave it out, nor whether information processing is a rich enough concept to fully describe actions and interactions.

What do you mean by ‘information’ in this context?

And if consciousness is just an activity of processing it, do you agree we should be able to define, for a specific conscious activity, the physical domain in which it happens, the boundary of that domain, and what happens on that boundary?

[This all comes down, of course, to what one actually means by information processing. For example, Dawkins and others have, I believe, said that all of life (and therefore, by implication, the phenomenon of consciousness) can be described in terms of information processing; but in that case the concept won’t be useful for specifically describing consciousness as distinct from all the other processes associated with life or animated agents.]

 

[ Edited: 14 April 2017 15:27 by Giulio]
 
Jb8989
Total Posts:  6055
Joined  31-01-2012
14 April 2017 15:31
 

We’re pretty nostalgic about our brains. A strictly information-processing model of consciousness bypasses a lot of explanatory problems. A big problem with figuring out consciousness is that we don’t yet have any way to make ourselves available to the conditions that give rise to the gap between subconsciousness, phenomenal experience, and physical descriptions. If the explanation is straight information processing, then that’s the same as saying consciousness is a gross accumulation of environmental stimuli (incidentally also an argument against free will, unless conscious awareness somehow removes itself from being strictly qualia-induced?). It also rejects a lot of other theories that rely on mechanism-specific explanations and realizations. Which, whatever, except that once a particular perception is regulated by memory or attention, I imagine the term “information” in this model must also involve emotion, intention, etc. Then the question becomes: what are we consciously aware of, and when?

[ Edited: 14 April 2017 15:41 by Jb8989]
 
 
Antisocialdarwinist
Total Posts:  6330
Joined  08-12-2006
15 April 2017 08:05
 
Nav85 - 12 April 2017 09:34 AM

I used to think that there is a hard problem of consciousness, because we seem unable to explain what, for example, “hearing a thought in our minds” is. But I suddenly came to the realization that this is just a pseudo-problem. Am I missing something here?
Sam says in his most recent interview with Dawkins that he is agnostic about whether consciousness emerges from a strictly information-processing system. This is pretty much the only thing I really don’t get from all the things he says. Why not? What else needs to be there?

You hear this thought in your head: “Today is a beautiful day”. You think “I just heard this thought in my head”. In fact, you didn’t. The thought was simply there; nothing “heard” it. The second thought, “I just heard this thought in my head”, is simply another thought. The color green is simply the color green. It is how reality is perceived by brains, just like how a monitor produces images from complex processes involving electricity, transistors, etc.

In fact, this kind of implies that consciousness as a phenomenon does not exist at all. A thought appears. A thought does not appear “in something”.

I am trying to be skeptical about my reasoning here, but I feel that I am either right or I am missing something. I hope you guys can help!

It all depends on what you’re calling “consciousness” and “information.” I’ve come to think that consciousness describes the process by which we access our stored memories and construct a model of reality out of them. The thing that makes this possible is the imaginary, first-person perspective we call the self. It’s a useful illusion that allows us to recall and manipulate the things we remember. So as long as illusions fall under your definition of “information,” then I agree: there’s nothing wrong with a strictly information processing view of consciousness.

 
 
Jb8989
Total Posts:  6055
Joined  31-01-2012
15 April 2017 12:14
 
Antisocialdarwinist - 15 April 2017 08:05 AM
Nav85 - 12 April 2017 09:34 AM

I used to think that there is a hard problem of consciousness, because we seem unable to explain what, for example, “hearing a thought in our minds” is. But I suddenly came to the realization that this is just a pseudo-problem. Am I missing something here?
Sam says in his most recent interview with Dawkins that he is agnostic about whether consciousness emerges from a strictly information-processing system. This is pretty much the only thing I really don’t get from all the things he says. Why not? What else needs to be there?

You hear this thought in your head: “Today is a beautiful day”. You think “I just heard this thought in my head”. In fact, you didn’t. The thought was simply there; nothing “heard” it. The second thought, “I just heard this thought in my head”, is simply another thought. The color green is simply the color green. It is how reality is perceived by brains, just like how a monitor produces images from complex processes involving electricity, transistors, etc.

In fact, this kind of implies that consciousness as a phenomenon does not exist at all. A thought appears. A thought does not appear “in something”.

I am trying to be skeptical about my reasoning here, but I feel that I am either right or I am missing something. I hope you guys can help!

It all depends on what you’re calling “consciousness” and “information.” I’ve come to think that consciousness describes the process by which we access our stored memories and construct a model of reality out of them. The thing that makes this possible is the imaginary, first-person perspective we call the self. It’s a useful illusion that allows us to recall and manipulate the things we remember. So as long as illusions fall under your definition of “information,” then I agree: there’s nothing wrong with a strictly information processing view of consciousness.

Try this:

How are we conscious?
What are we conscious of?
When do we become conscious?
Where in our body can it be found?

Are any of these questions the same thing?

 
 
Giulio
Total Posts:  271
Joined  26-10-2016
15 April 2017 13:25
 
Antisocialdarwinist - 15 April 2017 08:05 AM

I’ve come to think that consciousness describes the process by which we access our stored memories and construct a model of reality out of them.

Some sub-conscious activities must also on occasion do this. So what specifically does consciousness do? Maybe this comes down to what you mean by ‘model’. (I think we’ve had this discussion on the word ‘model’ before, but I am not sure we arrived anywhere.)

 
Antisocialdarwinist
Total Posts:  6330
Joined  08-12-2006
15 April 2017 20:54
 
Giulio - 15 April 2017 01:25 PM
Antisocialdarwinist - 15 April 2017 08:05 AM

I’ve come to think that consciousness describes the process by which we access our stored memories and construct a model of reality out of them.

Some sub-conscious activities must also on occasion do this. So what specifically does consciousness do? Maybe this comes down to what you mean by ‘model’. (I think we’ve had this discussion on the word ‘model’ before, but I am not sure we arrived anywhere.)

If you’re aware of it, it’s part of the model. Can you give an example of a subconscious activity that relies on the process of creating the model? To my mind, the set of all activities which are subconscious is mutually exclusive (by definition) with the set of all activities which are conscious. Awareness is the “litmus test” that indicates which category a specific activity falls into.

For example, the difference between recall (a conscious activity) and recognition (subconscious). Try to recall all the things on your desk without looking at it. You’ll spend a few moments to “see” or “pull up” an image of your desk in your mind’s eye. Or maybe you’ll recall the times you used your stapler, your phone, etc. Either way (or maybe some other way), you’ll probably only recall about half the things on your desk (if it’s as messy as mine). Then look at your desk. You’ll instantly recognize all the things you failed to recall. Recognizing them happens without any “conscious effort” on your part. You just recognize them without even trying. You’re unaware of the process by which they are recognized because it happens automatically, “beneath” consciousness.

You could justifiably ask what I mean by “access.” It’s vague because I won’t pretend to know exactly how it works. The process of recognition must “access” stored memories, but it does so without incorporating them into a model of reality.
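As a loose analogy only (toy code, not a model of actual memory), recognition behaves like a one-step membership test, while recall behaves like an effortful reconstruction from cues that usually comes back incomplete:

```python
# Loose analogy: "memory" as a stored set of items on the desk.
desk_memory = {"stapler", "phone", "keyboard", "mug", "notebook", "pens"}

def recognize(item):
    """One-step and automatic: is this item familiar when you see it?"""
    return item in desk_memory

def recall(cues):
    """Effortful reconstruction from cues; typically returns only part of the set."""
    return {item for item in desk_memory if any(cue in item for cue in cues)}

print(recall(["phone", "note"]))  # partial reconstruction: phone and notebook only
print(recognize("stapler"))       # True, instantly, with no reconstruction step
```

The point of the analogy is just the asymmetry: the membership test does its work in one opaque step, while the reconstruction proceeds cue by cue.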

Image Attachments
 
my_desk.jpg
 
 
 
Giulio
Total Posts:  271
Joined  26-10-2016
16 April 2017 02:37
 
Antisocialdarwinist - 15 April 2017 08:54 PM
Giulio - 15 April 2017 01:25 PM
Antisocialdarwinist - 15 April 2017 08:05 AM

I’ve come to think that consciousness describes the process by which we access our stored memories and construct a model of reality out of them.

Some sub-conscious activities must also on occasion do this. So what specifically does consciousness do? Maybe this comes down to what you mean by ‘model’. (I think we’ve had this discussion on the word ‘model’ before, but I am not sure we arrived anywhere.)

If you’re aware of it, it’s part of the model. Can you give an example of a subconscious activity that relies on the process of creating the model? To my mind, the set of all activities which are subconscious is mutually exclusive (by definition) with the set of all activities which are conscious. Awareness is the “litmus test” that indicates which category a specific activity falls into.

For example, the difference between recall (a conscious activity) and recognition (subconscious). Try to recall all the things on your desk without looking at it. You’ll spend a few moments to “see” or “pull up” an image of your desk in your mind’s eye. Or maybe you’ll recall the times you used your stapler, your phone, etc. Either way (or maybe some other way), you’ll probably only recall about half the things on your desk (if it’s as messy as mine). Then look at your desk. You’ll instantly recognize all the things you failed to recall. Recognizing them happens without any “conscious effort” on your part. You just recognize them without even trying. You’re unaware of the process by which they are recognized because it happens automatically, “beneath” consciousness.

You could justifiably ask what I mean by “access.” It’s vague because I won’t pretend to know exactly how it works. The process of recognition must “access” stored memories, but it does so without incorporating them into a model of reality.


There’s so much to unpack here. Even putting aside the desk! (Is it actually yours?)

First, in terms of awareness: are Grey Walter’s tortoises aware of their surroundings? In a sense, yes, they must be. You obviously mean self-awareness, which is kind of another word for a particular form of consciousness… so I am not sure how using this word helps in understanding consciousness.

I agree your distinction between recall and recognition appears relevant; certainly they are different things. Recall is active and effortful, while recognition is passive. The act of recall appears to rely on conscious processes in its very making in a way that recognition doesn’t (even though consciousness, it appears, is a consumer of the output of recognition). But is that really the case? Maybe recall just involves a series of reconstructive steps, and we become aware of each of these steps along the way (the work is really done subconsciously in each step), while in recognition all the work is done subconsciously in one step. Is consciousness really helping in recall, or is recall just a more laborious process (for the unconscious mind) with more steps, which for some reason we are aware of? It isn’t clear to me at all that consciousness is really involved in any useful information processing; rather, it is just an experience of certain processes, and why such experiences should be had is at the moment anyone’s guess.

Regarding the use of the word “model”: for you this necessarily implies some type of internal awareness, right? For me it doesn’t, perhaps because one aspect of my job involves responsibility for a team that builds and manages algorithmic trading and portfolio risk management models. Consider an algorithm whose job is to build a map of some part of the world: it can fly at 10,000 feet and construct rough outlines, or fly closer to get more detail, or travel on land to map out nooks and crannies, so it is constructing a layered or hierarchical map; and at any time it uses the map it has constructed so far to navigate itself, and as an input to the algorithm that tells it where to inspect next. Would you say this unconscious algorithm has a model?
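To make that concrete, here is a toy version of such a mapper in Python (hypothetical names, and a flat grid rather than a truly layered map, for brevity). It uses the map built so far to decide where to inspect next, with no awareness anywhere in sight:

```python
# Toy "unconscious" mapper (hypothetical illustration): it incrementally
# builds a map and consults the map built so far to choose where to look
# next. There is no awareness here, only state plus update rules.
class Mapper:
    def __init__(self, world):
        self.world = world        # ground truth: (x, y) -> terrain label
        self.map = {}             # what has been surveyed so far
        self.frontier = [(0, 0)]  # where the current map says to inspect next

    def survey(self, pos):
        """Record detail for one position and extend the frontier from it."""
        x, y = pos
        self.map[pos] = self.world[pos]
        for nxt in [(x + 1, y), (x, y + 1)]:
            if nxt in self.world and nxt not in self.map and nxt not in self.frontier:
                self.frontier.append(nxt)

    def run(self):
        while self.frontier:
            self.survey(self.frontier.pop(0))
        return self.map

world = {(0, 0): "plain", (1, 0): "hill", (0, 1): "lake", (1, 1): "wood"}
mapped = Mapper(world).run()
print(mapped == world)  # the algorithm ends up holding a complete map
```

If this counts as “having a model”, then having a model clearly doesn’t require consciousness; if it doesn’t count, the interesting work is in saying what’s missing.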

 

 
Antisocialdarwinist
Total Posts:  6330
Joined  08-12-2006
16 April 2017 19:18
 
Giulio - 16 April 2017 02:37 AM
Antisocialdarwinist - 15 April 2017 08:54 PM
Giulio - 15 April 2017 01:25 PM
Antisocialdarwinist - 15 April 2017 08:05 AM

I’ve come to think that consciousness describes the process by which we access our stored memories and construct a model of reality out of them.

Some sub-conscious activities must also on occasion do this. So what specifically does consciousness do? Maybe this comes down to what you mean by ‘model’. (I think we’ve had this discussion on the word ‘model’ before, but I am not sure we arrived anywhere.)

If you’re aware of it, it’s part of the model. Can you give an example of a subconscious activity that relies on the process of creating the model? To my mind, the set of all activities which are subconscious is mutually exclusive (by definition) with the set of all activities which are conscious. Awareness is the “litmus test” that indicates which category a specific activity falls into.

For example, the difference between recall (a conscious activity) and recognition (subconscious). Try to recall all the things on your desk without looking at it. You’ll spend a few moments to “see” or “pull up” an image of your desk in your mind’s eye. Or maybe you’ll recall the times you used your stapler, your phone, etc. Either way (or maybe some other way), you’ll probably only recall about half the things on your desk (if it’s as messy as mine). Then look at your desk. You’ll instantly recognize all the things you failed to recall. Recognizing them happens without any “conscious effort” on your part. You just recognize them without even trying. You’re unaware of the process by which they are recognized because it happens automatically, “beneath” consciousness.

You could justifiably ask what I mean by “access.” It’s vague because I won’t pretend to know exactly how it works. The process of recognition must “access” stored memories, but it does so without incorporating them into a model of reality.


There’s so much to unpack here. Even putting aside the desk! (Is it actually yours?)

First, in terms of awareness: are Grey Walter’s tortoises aware of their surroundings? In a sense, yes, they must be. You obviously mean self-awareness, which is kind of another word for a particular form of consciousness… so I am not sure how using this word helps in understanding consciousness.

I agree your distinction between recall and recognition appears relevant; certainly they are different things. Recall is active and effortful, while recognition is passive. The act of recall appears to rely on conscious processes in its very making in a way that recognition doesn’t (even though consciousness, it appears, is a consumer of the output of recognition). But is that really the case? Maybe recall just involves a series of reconstructive steps, and we become aware of each of these steps along the way (the work is really done subconsciously in each step), while in recognition all the work is done subconsciously in one step. Is consciousness really helping in recall, or is recall just a more laborious process (for the unconscious mind) with more steps, which for some reason we are aware of? It isn’t clear to me at all that consciousness is really involved in any useful information processing; rather, it is just an experience of certain processes, and why such experiences should be had is at the moment anyone’s guess.

Regarding the use of the word “model”: for you this necessarily implies some type of internal awareness, right? For me it doesn’t, perhaps because one aspect of my job involves responsibility for a team that builds and manages algorithmic trading and portfolio risk management models. Consider an algorithm whose job is to build a map of some part of the world: it can fly at 10,000 feet and construct rough outlines, or fly closer to get more detail, or travel on land to map out nooks and crannies, so it is constructing a layered or hierarchical map; and at any time it uses the map it has constructed so far to navigate itself, and as an input to the algorithm that tells it where to inspect next. Would you say this unconscious algorithm has a model?

Again, it depends on what you mean by “awareness.” I used to take the position that self-driving cars were aware of their surroundings, for the same reason you imply Walter’s tortoises must be. But the sense of awareness I’ve come to prefer is the awareness we humans experience. Focus your attention on your breathing. Now you’re aware of your breathing. But for most of the day you probably weren’t, unless you found yourself out of breath due to exertion or health issues. I’m sure you can draw a distinction between all the things you experience or do that you’re aware of, and all the things you experience or do that you aren’t aware of. There is no way to tell whether Walter’s tortoises are aware of what they’re experiencing or doing simply by looking at them. But based on the description of their design, I’d say it’s very unlikely they’re aware of what they’re doing in the sense that you’re aware of your own breathing when you choose to focus your attention on it.

I don’t think awareness and self-awareness are the same thing at all. Is your breathing the same thing as your self? Since it’s a prerequisite for consciousness, self-awareness is the thing that makes awareness in general possible.

You mention that the act of recall “involves a series of reconstructive steps…” If I understand you correctly, then I think these reconstructive steps are what I’m calling consciousness: the process of constructing the model, or reconstructing reality in our imagination.

Your last point is interesting. I’m still thinking about it. The difference between the model of reality we construct in our imagination and the model constructed by an algorithm is that the algorithmic model is presumably objective, like a photograph, whereas our models are subjective. Does that mean the algorithm isn’t conscious? Maybe it’s a primitive form of objective consciousness, which I say is impossible in humans. That would bode well for our future with artificial intelligence, wouldn’t it? An objectively conscious AI—one not suffering from the illusion of self—wouldn’t “decide our fate in a microsecond” after perceiving us as a threat to its own existence.

PS Yes, that really is my desk.

 
 
Giulio
 
Avatar
 
 
Giulio
Total Posts:  271
Joined  26-10-2016
 
 
 
16 April 2017 22:52
 
Antisocialdarwinist - 16 April 2017 07:18 PM

Again, it depends on what you mean by “awareness.” I used to take the position that self-driving cars were aware of their surroundings for the same reason you imply Walter’s tortoises must be. But the sense of awareness I’ve come to prefer is the awareness we humans experience. Focus your attention on your breathing. Now you’re aware of your breathing. But for most of the day, you probably weren’t, unless you found yourself out of breath due to exertion or health issues. I’m sure you can draw a distinction between all the things you experience or do that you’re aware of, and all the things you experience or do that you aren’t aware of. There is no way to tell whether Walter’s tortoises are aware of what they’re experiencing or doing simply by looking at them. But based on the description of their design, I’d say it’s very unlikely they’re aware of what they’re doing in the sense that you’re aware of your own breathing when you choose to focus your attention on it.

Agreed.

I don’t think awareness and self-awareness are the same thing at all.

I assume here you are using the word awareness to refer to an agent’s ability to ‘purposefully’ discriminate features and variations in its environment, which could be achieved in a purely mechanical non-conscious agent.

In that case, I agree they are not the same thing, but I am not sure about your qualifier “at all”.

Why couldn’t self-awareness be a composite process or capability involving (A) a process that achieves unconscious awareness (which may be quite sophisticated, involving awareness of multiple other processes internal to itself, or even awareness of processes that are aware of other processes) + (B) a process that gives rise to ‘having an experience’? So (A) may exist without (B): this is what we might imagine as a quite sophisticated AI, one that can even achieve something like a ‘theory of mind’ but does not have any qualitative felt experiences per se. And (B) may well exist already in a variety of life forms, triggered only by direct sensory input (rather than by awareness of other mental processes).

Is your breathing the same thing as your self?

No, breathing is not the same as your self, just as your desk is not the same as your self. I’m not sure I see the relevance of this point, though.

Since it’s a prerequisite for consciousness, self-awareness is the thing that makes awareness in general possible.

This probably is the point where I disagree with you.

Maybe this is just semantics, or maybe it is deeper (i.e. maybe we really have a different view, rather than just choosing to use words in different ways). I can’t tell.

I am guessing consciousness is possible in animals that do not have what I would call self-awareness: this is what I might call primary consciousness, or the ability to have a felt experience (feel pain, feel sick, feel hungry, experience an orgasm, feel the heat of a rock that has been warmed by the sun).

I also suspect that self-awareness in humans may be able to be explained one day as (A) primary consciousness that has been extended to include felt experiences of other mental processes (not just sensory input) combined with (B) sophisticated mental processes that do not in their own right involve consciousness.

In any case, you can see I am trying to unpack the concept of self-awareness in the hope that doing so helps us catch a glimpse of what isn’t strictly necessary for consciousness. I am taking this approach because identifying what consciousness isn’t is the only way I can see of getting at what it actually is.

You mention that the act of recall “involves a series of reconstructive steps…” If I understand you correctly, then I think these reconstructive steps are what I’m calling consciousness: the process of constructing the model, or reconstructing reality in our imagination.

My point here is the following. Look first at what you called recognition. Let’s picture this as:
              UnconsciousActivity (UA) -> Experience of the OutputOfUnconsciousActivity (EOUA)

The conscious experience, the EOUA, doesn’t seem to add anything to the process; it is just an experience at the end. The work was done unconsciously.

Now to recall. Is it just a composite process, a sequence of steps of the above type:
    (UA -> EOUA) -> (UA -> EOUA) -> ... -> (UA -> EOUA)

For example, recalling what’s on your desk might correspond to:

    (UA -> "my laptop") -> (UA -> "and my laptop has a mouse") -> ... -> (UA -> "and I picked up that book on Shackleton, what was it called?") -> (UA -> "Endurance")

It may feel like the conscious activities in this composite process (i.e. all the EOUAs) are actively driving the whole process, doing the hard work, but are they? Or is consciousness just in the passenger seat, noticing the experiences along the way, with all the hard work still being done unconsciously?

Presumably the conscious activities, i.e. all the EOUAs, play some role, but it is hard to imagine that the unconscious mental capacities couldn’t do it all themselves. The more I read about it and think about it, the more subtle I think the role of consciousness must be.
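To make the passenger-seat picture concrete, here is a toy sketch in code (purely illustrative, not a claim about how brains work; the function names and the list of recalled items are invented for the example). All of the actual work happens in the “unconscious” step; the “experience” is just a passive record of each step’s output.

```python
# Toy model of recall as a chain of (UA -> EOUA) steps.
# Each unconscious step produces the next recalled item;
# the "experience" merely notices the output.

def unconscious_step(state):
    """Hypothetical unconscious activity: do the real work of recall."""
    items = ["my laptop", "and my laptop has a mouse", "Endurance"]
    if state < len(items):
        return state + 1, items[state]  # advance, surface one item
    return state, None                  # nothing left to recall

def recall():
    state, experiences = 0, []
    while True:
        state, output = unconscious_step(state)  # UA: the hard work
        if output is None:
            break
        experiences.append(output)               # EOUA: passive noticing
    return experiences

print(recall())  # ['my laptop', 'and my laptop has a mouse', 'Endurance']
```

Note that deleting the `experiences` list wouldn’t change what `unconscious_step` computes at all, which is exactly the worry: the conscious record seems causally idle.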

Your last point is interesting. I’m still thinking about it. The difference between the model of reality we construct in our imagination and the model constructed by an algorithm is that the algorithmic model is presumably objective, like a photograph, whereas our models are subjective. Does that mean the algorithm isn’t conscious? Maybe it’s a primitive form of objective consciousness, which I say is impossible in humans. That would bode well for our future with artificial intelligence, wouldn’t it? An objectively conscious AI—one not suffering from the illusion of self—wouldn’t “decide our fate in a microsecond” after perceiving us as a threat to its own existence.

I suspect what distinguishes the human brain from, say, a monkey’s is largely a whole lot of neural circuitry for algorithmic information processing, which allows us to construct not only objective models of our environment, other people, societies etc. in order to run mental simulations that assist decision making, but also models of other people’s models, models of ourselves, etc. If this is the case, then humans are more conscious than many other animals in the sense that the richness of experience has been increased via the evolution of much more sophisticated unconscious awareness circuitry (together with the ability to directly experience aspects of the functioning of these circuits).

Information technology has already codified, or is in the process of codifying, some of this algorithmic part (pulled it out of our brains and represented it as software), but without the part of our neural circuitry required to have felt experiences. If any of my ramblings are remotely correct, then perhaps along with SkyNet we will need to deal with the phenomenon of Rocket Raccoon.

PS Yes, that really is my desk.

Attached is an example of one model within another.

 

Image Attachments
 
desk.jpg
 
 