A naturalistic approach to consciousness

 
silava
Total Posts: 4
Joined: 14-05-2020

20 May 2020 13:38
 

I can still remember reading the two blog posts “The Mystery of Consciousness I and II” in 2011. It is a fascinating topic which immediately got me hooked. However, after reading both entries I was left very frustrated: for all the discoveries science has made over the past centuries, there had been no visible progress on the hard problem of consciousness.

How is it that unconscious events can give rise to consciousness? Not only do we have no idea, but it seems impossible to imagine what sort of idea could fit in the space provided. (Sam Harris,  The Mystery of Consciousness II)

In some subconscious area of my brain these sentences started to circulate and never let go. Fast forward nine years, and let me now try to explain a new approach, starting from the obviously unconscious and proceeding via evolutionary processes to consciousness, even though Sam is very pessimistic about this:

The problem, however, is that the distance between unconsciousness and consciousness must be traversed in a single stride, if traversed at all. (Sam Harris, The Mystery of Consciousness II)

First of all, I start with my assumptions, which I deem quite uncontroversial. They will be important in my argument:

1. I start from philosophical materialism. Only space, time, matter and energy exist (in a strictly physical sense).
2. Explaining consciousness in an adult human brain is too difficult for me. Therefore, I will try to explain the emergence of consciousness over the history of evolution.
3. Features of living beings need to provide an evolutionary benefit. Otherwise they would be eliminated.
4. The level of consciousness lies on a continuum, and therefore I assume consciousness evolved slowly and gradually.

For simplicity, I use the word consciousness as a synonym for self-awareness. What follows is only a rough sketch which cannot go into much detail.

Let me begin with the first organisms. They were probably quite simple and could do little more than reproduce (and, compared with modern life, quite slowly and clumsily). I assume they had no metabolism at all, because some supportive environment provided everything they needed. Early life can then be regarded as purely organic machinery, simply obeying the laws of nature. While this is definitely speculation on my side, it could be tested by lab experiments with simple artificial life. My point is that there is no need or justification to assume any consciousness for such simple organisms.

Things look more complex for, e.g., modern bacteria. They can already possess primitive sensing mechanisms, e.g. to detect molecules and react by relocating or by ingesting other cells. This kind of behaviour is effectively hard-coded in their genes; learning and memory happen on a purely genetic level, since they have no nervous system. Again, this is speculation on my side. Nevertheless, the behaviour of such organisms can still be explained from a purely mechanistic view, without them having any consciousness.

The next step is crucial. It is not about consciousness yet; it is about world models. This might sound like a digression at first, but it is a very important concept for explaining consciousness later.

When progressing from single-cell to multicellular organisms, some of them develop neurons. The go-to animal for scientists is Caenorhabditis elegans, a roundworm (nematode) and one of the simplest organisms to have a nervous system. It has exactly 302 neurons (in the adult hermaphrodite), and scientists have mapped all of their connections in a connectome.
So what is the evolutionary benefit for Caenorhabditis elegans of having these neurons? What do they do? The simplest answer is: a nervous system builds a world model.
At a conceptual level, the nervous system gets input from sensory organs (e.g. temperature, smells, brightness/vision, sounds, tactile input, pain, ...), processes that input and generates output for adequate reactions. Whether in a worm or a human, that concept is universal, but of course the world models differ vastly in detail.
Natural selection rewards more realistic world models and punishes errors in the world model. This creates a direct driving force to improve the world models up to some local optimum, depending on the species and its environment. As the word “model” implies, lots of unnecessary details are omitted.
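To make this concrete, here is a minimal sketch in Python of what I mean by such an input-process-output loop. The sensory values, state variables and reaction rules are just illustrative placeholders of my own, not a claim about any real nervous system:

```python
# Toy world model: sensory input updates an internal state,
# and the current state (not the raw stimulus) determines the reaction.
class WorldModel:
    def __init__(self):
        # crude internal estimates of the environment (placeholder values)
        self.state = {"temperature": 20.0, "food_nearby": False}

    def update(self, sensory_input):
        # fold new sensory readings into the internal estimate
        self.state.update(sensory_input)

    def react(self):
        # simple hard-wired policy over the modelled world
        if self.state["food_nearby"]:
            return "move towards food"
        if self.state["temperature"] > 30.0:
            return "retreat"
        return "wander"


worm = WorldModel()
worm.update({"temperature": 22.5, "food_nearby": True})
print(worm.react())  # -> "move towards food"
```

The point is only that “world model” means nothing more mysterious here than an internal state that sits between stimulus and reaction.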

With a nervous system, learning and memory become dynamically possible; they no longer equate to hard-coded information in the genes. However, such world models are just a kind of abstraction. They do not exist in a physical sense; we cannot see them or verify their existence independently. Therefore, I am tempted to think of even Caenorhabditis elegans as an organic machine. World models help to understand its behaviour, but there is still no consciousness required.

From such a starting point, evolution allows gradual increases in the complexity of world models. Bigger animals can afford to sustain more neurons, which in turn help them survive in more complex and dynamic environments.

The world models start to differentiate better between the inanimate environment, predators and prey. Better sensory organs allow risks and opportunities to be identified more reliably, in turn requiring more complex world models. This is not an inevitable development, as shown by all the species without a nervous system. But in the animal kingdom, a nervous system becomes the gold standard for adapting to dynamic environments and thus for surviving as a species.

Slowly but steadily, new concepts were introduced into the world models of larger animals. The behaviour of predators and of prey can be anticipated. Social animals then also anticipate the behaviour of their own group, raising the complexity even further. The properties of the individual itself are also considered more and more in its own world model: What is my size, weight, reach, speed, power, endurance, ...? In parallel to all of this, the individual’s world model is used for simulations: if there are several possible alternatives, what is their likely outcome? Simulations are risk-free, and evolution ensures they are constantly checked against reality.
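The same toy picture can be extended to show what I mean by risk-free simulation: the model predicts the outcome of each alternative internally, and only the most promising one is acted out. The actions and payoff numbers below are made up, purely to illustrate the idea:

```python
# Toy "simulation": score each candidate action inside the world model
# and commit to the one with the best predicted outcome.
def predicted_payoff(state, action):
    # made-up predictions derived from the current world-model state
    payoffs = {
        "flee": 0.9 if state["predator_near"] else 0.1,
        "forage": 0.8 if state["food_near"] else 0.2,
        "rest": 0.3,
    }
    return payoffs[action]

def simulate_and_choose(state, actions):
    # evaluate every alternative in the model, without real-world risk
    return max(actions, key=lambda action: predicted_payoff(state, action))

state = {"predator_near": False, "food_near": True}
print(simulate_and_choose(state, ["flee", "forage", "rest"]))  # -> "forage"
```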

Social animals in particular have a driving force towards more complex world models. Should I cooperate in this situation? What will the others do next? What is my current status in the group? Who has helped me in the past? And so on. These are very high-level simulations in a nervous system. The world model already represents all kinds of inanimate objects, other organisms, and the behaviours and strategies that make up the environment. And probably somewhere around this point, the nervous system slowly takes the innovative next step:

The world model incorporates itself. Especially for social animals, there is not only friend #1, friend #2 and foe #3; there is also oneself. Each individual has its own world knowledge, partly separate from that of the others. For the world model to be effective in such a hypercomplex environment, it needs to take itself into account by modelling itself as part of the world model. This is a brand-new concept, effectively a model of a model.

The world model realizes that it exists, that it is separate from the environment, and it discovers some of its own properties (e.g. that it has agency and that it can simulate the outcomes of alternatives). To rephrase Descartes: “I am thinking about myself, therefore I am.” Slowly, the world model becomes self-aware, or conscious. As usual with evolution, this is probably a slow process, progressing over many, many generations. The self-aware world model of a social species was now ready to use persuasion, bribery, retaliation, deception, lies and many other advanced tactics to influence other individuals.

Carried over to human brains, all of this can now be summarized as a layered architecture (a rough code sketch follows the list):

1. The base layer is the human brain with its firing neurons and synapses. Input is processed, the neural network is updated, and output is generated. This is the hardware, which can be studied in full technical detail.
2. The first abstraction layer is the world model each brain generates. It cannot be visualized or studied directly, because it has no physical manifestation of its own. It might be compared to an operating system running on the brain hardware.
3. The second abstraction layer is the model of the world model, created by the world model itself. This implies no endless recursion, just one abstraction on top of another. Consciousness runs on the platform of the (necessary) world model. It is itself evolutionarily necessary for surviving in a complex, dynamic environment, but it likewise does not manifest in any physical form.
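As a rough sketch of the second abstraction layer (again just my own illustration with made-up names), the world model can simply hold an entry for its own carrier, with estimated properties, next to the entries for friends and foes:

```python
# Toy layered picture: the world model holds entities of the environment,
# and the "self model" is just one more entity inside it -- a model of a model.
class WorldModel:
    def __init__(self, owner):
        self.owner = owner
        self.entities = {}  # friend #1, foe #3, inanimate objects, ...

    def add_entity(self, name, properties):
        self.entities[name] = properties

    def add_self_model(self, properties):
        # second abstraction layer: the model represents its own carrier
        # as an entity with agency and the ability to simulate outcomes
        self.add_entity(self.owner, {**properties,
                                     "has_agency": True,
                                     "can_simulate_outcomes": True})


model = WorldModel("me")
model.add_entity("friend_1", {"cooperative": True})
model.add_self_model({"size": "medium", "status_in_group": "unknown"})
print(model.entities["me"])  # the model's representation of itself
```

Nothing loops forever here; the self-entry is just one more node in the model, which is exactly why this is one extra abstraction rather than an endless recursion.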

The human brain generates a kind of world of its own. It is strictly linked to physical reality, but all our (by definition subjective) experiences happen exclusively within this brain-generated world.

Probably my deliberations are too simplistic. In effect, I am arguing against at least Sam Harris and Yuval Noah Harari, because after first formulating my ideas I found this quote from Harari, in which he ridicules something like my idea:

“As the brain tries to create a model of its own decisions, it gets trapped in an infinite digression, and abracadabra! Out of this loop, consciousness pops out.” (Yuval Noah Harari, Homo Deus, page 99).

Maybe I have a blind spot here, but I still cannot see any error in my thinking. Therefore I am inviting feedback from this forum.

 
Poldano
Total Posts: 3465
Joined: 26-01-2010

21 May 2020 00:55
 

What you are missing is the “what does it feel like to be something” aspect of consciousness. That is hard to explain when starting from physicalist assumptions, as you do.

A more recent development is a revival of classical idealism, or at least something close to it. Bernardo Kastrup is one author I’ve read who goes whole-hog for idealism and doesn’t hesitate to call it idealism. Others have not gone quite so far, instead opting for some form of panpsychism.

A hint with regard to your theory is that the things you start with, atoms and forces and such, the subject matter of physics, are themselves objects in a world model. To get at reality prior to any world model is very difficult; you have to start with the basic stuff of experience, which are called qualia, and are entirely subjective. Once there is intersubjective agreement on qualia, then there is the start of both a theory of objective reality and a world model.

Purely biochemical interactions may be thought of as the beginnings of intersubjectivity and of world models, even if there is no nervous system, because they involve an adaptation to local physical conditions. If the interactions are persistent enough to form similar instances of interactions, i.e., primitive replication, then a primitive world model may be said to have emerged. Such a world model is not in declarative form, but in normative form, because it comprises a set of recipes (for want of a better term) for interactions that enable the object that counts those interactions as attributes to persist and replicate.

World models in a declarative sense cannot happen until an additional symbolic layer is added to the interaction recipes. Think of this as a way of designating or referring to recipes and interactions that is distinct from executing the recipes or making the interactions happen. This could mark the beginning of what some people define as consciousness.
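A minimal way to picture that distinction in code (purely my own illustration, with invented recipe names): a recipe is something that can be executed, while a symbol merely designates the recipe without running it.

```python
# Toy illustration: interaction "recipes" are executable procedures;
# the symbolic layer only names and refers to them.
def absorb_nutrient():
    return "nutrient absorbed"

def repel_toxin():
    return "toxin repelled"

# Symbolic layer: names designate recipes without executing them.
recipes = {"feed": absorb_nutrient, "defend": repel_toxin}

reference = recipes["feed"]  # designating/referring -- nothing happens yet
outcome = reference()        # executing the recipe -- the interaction happens
print(outcome)               # -> "nutrient absorbed"
```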

 
 
burt
Total Posts: 16140
Joined: 17-12-2006

21 May 2020 09:27
 
silava - 20 May 2020 01:38 PM

First of all, I start with my assumptions, which I deem quite uncontroversial. They will be important in my argument:

1. I start from philosophical materialism. Only space, time, matter and energy exist (in a strictly physical sense).
2. Explaining consciousness in an adult human brain is too difficult for me. Therefore, I will try to explain the emergence of consciousness over the history of evolution.
3. Features of living beings need to provide an evolutionary benefit. Otherwise they would be eliminated.
4. The level of consciousness lies on a continuum, and therefore I assume consciousness evolved slowly and gradually.

Poldano gave a good response, especially pointing out the qualia problem for any strictly materialist attempt to explain consciousness. I’ll second that, and say that my own preferred view is that consciousness must be assumed a priori (not human consciousness) in the same way that physicists assume space-time. Here, though, I’ll offer some comments on your assumptions.
1. Materialism has problems when asked “what is matter?” Eventually, if that question is pursued to the limit, it ends up having to admit that we don’t really know, but that it’s some form of energy. But is energy material? What about potential energy? And so on.
2. This isn’t an assumption, it’s a program.
3. This isn’t necessarily true (it is what’s called the adaptationist hypothesis). Sometimes a feature is a side-effect of some other feature. And some feature may be selected for but bring with it another aspect that is selected against. There are also spandrels, accidental features.
4. This is the gradualist approach to evolution. Other approaches seem more likely today: punctuated equilibrium, epochal evolution.

None of these comments go against what you’re trying to do, but they do point to the degree of complication involved. As I said, my view is that consciousness must be assumed a priori and the question reframed in terms of the evolution of nervous systems capable of supporting an internal world of the sort that we experience. That still has to deal with the question of qualia and the internal experiential nature of consciousness but I think it’s an easier way.

 
silava
Total Posts: 4
Joined: 14-05-2020

23 May 2020 12:36
 

First of all, sorry for my idiosyncratic use of words. I lack a philosophical and scientific background, so I used, e.g., the word “assumption” quite freely.

Your feedback was very useful to me, much appreciated! I had already heard about the qualia issue from Thomas Nagel’s “What Is It Like to Be a Bat?” paper, but I didn’t quite get its importance. It sounded like some follow-up mystery to solve once the roots of consciousness had been discovered.

How about a short thought experiment on the qualia problem?

Imagine twins, genetically identical and with the same upbringing. You couldn’t distinguish them. There is only one difference between them: one twin sees in green everything that the other twin sees in red, and vice versa.

How could we tell which twin sees the world the correct way? Or is there even a correct way to see the colours? A computer monitor could easily simulate switching green and red, but if one person were to describe how they experience the colour green versus the colour red, that couldn’t really be conveyed to another person.
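The monitor part of this could be mimicked with a few lines of Python, assuming the Pillow imaging library and a placeholder image file. Of course, this only swaps the physical channels; it says nothing about how the result is experienced:

```python
# Swap the red and green channels of an image, as a stand-in for how the
# "swapped" twin might have the world presented to them on a monitor.
from PIL import Image  # Pillow library (assumed to be installed)

img = Image.open("photo.png").convert("RGB")  # "photo.png" is a placeholder
r, g, b = img.split()
swapped = Image.merge("RGB", (g, r, b))       # red and green exchanged
swapped.save("photo_swapped.png")
```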

My hunch is that humans experience colours very similarly. Maybe there are subtle genetic differences (and of course colour blindness). However, the qualia just have a default mode. Pain is painful and unpleasant, heat is hot and unpleasant in a different way, salt is salty, and everyone experiences these in an almost standardized way (at least as long as the brain is healthy and no drugs are involved). If only my ideas from the essay above showed a way to explain the qualia, that would be very reassuring for me.

 
burt
Total Posts: 16140
Joined: 17-12-2006

23 May 2020 20:04
 
silava - 23 May 2020 12:36 PM

First of all, sorry for my idiosyncratic use of words. [...] If only my ideas from the essay above showed a way to explain the qualia, that would be very reassuring for me.


There are lots of papers on qualia, and lots of people try to deal with them in one way or another. Try David Chalmers’s book on consciousness. Your thought experiment shows the problem: qualia are first-person experiences that are not publicly available other than through linguistic agreement. The materialist attitude is that they are simply identical with neural states, but that doesn’t explain their “feel” or the actual experience. There’s a famous philosophical thought experiment called Mary, the Color-Blind Neuroscientist. Mary is a neuroscientist who has studied color vision. The only thing is that she is completely color blind and sees only in black and white. Nevertheless, she knows everything known about color and about the neurology of the brain when people experience color. Then one day, perhaps due to a new treatment method, Mary gains the ability to see in color. The question is: since she knows everything about the neural basis of color perception, does she learn anything new?

 
Antisocialdarwinist
Total Posts: 7087
Joined: 08-12-2006

27 May 2020 16:46
 
silava - 20 May 2020 01:38 PM

First of all, I start with my assumptions, which I deem quite uncontroversial. They will be important in my argument:

1. I start from philosophical materialism. Only space, time, matter and energy exist (in a strictly physical sense).
2. Explaining consciousness in an adult human brain is too difficult for me. Therefore, I will try to explain the emergence of consciousness over the history of evolution.
3. Features of living beings need to provide an evolutionary benefit. Otherwise they would be eliminated.
4. The level of consciousness lies on a continuum, and therefore I assume consciousness evolved slowly and gradually.

What makes you so sure that consciousness is an evolved, adaptive trait? It might be a learned skill made possible by brains that evolved to become sufficiently complex, but for reasons other than consciousness. I.e., not to build a model of reality, but to enable what is essentially a stimulus/response machine to respond to an increasingly large array of different stimuli in more nuanced and refined ways—as would benefit social animals. That would explain a number of things, not least of which is why the model of reality constructed by consciousness seems to be so unreliable.

silava - 20 May 2020 01:38 PM

The human brain generates a kind of world of its own. It is strictly linked to physical reality…

“Strictly?” I think not, although maybe you mean something different than I do by “strictly.”

 
 
weird buffalo
Total Posts: 182
Joined: 19-06-2020

21 June 2020 09:27
 
burt - 23 May 2020 08:04 PM

There are lots of papers on qualia, and lots of people try to deal with them in one way or another. Try David Chalmers’s book on consciousness. Your thought experiment shows the problem: qualia are first-person experiences that are not publicly available other than through linguistic agreement. The materialist attitude is that they are simply identical with neural states, but that doesn’t explain their “feel” or the actual experience. There’s a famous philosophical thought experiment called Mary, the Color-Blind Neuroscientist. Mary is a neuroscientist who has studied color vision. The only thing is that she is completely color blind and sees only in black and white. Nevertheless, she knows everything known about color and about the neurology of the brain when people experience color. Then one day, perhaps due to a new treatment method, Mary gains the ability to see in color. The question is: since she knows everything about the neural basis of color perception, does she learn anything new?

I’m confused.  Is the argument that Mary seeing red is not a brain state?  Or is it arguing that learning something new is not a brain state?  I’m not seeing the nonmaterial explanation for what is happening.

 
burt
Total Posts: 16140
Joined: 17-12-2006

21 June 2020 10:49
 
weird buffalo - 21 June 2020 09:27 AM

I’m confused.  Is the argument that Mary seeing red is not a brain state?  Or is it arguing that learning something new is not a brain state?  I’m not seeing the nonmaterial explanation for what is happening.

The argument is that Mary knows everything there is to know about brain states, in the abstract. But when she sees red, as in having the qualitative experience, either she learns something new (what it’s like) or she does not. It doesn’t matter that having the experience is a brain state; what matters is the nature of her experience. In other words, does knowing everything about a brain state, other than actually experiencing that state, mean that you also know what the experience would be like? It’s a step beyond asking whether telling a chemist everything about the chemistry of vanilla ice cream allows them to know what vanilla ice cream would taste like.

 
weird buffalo
Total Posts: 182
Joined: 19-06-2020

21 June 2020 11:32
 

You brought up the example as a way of explaining how the experience was not material.  You haven’t demonstrated that.  Everything involved is still material.

If the experience of “red” is still a brain state, then the fundamental bits and pieces involved are still all material.

 
Poldano
Total Posts: 3465
Joined: 26-01-2010

22 June 2020 02:03
 
weird buffalo - 21 June 2020 11:32 AM

You brought up the example as a way of explaining how the experience was not material.  You haven’t demonstrated that.  Everything involved is still material.

If the experience of “red” is still a brain state, then the fundamental bits and pieces involved are still all material.

I’m assuming you are using the word “material” in conjunction with the materialist philosophical position.

Materialism is incomplete if it does not include complete explanations of entirely subjective phenomena, even if the phenomena themselves are not verifiable by anyone but the subject even in principle.

 

 
 
weird buffalo
Total Posts: 182
Joined: 19-06-2020

22 June 2020 08:16
 
Poldano - 22 June 2020 02:03 AM

I’m assuming you are using the word “material” in conjunction with the materialist philosophical position.

Materialism is incomplete if it does not include complete explanations of entirely subjective phenomena, even if the phenomena themselves are not verifiable by anyone but the subject even in principle.

This seems like a non sequitur.  I asked how the Mary example included anything that was nonmaterial.  Either it demonstrates something nonmaterial, or it doesn’t.

Do you think the Mary example demonstrates something that is nonmaterial?

 
burt
Total Posts: 16140
Joined: 17-12-2006

22 June 2020 10:53
 
weird buffalo - 21 June 2020 11:32 AM

You brought up the example as a way of explaining how the experience was not material.  You haven’t demonstrated that.  Everything involved is still material.

If the experience of “red” is still a brain state, then the fundamental bits and pieces involved are still all material.

You miss the point. Does Mary learn anything new from this experience? That is the question. If you answer no, then you are taking an identity position; if you answer yes, then you are agreeing that there is something about direct experience that cannot be explained simply by neural states, even if the experience arises from those states. From your comment it seems that your answer is no, so if you were a neuroscientist you would then have to explain how it is that the experience of seeing red can arise from neural interactions alone. The cop-out is to say “well, it just does.”

 
weird buffalo
Total Posts: 182
Joined: 19-06-2020

22 June 2020 14:45
 
burt - 22 June 2020 10:53 AM

You miss the point. Does Mary learn anything new from this experience? That is the question. If you answer no, then you are taking an identity position; if you answer yes, then you are agreeing that there is something about direct experience that cannot be explained simply by neural states, even if the experience arises from those states. From your comment it seems that your answer is no, so if you were a neuroscientist you would then have to explain how it is that the experience of seeing red can arise from neural interactions alone. The cop-out is to say “well, it just does.”

You haven’t demonstrated that this is true.  You are asserting it, but you have not given evidence that direct experience is not simply a neural state.

Reading about a neural state would produce one subset of neural states. Seeing a color will produce a different subset of neural states. The fact that these are different subsets of neural states should be hardly surprising. It’s about as novel as saying that seeing the color red and having sex produce different neural states. If I tried to pretend that that was profound, you’d laugh me out of the thread.

Edit: relying on the argument that “if materialists can’t explain it, therefore nonmaterial!” is the same reasoning creationists give against evolution and abiogenesis.

[ Edited: 22 June 2020 15:49 by weird buffalo]
 
burt
Total Posts: 16140
Joined: 17-12-2006

22 June 2020 18:59
 
weird buffalo - 22 June 2020 02:45 PM

You haven’t demonstrated that this is true.  You are asserting it, but you have not given evidence that direct experience is not simply a neural state.

Reading about a neural state would produce one subset of neural states. Seeing a color will produce a different subset of neural states. The fact that these are different subsets of neural states should be hardly surprising. It’s about as novel as saying that seeing the color red and having sex produce different neural states. If I tried to pretend that that was profound, you’d laugh me out of the thread.

Edit: relying on the argument that “if materialists can’t explain it, therefore nonmaterial!” is the same reasoning creationists give against evolution and abiogenesis.



You miss the point: I don’t have to give evidence one way or the other; the entire Mary thought experiment is intended to raise the question, not necessarily provide an answer. It differentiates between identity positions, i.e., there is nothing extra that Mary learns because she already knew everything about the neural states involved in seeing colors, and panpsychist positions, i.e., Mary learns something new that no understanding of only the neural states could bring. In other words, she already knows all that is involved in the neural states involved in seeing red; she just hasn’t seen it (had the actual experience) herself, so she has not had that particular neural state before. So the question is: when she does experience that state, does she learn anything that she didn’t already know?

 
weird buffalo
Total Posts: 182
Joined: 19-06-2020

22 June 2020 21:11
 
burt - 22 June 2020 06:59 PM


You miss the point: I don’t have to give evidence one way or the other; the entire Mary thought experiment is intended to raise the question, not necessarily provide an answer. It differentiates between identity positions, i.e., there is nothing extra that Mary learns because she already knew everything about the neural states involved in seeing colors, and panpsychist positions, i.e., Mary learns something new that no understanding of only the neural states could bring. In other words, she already knows all that is involved in the neural states involved in seeing red; she just hasn’t seen it (had the actual experience) herself, so she has not had that particular neural state before. So the question is: when she does experience that state, does she learn anything that she didn’t already know?

Okay, so if this example is not providing us with an answer, then it does not support the idea that there is a nonmaterial answer to the question.

I also think that the example is conflating the map with the place. If I study a map of China without ever having been there, I will gain new information once I actually go to China. The example is taking two separate categories of information and equivocating between them in order to make a deepity.

[ Edited: 22 June 2020 21:14 by weird buffalo]
 
burt
Total Posts: 16140
Joined: 17-12-2006

22 June 2020 22:32
 
weird buffalo - 22 June 2020 09:11 PM

Okay, so if this example is not providing us with an answer, then it does not support the idea that there is a nonmaterial answer to the question.

I also think that the example is conflating the map with the place. If I study a map of China without ever having been there, I will gain new information once I actually go to China. The example is taking two separate categories of information and equivocating between them in order to make a deepity.

Well, it’s a thought experiment that has excited a great deal of analysis from professionals who work in this area; perhaps they are wrong, but I doubt it. You’re right, it’s taking two different categories of information, but what it’s asking is whether or not one category can be reduced to the other. What it does is raise the question: is there something about the nature of an experience that cannot be explained only by the neural events that produce the experience? There are a bunch of other thought experiments of this sort around, and the way people take them depends on their philosophical commitments. Paul Churchland, at least 20 years ago, which is the last time I read anything of his, believed that “I see red” was just shorthand for an extended description of all the neurons that were firing during the experience. David Chalmers thinks that qualities are somehow embedded in nature or in consciousness. Roger Penrose thinks that mathematical ideas exist in a Platonic reality that is non-material. Nobody knows for sure.

 