A naturalistic approach to consciousness

 
silava
Total Posts: 4
Joined 14-05-2020
20 May 2020 13:38
 

I can still remember when I read the two blog posts “The Mystery of Consciousness I and II” in 2011. It is a very fascinating topic which immediately got me hooked. However, after reading both blog entries, I was left very frustrated. For all the discoveries science has made in the past centuries, there had been no visible progress on the hard problem of consciousness.

How is it that unconscious events can give rise to consciousness? Not only do we have no idea, but it seems impossible to imagine what sort of idea could fit in the space provided. (Sam Harris,  The Mystery of Consciousness II)

In some subconscious area of my brain these sentences started to circulate and never let go. Fast forward nine years, and let me now try to explain a new approach, starting from the obviously unconscious and proceeding via evolutionary processes to consciousness. Even though Sam is very pessimistic about this:

The problem, however, is that the distance between unconsciousness and consciousness must be traversed in a single stride, if traversed at all. (Sam Harris, The Mystery of Consciousness II)

First of all, I start with my assumptions, which I deem to be quite uncontroversial. They will be important during my argumentation:

1. I start from philosophical materialism. Only space, time, matter and energy exist (in a strictly physical sense).
2. It is too difficult for me to explain consciousness for an adult human brain. Therefore, I will try to explain the emergence of consciousness in the history of evolution.
3. Features of living beings need to provide an evolutionary benefit. Otherwise they would be eliminated.
4. The level of consciousness is on a continuum and therefore I assume consciousness evolved slowly and gradually.

For simplicity, I use the word consciousness as a synonym for self-awareness. This is only a rough account which cannot go into much detail.

Let me begin with the first organisms. They were probably quite simple and could only reproduce (and, compared with modern life, quite slowly and clumsily). I assume they did not even have a metabolism, because some supportive environment provided everything they needed. Early organisms can then be regarded as purely organic machines, simply obeying the natural laws. While this is definitely speculation on my side, it could be confirmed by lab experiments with simple artificial life. My point is that there is no need or justification to assume any consciousness for such simple organisms.

Things look more complex for, e.g., modern bacteria. They can already possess some primitive sensing mechanisms, e.g. to detect molecules and react by relocating or by ingesting other bacteria. This kind of behaviour is effectively hard coded in their genes. Learning and memory happen on a purely genetic level, since they don’t have a nervous system. Again, this is speculation on my side. Nevertheless, the behaviour of such organisms can still be explained with a purely mechanistic view and without them having any consciousness.

The next step is crucial. It is not about consciousness yet, it is about world models. This might sound like some digression first, but it is a very important concept to be able to explain consciousness later.

When progressing from single-celled organisms to multicellular organisms, some of them develop neurons. The go-to animal for scientists is Caenorhabditis elegans, a roundworm and one of the simplest organisms to have a nervous system. It has exactly 302 neurons, and scientists have mapped all of their connections in a connectome.
So what is the evolutionary benefit for Caenorhabditis elegans of having these neurons? What do they do? The simplest answer is: A nervous system builds a world model.
At a conceptual level, the nervous system gets input from sensory organs (e.g. temperature, smells, brightness/vision, sounds, tactile input, pain, ...), processes that input and generates some output for adequate reactions. Whether for a roundworm or a human, that concept is universal, but of course the world models differ vastly in detail.
Natural selection rewards more realistic world models and punishes errors in the world model. This creates a direct driving force to improve the world models up to some local optimum, depending on the species and its environment. As the word “model” implies, lots of unnecessary details are omitted.
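
To make that sense-process-act idea a bit more concrete, here is a minimal sketch in Python. It is purely illustrative and nothing more than my own toy example: the state variables, thresholds and smoothing factor are invented and make no claim about real nervous systems.

```python
# Minimal illustrative sketch of the loop: sense -> update world model -> act.
# All variables and thresholds are invented for illustration only.

class WorldModel:
    """A tiny internal model holding estimates of a few environmental quantities."""

    def __init__(self):
        self.estimated_temperature = 20.0
        self.estimated_food_direction = 0.0  # a crude belief, in radians

    def update(self, senses):
        # Blend new sensory evidence into the old estimates (simple smoothing).
        self.estimated_temperature += 0.5 * (senses["temperature"] - self.estimated_temperature)
        self.estimated_food_direction += 0.5 * (senses["food_gradient"] - self.estimated_food_direction)

    def choose_action(self):
        # The organism acts on its internal estimates, not on the raw input.
        if self.estimated_temperature > 30.0:
            return "retreat"
        return f"move_towards({self.estimated_food_direction:.2f})"


model = WorldModel()
model.update({"temperature": 24.0, "food_gradient": 1.2})
print(model.choose_action())  # -> move_towards(0.60)
```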

For a nervous system, learning and memory are now possible dynamically. They are no longer limited to hard-coded information in the genes. However, such world models are just some sort of abstraction. They do not exist in a physical sense. We cannot see them or verify their existence independently. Therefore, I am tempted to think even of Caenorhabditis elegans as an organic machine. World models help to understand its behaviour, but there is still no consciousness required.

From such a starting point, evolution allows gradual increments in the complexity of world models. Some bigger animals can afford to sustain more neurons, which in turn help them to survive in more complex and dynamic environments.

The world models start to differentiate better between the inanimate environment, predators and prey. Better sensory organs make it possible to identify risks and opportunities more reliably, in turn requiring more complex world models. This development is not inevitable, as shown by all the species without a nervous system. But in the animal kingdom a nervous system becomes the gold standard for adapting to dynamic environments and thus for surviving as a species.

Slowly but steadily, new concepts were introduced into the world models of larger animals. The behaviour of predators and of prey can be anticipated. Then social animals also anticipate the behaviour of their own group, raising the complexity even further. The properties of the individual itself are also considered more and more in the individual’s world model: What is my size, weight, dimensions, speed, power, endurance, ...? In parallel to all of this, the individual’s world model is used for simulations: if there are several possible alternatives, what are their likely outcomes? Simulations are risk-free, and evolution takes care that they are constantly checked against reality.
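
As a toy illustration of what I mean by simulating alternatives, here is a short sketch in the same spirit. The candidate actions, the predictor and the scores are all made up; the only point is the pattern of evaluating options inside the model, risk-free, before committing to one of them in reality.

```python
# Toy sketch: evaluate candidate actions inside the internal model (risk-free),
# then commit to the one with the best predicted outcome. All numbers invented.

def predicted_outcome(action, confidence):
    # Stand-in for the world model's prediction of how well an action would work.
    hypothetical_scores = {"flee": 0.9, "hide": 0.7, "fight": 0.3}
    return hypothetical_scores.get(action, 0.0) * confidence

def choose(actions, confidence):
    return max(actions, key=lambda a: predicted_outcome(a, confidence))

print(choose(["flee", "hide", "fight"], confidence=0.8))  # -> flee
```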

Social animals especially have a driving force towards more complex world models. Should I cooperate in this situation? What will the others do next? What is my current status in the group? Who has helped me in the past? Etc. These are very high-level simulations in a nervous system. The world model has already modelled all kinds of inanimate artefacts, other organisms, animal behaviour and strategies that comprise the environment. And probably somewhere around this point, the nervous system slowly takes the innovative next step:

The world model incorporates itself. Especially for social animals, there is not only friend #1, friend #2 and foe #3. There is also oneself. Each individual has its own world knowledge, partly separate from that of the others. For the world model to be effective in such a hypercomplex environment, it needs to consider itself by modelling itself as part of the world model. This is a brand new concept, effectively a model of a model.

The world model realizes that it exists, that it is separate from the environment, and it discovers part of its own properties (e.g. that it has agency and that it can simulate the outcome of alternatives). To rephrase Descartes: “I am thinking about myself, therefore I am.” Slowly, the world model becomes self-aware or conscious. As usual with evolution, that is probably a gradual process over many, many generations. The self-aware world model of a social species was now ready to use persuasion, bribery, retaliation, deception, lies and many other advanced concepts to influence other individuals.

Carried over to human brains, all of this can now be summarized as a layered architecture (a rough code sketch follows the list):

1. The base layer is the human brain with its firing neurons and synapses. Input is processed, the neuronal network gets updated and output is generated. This is the hardware, which can be studied in all technical detail.
2. The first abstraction layer is the world model each brain generates. It cannot be visualized or studied directly, because it does not exist in any physical manifestation. It might be compared to an operating system running on the brain hardware.
3. The second abstraction layer is the model of the world model, as created by the world model itself. This implies no endless recursion, just one abstraction upon another. Consciousness runs on the platform of the (necessary) world model. It is itself evolutionarily necessary to survive in a complex dynamic environment, but it also does not manifest in any physical form.
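
Just to illustrate these three layers as a structure (an analogy, not a claim about how brains actually work), here is a deliberately simplistic sketch; every class name and attribute is invented:

```python
# Illustrative analogy for the three layers; nothing here claims biological accuracy.

class NeuralHardware:
    """Layer 1: neurons and synapses turning stimuli into activity patterns."""
    def process(self, stimulus):
        return f"activity_pattern({stimulus})"

class WorldModel:
    """Layer 2: an abstraction 'running on' the hardware, holding beliefs about the world."""
    def __init__(self, hardware):
        self.hardware = hardware
        self.beliefs = {}
    def perceive(self, stimulus):
        self.beliefs[stimulus] = self.hardware.process(stimulus)

class SelfModel:
    """Layer 3: the world model modelling itself, one more abstraction rather than an endless recursion."""
    def __init__(self, world_model):
        self.world_model = world_model
    def reflect(self):
        return f"I currently hold {len(self.world_model.beliefs)} belief(s)."

hardware = NeuralHardware()
world_model = WorldModel(hardware)
world_model.perceive("red light")
print(SelfModel(world_model).reflect())  # -> I currently hold 1 belief(s).
```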

The human brain generates some kind of world of its own. It is strictly linked to the physical reality, but all our (by definition subjective) experiences happen exclusively within this brain-generated world.

Probably my deliberations were too simplistic. In effect, I am arguing against at least Sam Harris and Yuval Noah Harari, because after the first formulation of my ideas I found this quote from Harari in which he ridicules something like my idea:

“As the brain tries to create a model of its own decisions, it gets trapped in an infinite regression, and abracadabra! Out of this loop, consciousness pops out.” (Yuval Noah Harari, Homo Deus, page 99).

Maybe I have a blind spot here, but I still cannot see any error in my thinking. Therefore I am inviting feedback from this forum.

 
Poldano
Total Posts: 3429
Joined 26-01-2010
21 May 2020 00:55
 

What you are missing is the “what does it feel like to be something” aspect of consciousness. That is hard to explain when starting from physicalist assumptions, as you do.

A more recent development is a revival of classical idealism, or at least getting close to it. Bernardo Kastrup is one who I’ve read who goes whole-hog for idealism, and doesn’t hesitate to call it idealism. Others have not gone quite so far, instead opting for some form of panpsychism.

A hint with regard to your theory is that the things you start with, atoms and forces and such, the subject matter of physics, are themselves objects in a world model. To get at reality prior to any world model is very difficult; you have to start with the basic stuff of experience, which are called qualia, and are entirely subjective. Once there is intersubjective agreement on qualia, then there is the start of both a theory of objective reality and a world model.

Purely biochemical interactions may be thought of as the beginnings of intersubjectivity and of world models, even if there is no nervous system, because they involve an adaptation to local physical conditions. If the interactions are persistent enough to form similar instances of interactions, i.e., primitive replication, then a primitive world model may be said to have emerged. Such a world model is not in declarative form, but in normative form, because it comprises a set of recipes (for want of a better term) for interactions that enable the object that counts those interactions as attributes to persist and replicate.

World models in a declarative sense cannot happen until an additional symbolic layer is added on top of the interaction recipes. Think of this as a way of designating or referring to recipes and interactions that is distinct from executing the recipes or making the interactions happen. This could mark the beginning of what some people define as consciousness.
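
As a rough analogy in code (only an illustration, with made-up names): a recipe is something that can be executed, while a symbolic reference merely designates it without running it.

```python
# Rough analogy only: a "recipe" that can be executed, versus a symbolic
# reference that merely designates the recipe without running it.

def approach_nutrient():
    # Executing the recipe makes the interaction happen.
    return "moving up the nutrient gradient"

# A symbolic layer: names designate recipes without executing them.
repertoire = {"approach_nutrient": approach_nutrient}

reference = "approach_nutrient"      # referring (declarative)
result = repertoire[reference]()     # executing (making it happen)
print(reference, "->", result)
```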

 
 
burt
Total Posts: 16097
Joined 17-12-2006
21 May 2020 09:27
 
silava - 20 May 2020 01:38 PM

First of all, I start with my assumptions, which I deem to be quite uncontroversial. They will be important during my argumentation:

1. I start from philosophical materialism. Only space, time, matter and energy exist (in a strictly physical sense).
2. It is too difficult for me to explain consciousness for an adult human brain. Therefore, I will try to explain the emergence of consciousness in the history of evolution.
3. Features of living beings need to provide an evolutionary benefit. Otherwise they would be eliminated.
4. The level of consciousness is on a continuum and therefore I assume consciousness evolved slowly and gradually.

Poldano gave a good response, especially pointing out the qualia problem for any strictly materialist attempt to explain consciousness. I’ll second that, and say that my own preferred view is that consciousness must be assumed a priori (not human consciousness) in the same way that physicists assume space-time. Here, though, I’ll offer some comments on your assumptions.
1. Materialism has problems when asked “what is matter?” Eventually, if that question is pursued to the limit, it ends up having to admit that we don’t really know, but that it’s some form of energy. But is energy material? What about potential energy? And so on.
2. This isn’t an assumption, it’s a program.
3. This isn’t necessarily true (it is what’s called the adaptationist hypothesis). Sometimes a feature is a side-effect of some other feature. And some feature may be selected for but bring with it another aspect that is selected against. There are also spandrels, accidental features.
4. This is the gradualist approach to evolution. Other approaches seem more likely today: punctuated equilibrium, epochal evolution.

None of these comments go against what you’re trying to do, but they do point to the degree of complication involved. As I said, my view is that consciousness must be assumed a priori and the question reframed in terms of the evolution of nervous systems capable of supporting an internal world of the sort that we experience. That still has to deal with the question of qualia and the internal experiential nature of consciousness but I think it’s an easier way.

 
silava
Total Posts: 4
Joined 14-05-2020
23 May 2020 12:36
 

First of all, sorry for my idiosyncratic use of words. I lack some of the philosophical and scientific background, so I used, e.g., the word assumption quite freely.

Your feedback was very useful for me, much appreciated! I had already heard about the qualia issue from Thomas Nagel’s “What is it like to be a bat?” paper, but I didn’t quite grasp its importance. It sounded like some follow-up mystery to solve once the roots of consciousness had been discovered.

How about a short thought experiment on the qualia problem?

Imagine twins, genetically identical and with the same upbringing. You couldn’t distinguish them. There is only one difference between them: one twin sees as green everything that the other twin sees as red, and vice versa.

How could we determine which twin sees the world the correct way? Or is there even a correct way to see the colours? A computer monitor could easily simulate swapping green and red, but if one person were to describe how they experience the colour green versus the colour red, that experience couldn’t really be conveyed to another person.
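
Just to show how trivially the physical side of such a swap could be simulated (the file name here is only a placeholder), a few lines using the Pillow imaging library would do it:

```python
# Swap the red and green channels of an image, the kind of switch a monitor
# could simulate. "scene.png" is just a placeholder file name.
from PIL import Image

img = Image.open("scene.png").convert("RGB")
r, g, b = img.split()
Image.merge("RGB", (g, r, b)).save("scene_swapped.png")  # red <-> green
```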

My hunch is that humans experience colours very similarly. Maybe there are subtle genetic differences (and of course colour blindness). However, the qualia just have a default mode. Pain is painful and unpleasant, heat is hot and unpleasant in a different way, salt is salty, and everyone experiences that in some almost standardized way (at least as long as the brain is healthy and no drugs are involved). If only my ideas from the essay above could show a way to explain the qualia, that would be very reassuring for me.

 
burt
Total Posts: 16097
Joined 17-12-2006
23 May 2020 20:04
 
silava - 23 May 2020 12:36 PM

First of all, sorry for my idiosyncratic use of words. I lack some of the philosophical and scientific background, so I used, e.g., the word assumption quite freely.

Your feedback was very useful for me, much appreciated! I had already heard about the qualia issue from Thomas Nagel’s “What is it like to be a bat?” paper, but I didn’t quite grasp its importance. It sounded like some follow-up mystery to solve once the roots of consciousness had been discovered.

How about a short thought experiment on the qualia problem?

Imagine twins, genetically identical and with the same upbringing. You couldn’t distinguish them. There is only one difference between them: one twin sees as green everything that the other twin sees as red, and vice versa.

How could we determine which twin sees the world the correct way? Or is there even a correct way to see the colours? A computer monitor could easily simulate swapping green and red, but if one person were to describe how they experience the colour green versus the colour red, that experience couldn’t really be conveyed to another person.

My hunch is that humans experience colours very similarly. Maybe there are subtle genetic differences (and of course colour blindness). However, the qualia just have a default mode. Pain is painful and unpleasant, heat is hot and unpleasant in a different way, salt is salty, and everyone experiences that in some almost standardized way (at least as long as the brain is healthy and no drugs are involved). If only my ideas from the essay above could show a way to explain the qualia, that would be very reassuring for me.


There are lots of papers on qualia, and lots of people try to deal with them in one way or another. Try David Chalmers’s book on consciousness. Your thought experiment shows the problem: qualia are first-person experiences that are not publicly available other than through linguistic agreement. The materialistic attitude is that they are simply identical with neural states, but that doesn’t explain their “feel” or the actual experience.

There’s a famous philosophical thought experiment called Mary, the Color-Blind Neuroscientist. Mary is a neuroscientist who has studied color vision. The only thing is that she is completely color blind and sees only in black and white. Nevertheless, she knows everything that is known about color and about what happens in the brain when people experience color. Then one day, perhaps due to a new treatment method, Mary gains the ability to see in color. The question is: since she knows everything about the neural basis of color perception, does she learn anything new?

 
Antisocialdarwinist
Total Posts: 7015
Joined 08-12-2006
27 May 2020 16:46
 
silava - 20 May 2020 01:38 PM

First of all, I start with my assumptions, which I deem to be quite uncontroversial. They will be important during my argumentation:

1. I start from philosophical materialism. Only space, time, matter and energy exist (in a strictly physical sense).
2. It is too difficult for me to explain consciousness for an adult human brain. Therefore, I will try to explain the emergence of consciousness in the history of evolution.
3. Features of living beings need to provide an evolutionary benefit. Otherwise they would be eliminated.
4. The level of consciousness is on a continuum and therefore I assume consciousness evolved slowly and gradually.

What makes you so sure that consciousness is an evolved, adaptive trait? It might be a learned skill made possible by brains that evolved to become sufficiently complex, but for reasons other than consciousness. I.e., not to build a model of reality, but to enable what is essentially a stimulus/response machine to respond to an increasingly large array of different stimuli in more nuanced and refined ways—as would benefit social animals. That would explain a number of things, not least of which is why the model of reality constructed by consciousness seems to be so unreliable.

silava - 20 May 2020 01:38 PM

The human brain generates some kind of world of its own. It is strictly linked to the physical reality…

“Strictly?” I think not, although maybe you mean something different than I do by “strictly.”