1. I, too, believe that conscious experience motivates (or fails to), but how does it do it? Does it reach back into the brain in order to affect it? As follows ...
Brain → pleasure → Brain
Surely, you would have to posit some mechanism whereby this strange, ethereal phenomenon of consciousness impressed itself upon neurons.
But it’s hard enough trying to figure out this: Brain → pleasure
2. As you probably know, a zombie robot without consciousness could, in theory, say and do everything that humans do; i.e., consciousness does not, in theory, appear to be necessary to motivate people.
What if, in order to say and do everything that humans do, a zombie robot had to become conscious? We don’t yet know how to disprove this, since consciousness as we currently define it depends very much on subjectivity, and subjectivity is still a scientific no-man’s-land. Similarly, we cannot prove that an unconscious zombie would be able to say and do everything that humans do. Assuming that it can assumes that everything humans do is (1) known and (2) computable. Everything that humans do is not yet known, nor has it been proven that everything humans do is computable even in principle. That’s the crux of the philosophical zombie problem, right there.
Consciousness has an important function: it is the motivational faculty of the brain. To assume – as Christof Koch does – that consciousness is useless is, quite frankly, absurd. Absurd assumptions lead to absurd conclusions. Wouldn’t you agree?
I’m not sure if I should comment on this or not. Awareness is an evolutionary adaptation to overcome problems with the way information is organized in the brain. That information structure is itself an adaptation to overcome speed limitations: without it, the brain would be too slow, so an organism would not be very fit when dealing with other organisms with brains. The structure then creates problems of its own, and awareness seems to be the simplest solution. It’s possible that some other theoretical solution exists, but I haven’t found one. You can push awareness to considerable complexity as long as you avoid self-reference, which causes its own difficulties. Eventually, though, you reach a point where self-awareness is required to get past these limitations. Self-awareness means seeing yourself as separate from the environment rather than just as an aspect of it.
Emotions do seem to be directly related to awareness. In a simple, non-aware organism you can directly link brain states to actions. To gain more flexibility, however, these links are expanded into contextual states supplemented with emotion. Emotions work primarily by bypassing abstract manipulation; in other words, if emotions were only abstract constructs, you could ignore them.
If you consider an organism in terms of maximizing opportunities, then that organism has an advantage if it is able to create new decision classes. However, a brain that can do this also risks becoming non-functional: it will tend either to become trapped in a single context or to bounce from context to context. The solution seems to be the development of a conscious cycle that maintains focus and keeps the amount of information small enough to handle. The same flexibility that allows you to create new decision classes also lets you form classes about yourself, so self-awareness is unavoidable. The phenomenon that we call consciousness is the part that actively maintains the current contextual state.
This probably sounds like gibberish. Basically, I see awareness and consciousness as necessary adaptations that allow expanded decision choices while maintaining adequate brain speed. I don’t see anything mystical in it, nor does it appear to be simply an emergent property of greater complexity.
We may be barking up the same tree. I’ll try to pay attention to your posts in the future.
I have a slightly different take on emotions than what you stated (and I made bold). I think emotions evolutionarily preceded abstract manipulation, as evidenced by which brain structures are associated with emotions and which with abstract manipulation. Given the earlier evolution of emotions, I think of abstract manipulation as a contingent replacement for them, useful when enough time is available for the extended processing it requires, making its greater precision and flexibility a non-life-threatening alternative. In this view, abstract manipulation very likely coevolved with increased memory, increased energy expenditure by the brain at the expense of muscular strength, and the human addiction to explanation.
The more I’ve thought about this, the more I’ve been forced to the conclusion that the panpsychists are at least partially right; i.e., that consciousness as we think of it is the weaving together of certain primitives, lots of them, that are inherent to all of nature. Without that, we’re stuck defending something like strong emergence or, stranger still, the idea that in a dead-matter universe there’s something distinctly special about neurons. Even if Penrose and Hameroff were correct with their Orch OR suggestions regarding microtubules, that still doesn’t make neurons as special as the suggestion that wholly nonexistent consciousness comes into existence when you have your first neuron, cluster of neurons, or the right quantity of neurons to spin up a particular structure. In effect, that’s like saying there’s a finite number you can multiply by zero and get a non-zero result.
As for my own take, I find myself increasingly somewhere between functionalism and ontic structural realism. Also, if I were really to go off the deep end, I do consider the possibility from time to time that our planet could operate in some capacity as a very bizarre sort of solid-state hard drive.
Thinking about it a little more, even after listening to AMA10:
In the OP, someone mentioned the idea that consciousness probably has to confer an evolutionary advantage, because if it didn’t, the laws of biological parsimony wouldn’t allow it to persist, let alone select for it.
I have a further problem with the basis of this. Even bypassing the question of what consciousness actually is, the suggestion here is that it’s solving a problem. Emotion and pain seem to be awareness of misalignments of sorts, internal to the perceived well-being of the living structure.
In a nature that, we assume, is not itself conscious, consciousness would be solving a problem that only consciousness (or the existence of it) creates. That’s a really bizarre quandary to be in, and I can’t help but think that there’s a necessary piece of the puzzle being missed or overlooked.