Tired of illusions

 
prismo
Total Posts:  8
Joined  25-11-2018
25 November 2018 12:37
 

The following is an attempt to reconcile some philosophical “mind”-worms in a landscape of ideas that can seem to either reference seemingly unfounded mysticism, ignore common experience, or paralyze with nihilism.
For example:
You are the universe and the universe is you.
  - Really? It doesn’t quite feel that way. Am I missing something?
There is a spirit within your body that endows you with free will and leaves your body when you die.
  - That seems anachronistic.
Free will is an illusion.
  - Maybe true, but if so, is everyday life an annoyingly meaningless existence?
Everything is conscious.
  - Again, maybe it’s true, but stating this so broadly seems to diminish consciousness to meaninglessness.

Everything that we perceive, recognize, think, and do is an abstraction. There is an objective reality; that is, there is something beyond our senses. But it has no intrinsic perforated dots; every abstraction involves a partitioning of that all-encompassing reality, and every partition is in a sense arbitrary and subjective. There are different levels of abstraction. As humans, we have certain inescapable partitioning tendencies which are linked to the core of our biological machinery. Because we have these common biological tendencies, we have a ground consensus of abstraction which is the basis of our languages. It is by these abstractions that we can build a system of truth and falseness, to evaluate abstractions with a lens of internal consistency. From here on, we will assume this base level of human internal consistency which, while we would be hard-pressed to define it completely, we can innately recognize.

With this base level of abstraction, there is a “self” that we carve out of the universe. Again, this carving-out is arbitrary, and what is carved out is not self-sufficient. There can be no “self” that persists without its surroundings, and there is no “correct” separation of the two. Every concept, including “correctness”, is subjective at a deeper level; however, this is not to say that we cannot make any separation, for we do at nearly every moment. There is a subset of the universe that we have the ability to sense within the limits of our consciousness, and there is a subset of the universe that we have the ability to affect. The union of these two perceived sets is what we generally call our “self”. That is, if we cannot conceive of perceiving and changing something, then it is outside of our general concept of “self”. This definition becomes nebulous with the concept of the subconscious, which we generally attribute to the self; but we can account for this with the idea that we can conceive of a mechanism (albeit one not currently known) by which we could bring the subconscious into conscious purview and control. Again, these limits are arbitrary, but mandated by our biological sensing hardware. While our current understanding seems to indicate that the universe could be completely continuous, our human perception does seem to have bounds and make distinctions.

Now that we have established the “self” as a subset of reality, we can try to establish what we generally mean by “true” and “false”. When we say something is “true”, we generally mean that the abstraction we have created in our minds represents something that exists or has existed outside of our minds, such that in the right place or time we could look upon it and activate that respective concept in our mind’s eye without having to squint it into perception. When we say something is “false”, we mean that the abstraction we have conjured up exists only as a symbol and does not represent something that exists outside of conceptualization. “True” and “false” are in this way subjective. There is no objective truth, because if reality were partitioned in a different way, it would be possible that our own mechanisms of perception might deem what we currently regard as “false” to be “true” and vice versa. At this point we have definitions of “self”, “true”, and “false”. One can note that the partition of “self” necessitates the concept of “non-self”, and the concept of “true” necessitates the concept of “false”.

From here on, we can adopt an admittedly unfounded premise: that we are machines that sense a portion of the universe, partition this information into a variety of hierarchies of abstraction, attend to a subset of this processed information, and produce actions. Let’s use this premise to see if we can answer some philosophical questions to some level of satisfaction. The subset of this information partitioning that we are aware of while it is occurring forms a rough estimate of what we commonly refer to as “consciousness”. By these definitions and the aforementioned hypothesis that we are a special kind of machine, it would not be an oxymoron to say that we are both “mechanical” and “conscious”. But here comes a bigger question: do we have “free will”?

Before getting to “free will”, let’s define the terms “power” and “choice”. “Power” is our ability to effect a change in the universe. The flip side of “power” is that there are things we can be conscious of that we cannot change. Furthermore, “power” can be divided into “internal power”, an ability to make a change in the subset of the universe delineated as “self”, and “external power”, an ability to make a change in the subset delineated as “non-self”. When we make a “choice”, we have conceptualized multiple exclusive potential future states of the universe, and we consciously try to use our “power” to realize one over another. If we have not conceptualized multiple potential states, then we are not making a “choice”. If we effect a change but are unconscious of our own “power”, then we are not making a “choice”. If we are not using our “power” to realize one of the conceived potential states over another, then we are not making a “choice”. Once we have made a “choice”, that is our “will”. Note that the “power” need not be “true”; as long as we think we have the ability to effect change, we can make a “choice”.

What is “freedom”? “Freedom” can be thought of as the fraction of our “will” that we have “true” “power” to realize (into reality). In this way “freedom” entails “free will”, and we can define a concept of “non-freedom” as the fraction of our “will” that we do not have “true” “power” to realize. I would claim that some amount of the “suffering” we experience as agents is the amount of “non-freedom” we experience, and that by experiencing less of it we can induce more “happiness”. This makes room for several corollaries. If there is no perceived “choice”, then there is no “will”, and there is less “non-freedom” / “suffering” (via decreasing the denominator in the aforementioned fraction). If we make no attempt to use our “power” to realize a conceptualized potential state, then we too have not made a “choice”, and there is no “non-freedom” / “suffering”. Another method to increase “freedom” is to gain more “power” (increasing the numerator in the aforementioned fraction); this is the traditionally understood method for attaining more freedom. I believe this is one of the wisdoms of “enlightened” humans: they have conditioned their mental machinery to reduce “will” and induce a state with very low “non-freedom” / “suffering”, and in doing so can use their more available “consciousness” to enjoy the universe. Most agents have the faulty tendency to prioritize “external power” over “internal power”, while there is likely a larger latent amount of the latter that can be procured to alleviate “suffering” / produce “happiness”.
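The fraction described above can be sketched in symbols. This is only my shorthand, not anything standard: treat the “will” as a set W of chosen potential states, and R as the subset of W that we have “true” “power” to realize.

```latex
% freedom: the fraction of willed states we can actually realize
\text{freedom} = \frac{|R|}{|W|},
\qquad
\text{non-freedom} = \frac{|W \setminus R|}{|W|} = 1 - \text{freedom}
```

In this notation the two levers in the paragraph above are: shrinking W (reducing “will”, the denominator) and growing R (gaining “power”, the numerator).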

When we query the existence of “free will”, we are actually expecting a satisfying response to other questions too. These include: Is everything predetermined? Do I have any true choices to make? The latter question has mostly been answered already. In our framework of “self”/“non-self” there is “power” and there are “choices”, and even choosing not to make a choice is a “choice” in itself. So in this system there are consequential choices to be made. I think the idea of “predetermination” is false. I think that the “past” does exist, albeit as an abstraction, but the mistake is believing that the “past” affects the “present” without an agent converting the “past” into “present” actions. In this system of thought, the “past” is simply the expression of representations of portions of previous states of the universe, and while there may be correlations to be made with abstractions of the “present”, there is no causation (outside of the mechanism of an agent). If the “past” is the tail of a comet shooting across the night sky, few would believe that it is driving the direction of the comet. In this way nothing is “predetermined”. Furthermore, if the universe were “predetermined”, then one could conceive of an agent that can predict without a doubt the future state of the universe and use this information to inform its actions; however, if an agent somehow had this ability to foresee the future, then it would be unable to make a “choice” unless it broke the very nature of predetermination by conceiving of alternative potential future states. I would further argue that an entity that cannot make a “choice” is not an “agent”.

Another interesting question that arises from the premise that a human is a special type of machine is: For what purpose do I have consciousness if I am a machine? Why am I not a mindless zombie? This is further out of the territory that this current framework can answer, but we can make some guesses. I think that the use of “consciousness” in this context is “self-consciousness” rather than “consciousness” in the broader sense defined above. “Consciousness” in the broader sense would be an implication of the information-processing or perception involved in any information-processing machine, and the degree of “consciousness” is in some ways linked to the breadth of perception, the level of abstraction, and the flexibility with which the data can interact with itself. “Self-consciousness” is the idea that there is conceptualization of a “self” and “choices”. It may be that to survive as the type of machines that we are, we needed to make sufficiently abstracted “choices” that necessitated the development of a concept of “self” and a mechanism of attention to process our internal abstractions.

Yet another question that comes to mind is: where does “qualia” fit into all of this? (That part of perception that exists independent of “choice”, and seemingly of abstraction itself.) I feel the sensation of “qualia” is most evident in mental states achieved through meditation, when attention is shifted from the processes operating within the “self” to those of “non-self”. In this case, “qualia” is attention along a less abstracted (more grounded) layer of data, maybe with only the vaguest suggestion of associations. Perhaps “qualia” is a term that we use to contrast with attention on highly abstracted concepts far removed from “non-self”, and “qualia” would not be applicable to machinery without the ability to make higher abstractions. This is sculpted as I try to fit my current intuitions: does a fellow mammal like a dog experience “qualia”? Probably. How about a plant? Less likely. How about a rock? Even less likely.

 
Brick Bungalow
Total Posts:  5026
Joined  28-05-2009
04 December 2018 09:01
 

I am sympathetic to some of your concerns. I think it’s important to acknowledge those writers and thinkers who have employed the concept of illusion with proper context and background and give them credit for doing so. When someone with a neuroscience background says that consciousness is an illusion (from my limited reading) they normally qualify the statement carefully. It’s not illusory in the sense of being imaginary or fake or duplicitous. It’s illusory in the sense that the total narrative is constructed. Illusory in the sense that the intelligibility of our perceptions is knitted together by certain evolved facets of cognition.

At the same time, in pop-culture variations of the same conversation, the word illusion IS used in that former sense as a way of hand-waving. It’s enfolded in the larger slide toward euphemism that is a detriment to language in general.

Hopefully this is a helpful distinction.

[ Edited: 04 December 2018 09:05 by Brick Bungalow]
 
prismo
Total Posts:  8
Joined  25-11-2018
15 December 2018 08:46
 

The focus on “illusion” in the title of this post, I think, was a mistake on my part, in terms of seeming too negative or despairing. I meant to express more of an intellectual frustration that bore out my line of reasoning. The problem, so to speak, isn’t that I don’t believe the assertion of the illusion of free will. I think it makes almost too much sense from a global/apersonal/scientific perspective, but seems at odds with the typical daily perspective. This is itself an example of what you bring up. Our modern understanding of our mind does seem to indicate that we are a kluge of evolved processes (that can have different and even contradictory perspectives on a situation) but by an unelucidated mechanism they are unified into a seemingly resolved stream.  Is there a theory that can explain it. Maybe not. My attempt in this post, when I review it, seems to lose itself in symbolic expression. I was basically starting at the most global/impersonal perspective and seeing if I could find my way to the latter perspective. I think I completely missed the mark on even the term “free will”, in a silly way breaking it into two terms and losing the essence of the term altogether, in the sense that the paradox is expressed. My paragraph about time and causality also doesn’t make sense to me anymore and makes an unfounded distinction.

In my line of reasoning, the “will” is a derivative of both non-self and self. Both self and non-self are so mutually dependent that neither can have freedom. Maybe the conflict can be summarized as follows: we intuitively define “freedom” as dependent on the concept of “will” (the ratio “power or ability” / “will”), so it would be circular to try to express “will” in terms of “freedom”. A meaningful sense of “will” should perhaps be orthogonal to the sense of “freedom”.

 
Poldano
Total Posts:  3328
Joined  26-01-2010
16 December 2018 01:28
 
prismo - 15 December 2018 08:46 AM

...
Our modern understanding of our mind does seem to indicate that we are a kluge of evolved processes (that can have different and even contradictory perspectives on a situation) but by an unelucidated mechanism they are unified into a seemingly resolved stream.  Is there a theory that can explain it.
....

I presume the last sentence quoted was a question.

There are quite a few theories that can explain it.

Getting back to illusion: If we consider that all the information coming from the world external to our bodies is turned into other forms of information and largely reduced in volume before reaching our brains, it is obvious that our brains are working with representations of those impressions rather than the impressions themselves, or for that matter the objects that originated the information that created those impressions. This is what is meant by illusion in this context. Of course, illusion can be used in the more negative sense of misrepresentation. Some degree of misrepresentation is possible just as an unavoidable side-effect of information transformation and reduction, but misrepresentation cannot be so severe most of the time that our senses and intellect would not be worth the expenditure of energy needed to keep them.

Complicating the matter of illusions, or perhaps delusions, is that it is sometimes survival-enhancing to act as if something is right when it is wrong, because the alternative would risk physical harm from those truly deluded.

As for putting all of it into a resolved stream, scientists working on it haven’t yet figured that out, but are working hard at it.

 
 
prismo
Total Posts:  8
Joined  25-11-2018
16 December 2018 06:15
 

I think that is close enough to a good definition of illusion. I’m curious what a theory of unification would look like, though. A physical unification of information at a single point seems like a possibility, but it also seems unsatisfying, and I assume it would already have been found if it existed and were so central. It also seems outside the scope of function of a neuron. It would pretty much become the neuron that is always activated (so would it actually be producing information?), though it would perhaps translate to the abstraction of perceived reality that is always constant.

Also, pretty much every theory I can think of involves symbolic representation and is reductionist in nature, but that unified consciousness seems non-symbolic. It’s possible that the neural computation is just not linked to our language center, and that may be part of where an “illusion” of unification comes in, because if we don’t have the ability to name some thing or concept, our mind may be forced to treat it as unified. Naming, after all, by the very act involves a carving-out of information.

The latter framework for what a theory would look like makes more sense to me than an actual physical unification, but alas it would add yet another illusion to a theory of consciousness.

 
Poldano
Total Posts:  3328
Joined  26-01-2010
18 December 2018 22:45
 

I don’t think anyone has definite answers to any of the questions you posed, unless they believe their own BS.

I have notions of what is necessary for information to exist, and it doesn’t involve a point source. Information actually requires spatiotemporal extension and a significant level of causality. By the latter I mean some minimum number of states, along with transition-causing events that are repeatable across a large percentage of instances or trials.

 
 
prismo
Total Posts:  8
Joined  25-11-2018
19 December 2018 07:14
 

Yea, I’m just curious about potential answers rather than any sort of definitive ones. I can’t unpack your stated theory because I don’t understand how you’re using your terms (information, spatiotemporal extension, significant level of causality). I haven’t thought of information as an abstraction as requiring any basis in physical space-time, although it would seem that the conception of any information would require it. Maybe you are referring to a more specific type of information and I’m thinking of information in a broader sense. To me, information is an almost too-inclusive term (the way I use it in my mind). I find it helpful to try to define the antonym of a term, too, to find the distinction the term is trying to make. For me, the antonym of information (“non-information”) would be randomness. This would seem to prioritize the dimension of order, but I’m not sure if order has any objective basis.

If I understand how you’re defining “significant level of causality”, you’re using it like a statistical mathematical function that converges with enough examples. This would have the benefit of essentially removing the necessity for a variable like time that seems to constantly march forward (which some would say is itself an illusion).

Furthermore, I’m not sure if the theory you’re stating about information is in regards to reality broadly, the production of consciousness, the nature of “free will”, or none of these.

The best way I find to explore these kinds of theories is to follow them to some kind of non-trivial / controversial implication via thought experiment, and then judge how well the implication jibes (or doesn’t) with your intuitive model of reality.

I appreciate you adding to the discussion.

[ Edited: 19 December 2018 07:28 by prismo]
 
EN
Total Posts:  21119
Joined  11-03-2007
19 December 2018 14:09
 

In order for there to be information there must be someone or something to “inform”.  Otherwise, it’s just a bare fact. It becomes information when it informs someone or something else to think or act in a particular manner.

 
Antisocialdarwinist
Total Posts:  6620
Joined  08-12-2006
20 December 2018 09:46
 

Consider the possibility that consciousness is not an adaptive trait, but a learned skill. The brain wasn’t “designed” (using that term as a loose descriptor of the process of Darwinian evolution) for consciousness. It was designed to be a supremely adaptive mechanism of non-aware stimulus-response. At some point, it became capable of consciousness, defined as the process by which a model of reality is constructed in the mind and the model of self that inhabits it. Human awareness is limited to the model constructed by the process of consciousness.

Our brains haven’t changed for a hundred thousand years, yet human behavior can be explained without resorting to consciousness up until fairly recently. So it’s possible that humans existed for ninety thousand years or more with brains capable of consciousness, but without learning to construct a model of reality, at least not on a wide scale. From that standpoint, “consciousness” can be seen as a kind of mental disorder, at least back when most humans didn’t experience it. It would have been abnormal.

That would explain all the flaws of consciousness. That’s not to say that consciousness doesn’t have its advantages, obviously, at least in the modern world. But it certainly has its flaws.

 
 
nonverbal
Total Posts:  1576
Joined  31-10-2015
20 December 2018 12:11
 
Antisocialdarwinist - 20 December 2018 09:46 AM

Consider the possibility that consciousness is not an adaptive trait, but a learned skill. The brain wasn’t “designed” (using that term as a loose descriptor of the process of Darwinian evolution) for consciousness. It was designed to be a supremely adaptive mechanism of non-aware stimulus-response. At some point, it became capable of consciousness, defined as the process by which a model of reality is constructed in the mind and the model of self that inhabits it. Human awareness is limited to the model constructed by the process of consciousness.

Our brains haven’t changed for a hundred thousand years, yet human behavior can be explained without resorting to consciousness up until fairly recently. So it’s possible that humans existed for ninety thousand years or more with brains capable of consciousness, but without learning to construct a model of reality, at least not on a wide scale. From that standpoint, “consciousness” can be seen as a kind of mental disorder, at least back when most humans didn’t experience it. It would have been abnormal.

That would explain all the flaws of consciousness. That’s not to say that consciousness doesn’t have its advantages, obviously, at least in the modern world. But it certainly has its flaws.

Yes. At this point in our cultural evolution (which is perhaps comparable to software), even minimal success for individuals hinges on, among other things, an ability to imagine what’s not literally in front of us. (Or behind us, or to the side, etc.!)

It seems indisputable that our recent prehistoric ancestors lacked the formidable ability to imagine much beyond what they were able to see, feel, taste/smell, or hear. Eighty thousand years ago, a habitual hallucinator typically would perhaps not have survived into mating age. Nowadays, it’s actually necessary for a person to be able to execute what could be referred to as hallucination, but such “hallucination” needs to be entirely controlled, which makes it not at all a hallucination in a dictionary-approved way. Yet it’s an almost universal human trait and survival skill. Many of the primary cognitive ingredients of imagination are, it seems to me, what might have identified a person 100,000 years ago as a raving lunatic. Smart enough motherfuckers (in a literal sense) were of course fully able to find a way to procreate. Thank Goddard.

 
 
prismo
Total Posts:  8
Joined  25-11-2018
20 December 2018 18:15
 

Consciousness as an emergent phenomenon seems intuitive when defined in the modeling sense. If this is where we leave it, conscious machines are likely around the bend. Consciousness defined in terms of “it feels like something to be X” may not be as clearly attributed to these machines, and maybe that’s because that kind of qualia-based definition may not be as easily formalized. A lot of modern philosophers have posited panpsychism-based theories where essentially consciousness is a spectrum that tracks down to everything. I’m not sure how I feel about this.

Abstract reasoning leading to imagination of patterns that don’t already exist also does seem well within the purview of our current scientific understanding. Again, I think we’re probably not all that far away from robust symbolic computation via machines that can do this kind of thing. It boggles my mind how we can treat abstractions as first-class citizens in our minds amongst more grounded concepts. The first thing that comes to mind in this regard is how easily we conceptualize infinity by essentially negating the concept of finiteness.

The harder concept of consciousness as subjective experience / qualia / “being like something to be X” still seems elusive. Specifically, I’m coming at all these concepts imagining that AI progresses to AGI. What would the ethical considerations towards these machines be? Since they didn’t evolve for self-preservation I might not feel so bad about turning one off, but would I consider it capable of suffering at all?

 
nonverbal
Total Posts:  1576
Joined  31-10-2015
21 December 2018 07:19
 
prismo - 20 December 2018 06:15 PM

Consciousness as an emergent phenomenon seems intuitive when defined in the modeling sense. If this is where we leave it, conscious machines are likely around the bend. Consciousness defined in terms of “it feels like something to be X” may not be as clearly attributed to these machines, and maybe that’s because that kind of qualia-based definition may not be as easily formalized. A lot of modern philosophers have posited panpsychism-based theories where essentially consciousness is a spectrum that tracks down to everything. I’m not sure how I feel about this.

Abstract reasoning leading to imagination of patterns that don’t already exist also does seem well within the purview of our current scientific understanding. Again, I think we’re probably not all that far away from robust symbolic computation via machines that can do this kind of thing. It boggles my mind how we can treat abstractions as first-class citizens in our minds amongst more grounded concepts. The first thing that comes to mind in this regard is how easily we conceptualize infinity by essentially negating the concept of finiteness.

The harder concept of consciousness as subjective experience / qualia / “being like something to be X” still seems elusive. Specifically, I’m coming at all these concepts imagining that AI progresses to AGI. What would the ethical considerations towards these machines be? Since they didn’t evolve for self-preservation I might not feel so bad about turning one off, but would I consider it capable of suffering at all?

When you refer to consciousness as an emergent phenomenon, do you include animal consciousness as well as human consciousness?

 
 
prismo
Total Posts:  8
Joined  25-11-2018
21 December 2018 18:37
 

Yea, I realize consciousness is a pretty nebulous term, and at different times we mean different things by it (awake, aware, self-aware, etc.). The one I find perplexing from a philosophical standpoint is the “feels like something to be X” sense made popular by Thomas Nagel, which would include both human and animal varieties. Subjective experience is probably a more precise term. My gut feeling is that I would expect it to be emergent, but I don’t have a compelling mechanism (or even a plausible theory). My reductionistic conditioning would lead me to believe that if you pulled out and discarded my neurons one by one, while I was alive, eventually I would cease to have any experience. I’ve never been inclined to believe in an afterlife for consciousness.

Part of it seems to be that I can really only conceptualize my own consciousness, and my idea of your consciousness (or that of an animal) is based on symmetries I perceive with respect to my own experience. Once we de-escalate from mammals I lose common ground for conceptualization, and by the time I get to plants, then single-celled organisms, and then viruses, I’m not sure if we’re dealing with a “zombie” machine or something with even a proto-consciousness (in the way I’m using the term above). It seems absurd to me to even consider something like a rock to have proto-consciousness, since there’s no traditional stimulus-response relationship.

Consciousness (subjective experience) seems to have several properties that I can conceive of it being without, though it would seem bizarre.
- It’s private. But if a mad scientist could see my every thought, I expect I would still be experiencing them. How about if they could see my thoughts before I was aware of them? I’ve heard this has been shown possible in at least a limited scope. I’d certainly have a vastly different idea of my will and its “freedom”, but I think I would still have a subjective experience.
- It’s continuous (by way of memory). This property does seem more fragile, but I can’t help but feel that even if I had only enough memory for the moment I was in, I would still have a subjective experience.
- It’s singular. This may be an even more compelling property. I don’t feel like my consciousness is a bunch of mini-processes feeding together (but it could be). Is a split-personality or schizophrenic patient experiencing multiple consciousnesses simultaneously, or are they switching between consciousnesses, or is it maybe one consciousness shifting or limited in its scope?

I don’t expect anyone to have a perfect explanation and I certainly don’t think I have one. But I figure other people have thought about this kind of stuff, or heard some interesting theories, etc.

 
Antisocialdarwinist
Total Posts:  6620
Joined  08-12-2006
21 December 2018 21:20
 

The hardest problem of consciousness seems to be reaching agreement on a definition. Nevertheless, I think most people have an intuitive sense of what they consider “conscious” vs. “subconscious” human behavior. For example, recognition is subconscious; recall is conscious. When you recognize something, it happens automatically. Recall requires the model of reality and the model of self. Thinking in terms of metaphors requires consciousness. Hitting a Ping-Pong ball is best done subconsciously, as is touch typing. Pulling your hand off a hot stove before you’re aware that the stove is hot, or even that you pulled your hand away, is subconscious. Etc.

Awareness and consciousness are almost synonymous. But—according to my favorite definition—consciousness is the process by which the model of reality and the model of self are constructed, whereas awareness is more of a state, and is limited to the model. So awareness implies consciousness and vice versa, but they’re not quite synonymous.

From that standpoint, if an animal is not conscious, it lacks the capacity to construct the model of reality and self and is therefore not aware of anything—awareness being limited to the model. In that case, it’s just responding to stimuli, almost like a computer responds to the stimulus of a key-press by displaying the corresponding letter on screen. So there is nothing it is like to be a non-conscious animal. It would be like living your entire life in that moment after you put your hand on a hot stove, before you’re aware the stove is hot and before you’re aware you pulled your hand away. Stimulus-response, without awareness of either.

 
 
prismo
Total Posts:  8
Joined  25-11-2018
 
 
 
22 December 2018 18:06
 
Antisocialdarwinist - 21 December 2018 09:20 PM

consciousness is the process by which the model of reality and the model of self are constructed, whereas awareness is more of a state, and is limited to the model.

Thanks ASD! This is the type of thing I was looking for, and it’s really helping me further my thoughts on this subject. I like your definition from an operational standpoint, and it makes grading consciousness more straightforward. Even though you used a behavior-based example, I think it somehow makes intuitive sense for subjective-experience examples too (as well as probably can be done, anyway). If you have other thinkers or works along the lines you’ve described, I’d be interested in reading more.

I can conceive of ordering organisms’ consciousnesses by their capacity to dynamically, robustly, and accurately model increasingly complex environments. It would seem that as long as the model is cohesive, it will naturally have a single point of reference (presumably along a variety of dimensions) around which it builds the model. This point, even if not explicitly conceptualized as “self”, would be implicit, and would allow things like human babies to have consciousness (which to me seems valid) even without an explicit abstraction in their model for their “self”. Only a model that is in some ways non-cohesive could have multiple points of reference, which would help explain the property of consciousness being “singular” that I was searching for above. It also provides a mechanism by which we don’t need a crude physical aggregation of neuronal impulses to a single physical point (since the “self point” is virtualized).

I also really like your focus on the process rather than the endpoint alone. I think this additional detail is important, even though I’m not sure exactly why. It helps to explain the other property of consciousness I was searching for: the continuity aspect. It favors organic production of a model rather than a model simply being given, which I think helps to counter the sentiment behind Searle’s Chinese Room argument.

I’m going to try to advance using some form of your definition and test ethical problems to see whether they match my intuitions. Your definition seems to me to leave plenty of space for conscious machines (via AGI). How we treat these machines in different situations may pose ethical questions. The obvious ones to me involve considering whether actions could lead to these machines suffering. Perhaps consciousness and suffering are not mutually required: if a system with consciousness is not based on self-preservation, the ethical dilemmas may simply not exist (?!). This is certainly more speculative, but at least there is some basis for trying to think through these things. Thanks again ASD!

[ Edited: 22 December 2018 18:09 by prismo]
 
Antisocialdarwinist
Total Posts:  6620
Joined  08-12-2006
 
 
 
23 December 2018 11:35
 
prismo - 22 December 2018 06:06 PM
Antisocialdarwinist - 21 December 2018 09:20 PM

consciousness is the process by which the model of reality and the model of self are constructed, whereas awareness is more of a state, and is limited to the model.

Thanks ASD!

You’re welcome. I’m glad you found it interesting. I recommend The Origin of Consciousness in the Breakdown of the Bicameral Mind. Jaynes’s bicameral mind theory is a little far-fetched, maybe, but still intriguing. The best part of the book, in my opinion, is where he lays out exactly what he means by consciousness; he spends almost the first third of the book on that. I’d be interested to hear what you think if you decide to read it.

 
 