Entropy: objective basis for morality?

 
Poldano
Total Posts:  3184
Joined  26-01-2010
 
 
 
10 January 2018 21:47
 
LadyJane - 17 December 2017 08:32 AM
Poldano - 16 December 2017 08:09 PM
Giulio - 16 December 2017 05:01 AM
Poldano - 15 December 2017 08:30 PM

So, to the monistic determinist folks, are subjective preferences materially determined? If so, isn’t it possible to discover what those material causes are? Aren’t those causes objectively real?

Having a mind that has a model of self vs others, and a conception of actions and choices, would seem to be a necessary (but not sufficient) condition. Attribution of right and wrong only happens in a particular type of mind.

You don't seem to agree with this, though, and I can't tell whether that's because you have an idea or perspective I don't understand, or whether this is just a word game.

Consider this example of a computer simulation. There is an algorithm X (with no self-awareness) that can act as an agent in an artificial world; X can modify its own code; the artificial world has 'dangers', some of which can destroy X; X is initially configured with a variety of rules for responding to situations, and when those rules come into conflict some process is used to vote on which action to take; initially, self-preservation is one of the key rules. How X reprograms itself is a function of both its internal state and its environment. Suppose that in one run of this simulation, X reprograms itself over time so that the self-preservation rule becomes less and less important, and eventually it reaches a point where it chooses to destroy itself. You would think that was objectively wrong, right? I.e., on your view, a programmer who created an artificial world/game where this kind of thing happens regularly would have to be thought of as a moral monster - right?
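Sketched very roughly in Python, the setup described above might look something like this. Every rule name, action, and number here is purely illustrative and assumed for the sake of the sketch; the only point is the shape of the thing: action selection is a weighted vote among rules, and the weights themselves (including self-preservation's) are open to modification.

```python
import random

# Purely illustrative sketch of the thought experiment above. All rule
# names, actions, and numbers are made up; the point is only the shape
# of the setup: a weighted vote among rules decides each action, and the
# weights themselves (self-preservation included) can be rewritten.

rules = {
    "self_preservation": 1.0,   # initially one of the key rules
    "explore": 0.5,
    "gather_resources": 0.7,
}

ACTIONS = ["flee_danger", "explore", "gather", "self_destruct"]


def rule_preference(rule, action, in_danger):
    """Stand-in for how strongly a given rule favours a given action."""
    if rule == "self_preservation":
        if action == "flee_danger" and in_danger:
            return 1.0
        if action == "self_destruct":
            return -1.0
    return 0.1 * random.random()  # placeholder for all other cases


def vote(rules, in_danger):
    """Each rule scores every action; the highest weighted total wins."""
    scores = {action: 0.0 for action in ACTIONS}
    for rule, weight in rules.items():
        for action in ACTIONS:
            scores[action] += weight * rule_preference(rule, action, in_danger)
    return max(scores, key=scores.get)


def reprogram(rules):
    """X rewrites its own rule weights; the drift here is random, but the
    point is simply that no weight, self_preservation included, is fixed."""
    for rule in rules:
        rules[rule] = max(0.0, rules[rule] + random.uniform(-0.05, 0.05))


for step in range(100_000):
    in_danger = random.random() < 0.1      # 'dangers' in the artificial world
    action = vote(rules, in_danger)
    if action == "self_destruct":
        print(f"X chose to destroy itself at step {step}")
        break
    reprogram(rules)
```

The random drift in reprogram stands in for whatever process X actually uses to rewrite itself; nothing in the example requires the drift to be random, only that the self-preservation weight can erode until self-destruction becomes the winning vote.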

Your example does not contain several features that I think are necessary for morality to emerge and continue to be relevant to an agent.

(1) There is no evident competition among motivations. Self-interest is all that is necessary. Humans have multiple motivations that partially compete, which may be roughly described as self-interest and species-interest (AKA genetic-interest). These are somewhat inaccurate terms, technically speaking, and thereby motivate a great deal of additional discussion.

(2) The agent in your example does not need the cooperation of other agents for any reason. Humans need the cooperation of other agents for several purposes.

(3) The agent in your example is effectively immortal. Humans are not. Self-perpetuation for humans eventually becomes pointless, and propagation—species interest—then becomes paramount in a utilitarian sense. Note that this is not just a temporal sequence, but a logical sequence given the genetic self-perpetuation motivation.

If morality were really about perpetuating the human species, we'd be doing better by the children in the present. They'd all be educated and well fed. It would also make murdering a menopausal woman a misdemeanor and killing a woman of childbearing years a capital crime. We are no longer living in a world of women and children first. And it would be considered morally reprehensible to destroy the environment for future generations. The more we are governed by self-interest, the farther we stray from the utilitarian ideal. I don't see the temporal or logical sequencing you describe in response to the hypothetical above.

I’m sorry for the delay in answering this. I was so focused on my long-standing debate with ASD that I overlooked your post.

I do not currently claim to know a good moral axiom; that seems to be what you are getting at, and if so I agree and accept the criticism. I look at the objections you raise as evidence that there is much more complexity to any plausible moral axiom than we can currently comprehend. In general, I believe that any such axiom would need to be stated as a set of simultaneous equations describing the tensions (metaphorically speaking) among the various values we can recognize as contributing to moral decisions. I believe that human biological and cultural evolution is engaged in a stochastic search for such values; in other words, that human morality is actively evolving. Necessarily, a good part of that evolution is of the "blind watchmaker" variety, and not comprehensively understandable to the vast majority (or perhaps even any) of the humans who are involved in it.

 
 