Objective Morality: A Back-of-the-Napkin Sketch

 
burt
Total Posts: 15212
Joined 17-12-2006
 
 
 
27 October 2017 16:13
 

A couple of Bronowski quotes that seem relevant here:

“You cannot know what is true unless you behave in certain ways.”

Thus, there is an ethical injunction that underlies success in science. “We OUGHT to act in such a way that what IS true can be verified to be so.”

“There are two things that make up morality. One is the sense that other people matter: the sense of common loyalty, of charity and tenderness, the sense of human love.  The other is a clear judgment of what is at stake: a cold knowledge, without a trace of deception, of precisely what will happen to oneself and to others if one plays either the hero or the coward.”

 

 
GAD
Total Posts: 17001
Joined 15-02-2008
 
 
 
27 October 2017 17:27
 
burt - 27 October 2017 04:13 PM

“There are two things that make up morality. One is the sense that other people matter: the sense of common loyalty, of charity and tenderness, the sense of human love.  The other is a clear judgment of what is at stake: a cold knowledge, without a trace of deception, of precisely what will happen to oneself and to others if one plays either the hero or the coward.”

All subjective and relative to one’s own definitions, feelings, views, etc.

 
 
burt
Total Posts: 15212
Joined 17-12-2006
 
 
 
27 October 2017 18:41
 
GAD - 27 October 2017 05:27 PM
burt - 27 October 2017 04:13 PM

“There are two things that make up morality. One is the sense that other people matter: the sense of common loyalty, of charity and tenderness, the sense of human love.  The other is a clear judgment of what is at stake: a cold knowledge, without a trace of deception, of precisely what will happen to oneself and to others if one plays either the hero or the coward.”

All subjective and relative to one’s own definitions, feelings, views, etc.

That’s why I claim that the only objective moral injunction is for a person to develop their level of awareness and consciousness to the point that the subjectivity goes away. Or, the only universal moral injunction is to seek enlightenment (or, the Apollonian “Know Thyself”).

 
Antisocialdarwinist
Total Posts: 6472
Joined 08-12-2006
 
 
 
27 October 2017 19:57
 
Ain Sophistry - 26 October 2017 09:35 PM
Antisocialdarwinist - 25 October 2017 07:32 PM

You claim that “The only value satisfaction conditions that qualify for moral status are those that are so general they apply to all values.” How can you claim that “the general promotion of liberty” applies to the value, “Whites are the master race?”

It sounds to me like what you’re claiming is analogous to the claim that all swans are white, and now I’m showing you a black swan. What am I missing?

1. “Whites are the master race” is a belief, not a value.

2. I assume you mean something like a desire that whites dominate other races. This meets some criteria for valuehood, but not others (white supremacy isn’t an end in itself; those who want it want it because they ultimately want other things). At any rate, it wouldn’t be at the top of any desire/value hierarchy, and so it would not be the source of the strongest reasons for action to which one is beholden.

3. I addressed the broader question of why I shouldn’t only care about my (or my tribe’s) value satisfaction conditions in the OP (in fact, the answers to many of the questions you’ve raised so far could have been gleaned from the OP). The relevant bit was:

But doesn’t this only suggest that I should work toward my own liberty, my own healthfulness, and my own knowledge, and likewise for everyone else? Aren’t we faced with essentially the same problem that frustrated the value universalist?

I don’t think so. I rely constantly on knowledge acquired by others (scientists, philosophers, mechanics, journalists, friends), on liberty protected by others (judges, lawyers, police officers, good Samaritans, even—ugh—legislators), and on healthcare and health-related goods and info provided by others (doctors, pharmacists, nutritionists, food providers, employers, family members). These folks could not provide these resources without sufficient knowledge, liberty, and health of their own, and they depend for their share of these resources on the activities of others, and so on, and so on. I thus have a stake in their access to these goods as well as mine—and likewise for each of them.

The convergence point, if you will, for all these overlapping and complementary reasons would, I think, be something like the following:

All should work toward the realization and maintenance of a stable society that affords and assures optimally equal access to liberty, health, and knowledge.

There are more arguments to make here, but they take us out of the realm of a Theory of Good and into the realm of a Theory of Right. I’m working on the latter, but that’s probably going to be a separate post.

4. You may find helpful my suggestion to burt to think about this within a Rawlsian framework:

Let every agent ask “What should I want, knowing nothing else about what I want?” The rational answer, I think, would have to be a society that optimizes the three aforementioned goods (liberty, healthfulness, and knowledge), for such a society will give me the best chance at satisfying my values, whatever they may be. So it will be for everyone else as well, and that is, therefore, the society we all have reason to work toward and maintain.

Maybe you can give an example of something you consider to be a “value.” Then maybe I can rephrase my example to clear your semantic hurdle.

Keep in mind that I’m not questioning the merits of your moral philosophy. I’m only questioning whether your sketch leads to objective morality.

 
 
GAD
Total Posts: 17001
Joined 15-02-2008
 
 
 
27 October 2017 21:46
 
burt - 27 October 2017 06:41 PM
GAD - 27 October 2017 05:27 PM
burt - 27 October 2017 04:13 PM

“There are two things that make up morality. One is the sense that other people matter: the sense of common loyalty, of charity and tenderness, the sense of human love.  The other is a clear judgment of what is at stake: a cold knowledge, without a trace of deception, of precisely what will happen to oneself and to others if one plays either the hero or the coward.”

All subjective and relative to one’s own definitions, feelings, views, etc.

That’s why I claim that the only objective moral injunction is for a person to develop their level of awareness and consciousness to the point that the subjectivity goes away. Or, the only universal moral injunction is to seek enlightenment (or, the Apollonian “Know Thyself”).

I don’t see where that changes anything, and it’s really the same as inventing god and believing that he is enlightenment.

 
 
burt
Total Posts: 15212
Joined 17-12-2006
 
 
 
28 October 2017 08:39
 
GAD - 27 October 2017 09:46 PM
burt - 27 October 2017 06:41 PM
GAD - 27 October 2017 05:27 PM
burt - 27 October 2017 04:13 PM

“There are two things that make up morality. One is the sense that other people matter: the sense of common loyalty, of charity and tenderness, the sense of human love.  The other is a clear judgment of what is at stake: a cold knowledge, without a trace of deception, of precisely what will happen to oneself and to others if one plays either the hero or the coward.”

All subjective and relative to one’s own definitions, feelings, views, etc.

That’s why I claim that the only objective moral injunction is for a person to develop their level of awareness and consciousness to the point that the subjectivity goes away. Or, the only universal moral injunction is to seek enlightenment (or, the Apollonian “Know Thyself”).

I don’t see where that changes anything, and it’s really the same as inventing god and believing that he is enlightenment.

Who tastes, knows. Everybody else argues over the recipe.

 
Ain Sophistry
Total Posts: 127
Joined 26-01-2010
 
 
 
29 October 2017 21:31
 
Antisocialdarwinist - 27 October 2017 07:57 PM

Maybe you can give an example of something you consider to be a “value.” Then maybe I can rephrase my example to clear your semantic hurdle.

Keep in mind that I’m not questioning the merits of your moral philosophy. I’m only questioning whether your sketch leads to objective morality.

Examples: Happiness, social connection, reciprocated love, the wellbeing of one’s family, meaning (however one conceives of it). One might, of course, value liberty, healthfulness, and knowledge explicitly as well, but doesn’t have to in order to have reasons to realize them.

 
 
Antisocialdarwinist
Total Posts: 6472
Joined 08-12-2006
 
 
 
31 October 2017 20:55
 
Ain Sophistry - 29 October 2017 09:31 PM
Antisocialdarwinist - 27 October 2017 07:57 PM

Maybe you can give an example of something you consider to be a “value.” Then maybe I can rephrase my example to clear your semantic hurdle.

Keep in mind that I’m not questioning the merits of your moral philosophy. I’m only questioning whether your sketch leads to objective morality.

Examples: Happiness, social connection, reciprocated love, the wellbeing of one’s family, meaning (however one conceives of it). One might, of course, value liberty, healthfulness, and knowledge explicitly as well, but doesn’t have to in order to have reasons to realize them.

People also value things like power and material possessions. Any reason they wouldn’t be considered values?

 
 
Ain Sophistry
Total Posts: 127
Joined 26-01-2010
 
 
 
02 November 2017 14:59
 
Antisocialdarwinist - 31 October 2017 08:55 PM

People also value things like power and material possessions. Any reason they wouldn’t be considered values?

Do they value them for their own sake or do they value them because they value other things (like happiness)? If the latter, then they wouldn’t be values at the apex of one’s value hierarchy.
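One way to picture what I mean by “apex” (a toy sketch in code; the particular entries are invented for illustration, not a claim about anyone’s actual psychology): represent a value hierarchy as a mapping from each value to the deeper values it serves. Apex values are the ones that serve nothing further.

# Toy sketch (invented entries): a value hierarchy as a mapping from each
# value to the deeper values it serves; apex values serve nothing further.
serves = {
    "power": ["status", "security"],
    "possessions": ["comfort", "status"],
    "status": ["social connection"],
    "security": ["wellbeing of family"],
    "comfort": ["happiness"],
    "social connection": [],
    "happiness": [],
    "wellbeing of family": [],
}

apex = [value for value, deeper in serves.items() if not deeper]
print(apex)  # ['social connection', 'happiness', 'wellbeing of family']

On this picture, power and possessions appear in the hierarchy, but never at the apex.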

 
 
Antisocialdarwinist
Total Posts: 6472
Joined 08-12-2006
 
 
 
03 November 2017 07:38
 
Ain Sophistry - 02 November 2017 02:59 PM
Antisocialdarwinist - 31 October 2017 08:55 PM

People also value things like power and material possessions. Any reason they wouldn’t be considered values?

Do they value them for their own sake or do they value them because they value other things (like happiness)? If the latter, then they wouldn’t be values at the apex of one’s value hierarchy.

Let me try to be clearer about what I’m asking. You said:

Ain Sophistry - 24 October 2017 08:12 PM

The only value satisfaction conditions that qualify for moral status are those that are so general they apply to all values.

What I want to know is whether things like power and material possessions fall into the set of “all values,” however you’re defining “all values.”

 
 
Giulio
Total Posts: 271
Joined 26-10-2016
 
 
 
03 November 2017 14:19
 
Ain Sophistry - 26 October 2017 09:35 PM

3. I addressed the broader question of why I shouldn’t only care about my (or my tribe’s) value satisfaction conditions in the OP (in fact, the answers to many of the questions you’ve raised so far could have been gleaned from the OP). The relevant bit was:

But doesn’t this only suggest that I should work toward my own liberty, my own healthfulness, and my own knowledge, and likewise for everyone else? Aren’t we faced with essentially the same problem that frustrated the value universalist?

I don’t think so. I rely constantly on knowledge acquired by others (scientists, philosophers, mechanics, journalists, friends), on liberty protected by others (judges, lawyers, police officers, good Samaritans, even—ugh—legislators), and on healthcare and health-related goods and info provided by others (doctors, pharmacists, nutritionists, food providers, employers, family members). These folks could not provide these resources without sufficient knowledge, liberty, and health of their own, and they depend for their share of these resources on the activities of others, and so on, and so on. I thus have a stake in their access to these goods as well as mine—and likewise for each of them.

The convergence point, if you will, for all these overlapping and complementary reasons would, I think, be something like the following:

All should work toward the realization and maintenance of a stable society that affords and assures optimally equal access to liberty, health, and knowledge.

I did pick this comment up in your OP; it stood out like a red flag.

It stood out because, since Sam’s last AMA, where he restated the ‘simple’ premise of his ML argument (‘all that I need to get going is for you to accept that (1) there is a worst possible world where all conscious beings are suffering in the extreme and (2) there are possible states of the world that are somewhat better’), I’ve been thinking about how much has been smuggled into that starting point.

Namely: even though we can talk about a landscape (peaks and troughs of wellbeing) for a single conscious, sentient individual from that individual’s own perspective, literally meaning how they would rank states of the world when asked (such an individual could be entirely selfish, entirely selfless, or a combination in how they answer), doing this collectively, aggregating across individuals, implicitly requires the utility function of a third person: someone who can take the utility functions of others and put them into a common metric so they can be averaged (maybe equally weighted, maybe not; I believe SH has indicated he would weight David Deutsch’s experience more than his children’s). In short, it presupposes a framework where the experience of others is taken into consideration collectively and aggregated, which of course most humans do, more or less, by virtue of what we are as genetic and memetic machines.
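To put the point in symbols (a minimal sketch of my own; SH has written nothing like this, as far as I know): if $U_i(s)$ is individual $i$’s own ranking of world-state $s$, the collective landscape needs something like

$$W(s) \;=\; \sum_i w_i \, U_i(s), \qquad \sum_i w_i = 1, \quad w_i \ge 0,$$

where the weights $w_i$ belong to no individual’s own utility function. Choosing them, even choosing them all equal, is precisely the third-person framework that gets smuggled in.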

You’ve already distinguished yourself from SH’s TML view in this thread, and rightly so, because I believe you’re aware of this point. Your remarks above probably indicate that you are trying to answer: is there some fact about the objective (non-human) world that would mean evolution would inevitably result in conscious beings developing this framework?

Your answer seems to involve the assumption that every sentient being’s own utility function will be maximized through some form of collaboration with others. While I think this is probably true in a large class of examples (e.g., as long as an individual’s utility function includes caring for their descendants), there are obviously pathological examples where it is not. Putting the pathological examples aside, though, it isn’t clear from an evolutionary perspective (genetic or memetic) what the optimal horizon for collaboration with others should be. You could argue that in the limit, if I care about the reproduction of all possible descendants, I should cast my net incredibly wide. But the balancing force against that is the differential survival of the next few generations of genes or memes.
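A toy numerical illustration of that balancing force (the payoff form and all numbers here are invented purely for illustration): suppose cooperating with n others yields diminishing shared benefits but a linear coordination cost. The optimal n is then neither zero nor as wide as possible:

import math

# Toy model (invented assumptions): diminishing shared benefit from
# cooperating with n others, minus a linear coordination cost.
def payoff(n, benefit=10.0, cost=1.0):
    return benefit * math.log(1 + n) - cost * n

best = max(range(51), key=payoff)
print(best, payoff(best))  # 9 14.02... : an interior optimum

Nothing in this toy settles where the real optimum lies; it only shows that ‘cast the net as wide as possible’ doesn’t follow automatically.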

But here we are no longer talking about ‘shoulds’ in an objective sense. Rather, we are looking for objective reasons why some subjective ‘shoulds’ could arise.

The question of what individual humans feel they should do today depends on their specific circumstances. A privileged person who has the time to participate in a forum like this will feel different things from someone in a war-torn village in Rwanda.

 
Ain Sophistry
Total Posts: 127
Joined 26-01-2010
 
 
 
05 November 2017 12:46
 
Antisocialdarwinist - 03 November 2017 07:38 AM

What I want to know is whether things like power and material possessions fall into the set of “all values,” however you’re defining “all values.”

My hunch would be no, but I suppose it’s in theory an empirical question. These both seem like things one wants only because they enable the realization of deeper values. No one falls out of the womb craving power or a new Bentley, but we do fall out of the womb aversive to pain, appetitive toward pleasure, and primed to seek meaningful social relationships.

 
 
Antisocialdarwinist
Total Posts: 6472
Joined 08-12-2006
 
 
 
07 November 2017 09:59
 
Ain Sophistry - 05 November 2017 12:46 PM
Antisocialdarwinist - 03 November 2017 07:38 AM

What I want to know is whether things like power and material possessions fall into the set of “all values,” however you’re defining “all values.”

My hunch would be no, but I suppose it’s in theory an empirical question. These both seem like things one wants only because they enable the realization of deeper values. No one falls out of the womb craving power or a new Bentley, but we do fall out of the womb aversive to pain, appetitive toward pleasure, and primed to seek meaningful social relationships.

“Men do not become tyrants in order to keep out the cold,” eh?

Couldn’t you make the same argument about “the general promotion of liberty, healthfulness and knowledge?” No one falls out of the womb craving the general promotion of liberty, healthfulness or knowledge, either. In fact, it seems to me that the deepest value of all is pure self-interest. Should we limit the set of “all values” (values for which we consider general satisfaction conditions) to self-interest? What’s good for me is objectively good?

Suppose, hypothetically, that we consider power as belonging to your set of “all values.” I think a pretty strong case can be made that the general promotion of liberty runs counter to that value. (Just ask Little Rocketman.) Would that mean that “the general promotion of liberty” is not objectively good? Because the condition it leads to does not satisfy “all values?”

In general, I think values are different enough that no conditions will satisfy them all. So your sketch depends on limiting “all values” to those which can be satisfied by what amounts to your preferred “general satisfaction conditions.” Which raises the question: if your set of “all values” cannot be objectively defined, can the resulting determinations about “good” and “bad” still be called “objective?”

 
 
Ain Sophistry
Total Posts: 127
Joined 26-01-2010
 
 
 
07 November 2017 19:38
 
Giulio - 03 November 2017 02:19 PM

You’ve already distinguished yourself from SH’s TML view in this thread, and rightly so, because I believe you’re aware of this point. Your remarks above probably indicate that you are trying to answer: is there some fact about the objective (non-human) world that would mean evolution would inevitably result in conscious beings developing this framework?

You’re right in that I’m trying to avoid merely smuggling in a universalizing premise (fwiw, I think the tacit assumption that the wellbeing of the individual self ought to be the normative default can be questioned, but that’s a project beyond the scope of the present thread). I don’t, however, see where you’re getting the evolutionary argument from in what I’ve written.

Giulio - 03 November 2017 02:19 PM

Your answer seems to involve the assumption that every sentient being’s own utility function will be maximized through some form of collaboration with others. While I think this is probably true in a large class of examples (e.g., as long as an individual’s utility function includes caring for their descendants), there are obviously pathological examples where it is not. Putting the pathological examples aside, though, it isn’t clear from an evolutionary perspective (genetic or memetic) what the optimal horizon for collaboration with others should be. You could argue that in the limit, if I care about the reproduction of all possible descendants, I should cast my net incredibly wide. But the balancing force against that is the differential survival of the next few generations of genes or memes.

But here we are no longer talking about ‘shoulds’ in an objective sense. Rather, we are looking for objective reasons why some subjective ‘shoulds’ could arise.

The question of what individual humans feel they should do today depends on their specific circumstances. A privileged person who has the time to participate in a forum like this will feel different things from someone in a war-torn village in Rwanda.

Let me try to clarify my strategy here. I’m trying to see if we can bootstrap our way to an objective morality using only individual instrumental rationality. Now, it’s important that rationality not be conflated with omniscience here. An ideally rational individual doesn’t know everything, but takes his ignorance into account in his decision-making. Among the relevant things about which he lacks complete information (and thus should account for) are:

1. How his circumstances might change in the future, and thus what particular freedoms and knowledge he will end up needing to keep his core values optimally satisfied.

2. To what extent the freedoms and knowledge he needs for value satisfaction in any particular moment depend upon the activities of others.

3. Who those others are.

4. What particular freedoms and knowledge those others will need in order to successfully perform the activities that undergird his own desirable freedoms and knowledge.

5. On whose activities those others in turn depend for the freedoms and knowledge of concern to him, and on what freedoms and knowledge those activities in turn depend (and so on).

In light of these uncertainties and contingencies, it seems to me that an ideally rational agent would, irrespective of his particular values or circumstances, prefer a world in which general value satisfaction conditions are stably realized and as broadly accessible as practicable. Now, there may be tensions between stability and broad accessibility such that what’s needed is a sort of optimal middle ground (the desirable freedoms, for instance, probably ought not include the freedom to kill others with impunity, and there are likely good reasons to restrict knowledge of, e.g., how to make homemade dirty bombs). Whether or not we know what it is in all relevant detail, I think there is such a common rational optimum.
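To make this concrete, here’s a toy numerical sketch (the societies, value profiles, and payoff numbers are all invented; nothing hangs on them). An agent who doesn’t know which value profile will turn out to be his, and who evaluates societies by expected value satisfaction under a uniform prior, prefers the generally enabling society even though it is optimal for no particular profile:

# Toy sketch (invented numbers): expected value satisfaction under
# ignorance of which value profile one will end up holding.
societies = {
    "broad liberty/health/knowledge": {"family": 8, "achievement": 8, "contemplation": 8},
    "optimized for one profile": {"family": 10, "achievement": 2, "contemplation": 2},
    "unstable free-for-all": {"family": 3, "achievement": 6, "contemplation": 3},
}

def expected_satisfaction(payoffs):
    # uniform prior over profiles: the agent doesn't know which will be his
    return sum(payoffs.values()) / len(payoffs)

for name, payoffs in societies.items():
    print(name, round(expected_satisfaction(payoffs), 2))
# broad liberty/health/knowledge 8.0
# optimized for one profile 4.67
# unstable free-for-all 4.0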

 
 
Ain Sophistry
Total Posts: 127
Joined 26-01-2010
 
 
 
07 November 2017 22:11
 
Antisocialdarwinist - 07 November 2017 09:59 AM

Couldn’t you make the same argument about “the general promotion of liberty, healthfulness and knowledge?” No one falls out of the womb craving the general promotion of liberty, healthfulness or knowledge, either. In fact, it seems to me that the deepest value of all is pure self-interest. Should we limit the set of “all values” (values for which we consider general satisfaction conditions) to self-interest? What’s good for me is objectively good?

I’m not claiming that “the general promotion of liberty, healthfulness and knowledge” is a value. My view doesn’t even depend on liberty, healthfulness, or knowledge being values individually (though they might be). This, once again, was articulated in the OP. These things are value satisfaction conditions. We have reason to work toward them even if we don’t value them explicitly.

And no, we shouldn’t restrict the set of all values to self-interest. We have innate capacities for both selfishness and (non-reciprocal) altruism.

Antisocialdarwinist - 07 November 2017 09:59 AM

Suppose, hypothetically, that we consider power as belonging to your set of “all values.” I think a pretty strong case can be made that the general promotion of liberty runs counter to that value. (Just ask Little Rocketman.) Would that mean that “the general promotion of liberty” is not objectively good? Because the condition it leads to does not satisfy “all values?”

I’ve already said that I don’t think “power” (as you seem to be conceiving of it) counts as a value. And I’m not ruling it out because it seems to be in tension with the general promotion of liberty; I’m ruling it out because I think the facts of human psychology militate against it occupying the apex of any real value hierarchy. Here’s a useful sniff test: Is it possible for one to desire power, to obtain the power thus desired, and yet be generally unsatisfied? If so, then power probably isn’t a fundamental value.

Antisocialdarwinist - 07 November 2017 09:59 AM

In general, I think values are different enough that no conditions will satisfy them all. So your sketch depends on limiting “all values” to those which can be satisfied by what amounts to your preferred “general satisfaction conditions.” Which raises the question: if your set of “all values” cannot be objectively defined, can the resulting determinations about “good” and “bad” still be called “objective?”

You’ve failed to produce a convincing counterexample and are now trying to suggest I’m stacking the deck. This is not the case. I’ve given several independent criteria for value determination. I’ve done this from the very beginning, in fact. I didn’t come up with this moral theory and then jury-rig a definition of value to make it “true.” The theory actually followed from my thinking and reading about values, desires, and instrumental rationality.

 
 