Limits of Moral Discourse…contextual values, intentions, justifications

 
RBlake
Total Posts:  4
Joined  17-10-2015
 
 
 
17 October 2015 06:25
 

The Harris-Chomsky dialogue (beyond suggesting a joke about whether a conversation between two liberal, American, Jewish atheists could be interesting, since they’d probably just agree on everything) points out a fundamental philosophical rift about morality and values. I didn’t find the volley disappointing at all; it mirrored some of the discussions I’ve had with people on those subjects. When one party embraces moral relativism, always considering the context of the other side, while the other party, though acknowledging that his morals are not absolute, is still not afraid to use them to make value judgments…every sentence touches on the nerves of this basic disagreement. From Harris’ “Leftist Unreason…” about Iraq:

“If the situation had been reversed…”

Properly reversed? So ‘leader of the free world’ Saddam of Iraq, head of the largest democracy and the world’s richest country, has decided, in the name of liberty, to use the world’s largest army, along with allies, to remove dictator Bush, who had used chemical weapons on his own people; meanwhile the typical American was considered an innocent victim, liable not to fight hard anyway, and the US had no allies…

“...what are the chances that the Iraqi Republican Guard, attempting to execute a regime change on the Potomac, would have taken the same degree of care to minimize civilian casualties?”

Almost certain! What else can reversing the situations mean? Relating to terror, war crimes and violent behavior of all kinds, one of the principles has always been that those in positions of power are subject to a stricter code of morals, while those on the ropes are desperate and have to use more extreme means, including more collateral damage (which really means killing non-combatants, deliberately or not). Justifying specific acts of violence by appealing to good intentions (for Chomsky, “intentions” are really justifications) is beside the point. Whether a specific intention justifies an act of violence depends on the context and contingency of the specific actors. Maybe it helps to look at the high-level intentions in political conflict like war and terrorism, because they are more universal: “We seek to fight and win a cultural/religious/moral/ideological conflict against an opponent who is in the wrong!” In that sense every leader engaged in conflict has the same intention, be he terrorist or leader of an army representing a democratic country. So again, intentions are not relevant. The case Harris cites, the courts, is different because there a system of law, a particular moral code, is in effect. That’s why international law is so hazy and why, even in our own system, people frequently say “the law is an ass”.

In the dialogue proper there are many cases where this fundamentally different philosophical stance shows itself. Part of the explanation for whether or not you believe in objective moral values is cultural, and might be partly due to a generation gap and even birth order. If I were to guess, Harris is an eldest or only child while Chomsky has older siblings. Eldest children believe in absolute morality…younger sons don’t.

If anyone is still interested in “Limits…” I’d like to hear your thoughts. There’s lots more of interest in their exchange. I guffawed at Harris’ apparent disbelief that any American would engage in suicide bombing if the shoe were on the other foot!

[ Edited: 17 October 2015 06:31 by RBlake]
 
Nom de Plume
Total Posts:  93
Joined  14-12-2015
 
 
 
14 December 2015 17:07
 

*

[ Edited: 09 January 2016 04:26 by Nom de Plume]
 
sojourner
Total Posts:  5970
Joined  09-11-2012
 
 
 
14 December 2015 21:08
 

Hmm. Well, intellectually I relate to Chomsky in much the same way that my infant niece relates to the Baby Einstein toys hanging from her play mat. Doesn’t exactly know what they’re about, doesn’t integrate them into her daily life narrative in any particular way, doesn’t get what they’re supposed to do or what you’re supposed to do with them, but finds them strangely stimulating compared to the usual, predictable environmental stimuli and is glad they are there to sort of gaze and bat at and make squawky noises at. I cannot imagine any life circumstances under which I would meet Chomsky, but if I did, I assume the feeling would be mutual (“Oh look, a strange little creature furiously making talk noises in my direction. Wonder what the hell it’s trying to say?”)


As for Harris’s position - recently read Joshua Greene on utilitarianism, and it kind of solidified for me an awareness of a worrisome pattern among thinkers that I otherwise greatly admire. Harris has his “well-being of conscious creatures”, Jerry Coyne has talked about how people get overly rigid and dogmatic about “rights” (might not be the wording he used, but the general idea) as absolutes vs. mutually agreed-upon contracts, Paul Bloom has his “case against empathy” (in favor of, to my mind, essentially more utilitarian thinking), Peter Singer seems to be largely utilitarian, and Greene explicitly says that “rights” are really a free pass to bypass logic in making arguments and that we should value utilitarianism instead because it requires an appeal to logic.


For me, it was Greene who most clearly highlighted why I’m uncomfortable with this thinking, when he makes his case for utilitarianism. While the rest of his book is great, his positions on utilitarianism are just all over the place, to my mind. He says utilitarian nightmare scenarios wouldn’t exist because we humans are cooperative and just not wired that way, while elsewhere in the book he notes that things that very much occur in ‘nature’, like rape, are morally abhorrent to us. He waves away things like the Trolley Scenario as generally not applying to “the real world”, seemingly without considering that the ‘real’ world is ‘real’ in the way it is because it’s based on a certain set of axioms; if you changed those axioms to completely utilitarian ones, you would be dealing with a different ‘real’ world, based on those axioms. He says that things we consider morally horrific, like slavery, could never maximize well-being, and so doesn’t give an argument for what we would do if this were the case - it’s simply taken on faith that it couldn’t be, despite the fact that our ancestors obviously came to very different conclusions.


He does make one interesting argument for utilitarianism, although it actually comes full circle back to human rights in a yin-yang way - he says that essentially our measure for deciding things in utilitarianism would be something like the Golden Rule - i.e., what kind of rules we would want to set up in a hypothetical world where we had no way of knowing what role we were going to be assigned in said world, what lot we were going to pick or what our circumstances would be.

I think there is in fact a yin-yang dynamic between ‘rights-based’ thinking and utilitarian thinking, and perhaps it does come full circle and unify with the Golden Rule. But I find this trend among some of my favorite thinkers of somewhat de-emphasizing rights in favor of utilitarian themes problematic. To me, that’s like sending a ship to sea with no course and no anchor. In some ways I think that Sam’s moral framework is more absolute, as you mention, RBlake, but in some ways I think it has the hallmarks of a sort of utilitarian arbitrariness - he kinda decided that intentions are the be-all and end-all just - Because. I’m sure he could make a decent case for how intentions are going to maximize well-being overall, but that’s the thing about those types of arguments - you could make a million different arguments for what’s going to maximize well-being or pleasure or happiness or the greatest good or whatever. It’s Hobbesian anarchy all over again, in a new form. It’s not that I disagree with Harris; it’s more that I see this narrative as incomplete in the absence of a more concrete, unchanging counterbalance - although I’m not sure, in 2015, what that counterbalance should be. We’re quickly getting past the place in time where ironclad absolutes resonate with people, religious or otherwise. So what fills that space? I wish I could have grokked what Chomsky’s counterargument was, but to my mind Chomsky was talking nothing but specifics and Harris was talking nothing but general frameworks, so they inevitably talked past one another - if they had talked about that point, I would have found it really interesting.

 
 
Twissel
Total Posts:  2617
Joined  19-01-2015
 
 
 
15 December 2015 01:57
 

A nice thing about the ‘Moral Landscape’ motif is that it emphasizes the process and progress of utilitarian gain over any actual goal: we can tell on the landscape whether our last move has been ‘up’ (better) or ‘down’ (worse) in terms of optimizing well-being. Adding parameters to the landscape (like including the well-being of slaves, women, animals etc.) does not change the concept at all: it just adds dimensionality.
Though we might be aiming at the peaks on the landscape as the ultimate objective, even superficial consideration shows that, as we develop more ways to affect the world, the landscape changes and so the peaks will forever be out of reach: indoor plumbing and central heating would most probably have been a ‘peak’ in a 17th-century royal household, but they aren’t anymore.

In contrast, most other descriptions of utilitarianism focus on ultimate ‘good’ or ‘bad’ scenarios, which is neither realistic nor helpful.
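
To see how little machinery the idea needs, here’s a toy sketch in Python (entirely my own illustration, nothing from Harris; every parameter and number is invented): score a ‘state of the world’ over named well-being parameters, check whether a move was ‘up’ or ‘down’, and note that adding a parameter just adds a dimension without changing the comparison.

def well_being(state):
    # Toy scoring: just sum the (invented) parameter values.
    return sum(state.values())

old = {"health": 0.4, "liberty": 0.5}
new = {"health": 0.6, "liberty": 0.5}

# The landscape tells us whether the last move went 'up' or 'down'.
print("up" if well_being(new) > well_being(old) else "down")

# Adding a parameter (say, animal welfare) just adds dimensionality;
# the up/down comparison works exactly as before.
old["animal_welfare"] = 0.1
new["animal_welfare"] = 0.3
print("up" if well_being(new) > well_being(old) else "down")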

BTW: as far as I ‘grok’ Chomsky, his key argument is that ‘people are people’, i.e. that Saddam or the current rulers of Iran do not behave any differently than western leaders would behave in their situation, and have behaved under similar circumstances in the past. That we should not delude ourselves into thinking we have superior morals when all we have is superior wealth and a lack of long-term memory.

 
 
sojourner
Total Posts:  5970
Joined  09-11-2012
 
 
 
15 December 2015 14:37
 
Twissel - 15 December 2015 01:57 AM

A nice thing about the ‘Moral Landscape’ motif is that it emphasizes the process and progress of utilitarian gain over any actual goal: we can tell on the landscape whether our last move has been ‘up’ (better) or ‘down’ (worse) in terms of optimizing well-being. Adding parameters to the landscape (like including the well-being of slaves, women, animals etc.) does not change the concept at all: it just adds dimensionality.
Though we might be aiming at the peaks on the landscape as the ultimate objective, even superficial consideration shows that, as we develop more ways to affect the world, the landscape changes and so the peaks will forever be out of reach: indoor plumbing and central heating would most probably have been a ‘peak’ in a 17th-century royal household, but they aren’t anymore.

In contrast, most other descriptions of utilitarianism focus on ultimate ‘good’ or ‘bad’ scenarios, which is neither realistic nor helpful.


This tends to be the primary defense I see of utilitarianism. Kind of like “Oh, pshaw, that would never happen, c’mon.” But the problem with that is:


1. What do we base that confidence on? Certainly not history.

2. When you make that move you’re no longer appealing to utilitarian principles (more like you’re appealing to the inherent goodness of human nature), so it makes no sense to argue for utilitarianism with non-utilitarian arguments. If your argument is that humans are inherently good and wouldn’t do bad things, then you’re invoking another school of thought.

BTW: as far as I ‘grok’ Chomsky, his key argument is that ‘people are people’, i.e. that Saddam or the current rulers of Iran do not behave any differently than western leaders would behave in their situation, and have behaved under similar circumstances in the past. That we should not delude ourselves into thinking we have superior morals when all we have is superior wealth and a lack of long-term memory.


Perhaps. Honestly, while I like him a lot I find him insufficiently jolly for a sage-y type. Whatever he found, it doesn’t seem like it made him really happy, and I think that is concerning. “People are people” is one side of the coin, and it’s true, I think, but the equal, opposite and contradictory side is that goodness and transcendence really do exist. I couldn’t exactly tell you about the nature of those things but I’m confident that there are people who can, and those are the sage-y types I tend to tune into.

 
 
Twissel
Total Posts:  2617
Joined  19-01-2015
 
 
 
16 December 2015 00:26
 

Being aware of these extreme scenarios is precisely why Harris is interested in trying to come up with measurable ways to decide whether well-being is increasing or decreasing. It is not helpful to think about extremes of happiness, because it is very unlikely that they are as great as we imagine them to be: 72 virgins? Isn’t that going to be terribly stressful? Likewise, moments of extreme unhappiness will be recognized by those living in them: as soon as it is clear that there are alternatives, people will try to change the status quo. That might lead to a time of even less happiness, but at least it will provide a chance for improvement - that is what history shows.
The Moral Landscape is not about avoiding valleys of unhappiness - it’s about recognizing them for what they are, just like it’s about recognizing the splendid plateau we have achieved in the West so far, and about showing us where there is room for improvement and where there is a threat of decay.

 
 
sojourner
Total Posts:  5970
Joined  09-11-2012
 
 
 
16 December 2015 07:17
 
Twissel - 16 December 2015 12:26 AM

Being aware of these extreme scenarios is precisely why Harris is interested in trying to come up with measurable ways to decide whether well-being is increasing or decreasing. It is not helpful to think about extremes of happiness, because it is very unlikely that they are as great as we imagine them to be: 72 virgins? Isn’t that going to be terribly stressful? Likewise, moments of extreme unhappiness will be recognized by those living in them: as soon as it is clear that there are alternatives, people will try to change the status quo. That might lead to a time of even less happiness, but at least it will provide a chance for improvement - that is what history shows.
The Moral Landscape is not about avoiding valleys of unhappiness - it’s about recognizing them for what they are, just like it’s about recognizing the splendid plateau we have achieved in the West so far, and about showing us where there is room for improvement and where there is a threat of decay.


Well, see, this is where I think it gets interesting. Harris frames the problem, axiomatically, in a rather similar way as you do above - greatest possible misery vs. greatest possible happiness. So to me utilitarianism (and related philosophies, like WBCC) as an exclusive philosophy has a sort of coding problem, because it goes all the way back to binary when the world as we experience it is not binary, it is… (Hmm. Pauses. Googles “binary vs. _____”) I dunno, decimal based? Hexadecimal? Whatever the opposite of binary is. At the level where 1’s and 0’s represent things that work on probabilities and conditionalities within complex representational programs, not absolutes.


That’s a bit abstract. In practical terms, I mean that, yes, obviously utopia would be better than hell. I think we can all agree on that. But in between those two extremes, utilitarian-based thinking is only partially informative, especially in areas like human rights. What if we found out that the proverbial “utilitarian monster” really did exist - an advanced space alien who exists to us in the way we exist in relation to ants. This creature’s capacity for experience and happiness would be so exponentially larger than ours that it would actually increase overall happiness in the universe for it, assuming it lost its planet or something, to wipe out the human race and use Earth for itself. Would that be moral? What if we found that enslaving 10% of the population to do work for the other 90% did, in terms of totals, increase human happiness - would slavery then be moral? (You can give arguments like “That would never happen because humans would be too upset by slavery”, but obviously we weren’t in the past, and we all sleep at night knowing plenty of atrocities go on in the world today.) What if a particular man’s well-being would be so greatly increased by having sex that it would outweigh the suffering an unconscious woman would experience from being raped - is rape then the moral option?
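
Just to put toy numbers on that 10%/90% scenario - a sketch in Python of the naive ‘sum of totals’ reasoning, where every utility figure is invented purely for illustration:

population = 100
enslaved, free = 10, 90

baseline = population * 5.0                 # everyone at (invented) utility 5
with_slavery = enslaved * 1.0 + free * 6.0  # slaves drop to 1, the rest rise to 6

print(baseline, with_slavery)  # 500.0 vs 550.0 - the 'total' goes up

If all you count is the total, the horrific option ‘wins’, which is exactly the worry.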


Again, this is where the concept of “rights” tends to come in. Rights are our sort of ground rules, I guess, our rule of law or language rules or parameters or whatever you want to call that. But the thing about rights is that, given their anchor-like nature, we have to appeal to something rather less movable in this case. Sometimes it’s been God, sometimes ‘self-evidence’, sometimes force, sometimes consensus - but one way or another I think we have to get mutual axioms on paper for the corresponding utilitarian thinking to work. (Unless, as I always tend to add here, you want to go a mystic route.) I tend to like the concept of “self-evident” when it comes to rights, but I think as a society this is something we constantly have to work at. The more we see other sentient beings, at a felt level, as mini-universes of experience and pleasure and pain who love and suffer like we do, the more (at least this seems logical to me) we automatically self-regulate towards wanting to increase their happiness, which decreases the likelihood of the nightmare scenarios I mention above being an issue in the first place, which increases our ability to move towards more utilitarian-style solutions without trampling human rights. I think Sam sort of touches on this idea in “Waking Up” but I don’t think he’s ever explicitly combined the two - I could be wrong, though.
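
Continuing the toy Python from above (again, my own invented framing, not anything Sam has written): one way to code the ‘anchor’ is to treat rights as hard constraints, and only let utilitarian totals compete among states that violate none of them.

# Invented rights, expressed as checks a state of the world must pass.
RIGHTS = {
    "no_slavery": lambda s: not s["slavery"],
    "consent": lambda s: s["consent"],
}

def permissible(state):
    return all(check(state) for check in RIGHTS.values())

def better(a, b):
    # Prefer the higher total well-being, but never an impermissible state.
    if not permissible(a):
        return b
    if not permissible(b):
        return a
    return a if a["total"] >= b["total"] else b

status_quo = {"slavery": False, "consent": True, "total": 500}
nightmare = {"slavery": True, "consent": True, "total": 550}

print(better(status_quo, nightmare))  # the anchor vetoes the higher total

With the anchor in place, the 550 scenario loses even though its total is higher.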

 
 
Twissel
Total Posts:  2617
Joined  19-01-2015
 
 
 
16 December 2015 08:47
 

Yes, it is interesting.
We can, of course, take as given that the Moral Landscape (ML) will never include all the parameters necessary to truly reflect general well-being, as new ways to be happy or miserable are created all the time. So the ML can only reflect the most dire dimensions of human well-being, as described, for example, in the Universal Declaration of Human Rights or in well-being schemes like Bhutan’s. The assumption would be that this covers a large degree of (un)happiness, leaving the rest for the individual to fulfill - after all, a sense of self-determination is also critical for well-being. A right to a minimum level on these parameters (with the option for more) would be equivalent to universal rights.

But concerning the ‘utilitarian monster’ - I think we are much closer to something like this than we realize: transhumanism is creating these super-beings as we speak, human-machine hybrids who can do and know more than unassisted humans. Such a being might need some form of internet access almost as much as others need water or air - it would demand more happiness ‘bandwidth’. In the sense that these beings would be both our biological and intellectual children, we would be obliged to make way for them.

A lot of the problems of the ML (like the fact that not every human can have a mansion) can be dealt with when people get educated about what really promotes well-being, as opposed to what we desire: the hedonic treadmill ensures that we are never truly happy, always seeking what we cannot yet have. By refocusing on how to live to be truly happy, most materialistic obstacles to the ML can be removed.

 
 
sojourner
Total Posts:  5970
Joined  09-11-2012
 
 
 
16 December 2015 10:45
 
Twissel - 16 December 2015 08:47 AM

Yes, it is interesting.
We can, of course, take as given that the Moral Landscape (ML) will never include all the parameters necessary to truly reflect general well-being, as new ways to be happy or miserable are created all the time. So the ML can only reflect the most dire dimensions of human well-being, as described, for example, in the Universal Declaration of Human Rights or in well-being schemes like Bhutan’s. The assumption would be that this covers a large degree of (un)happiness, leaving the rest for the individual to fulfill - after all, a sense of self-determination is also critical for well-being. A right to a minimum level on these parameters (with the option for more) would be equivalent to universal rights.

But concerning the ‘utilitarian monster’ - I think we are much closer to something like this than we realize: transhumanism is creating these super-beings as we speak, human-machine hybrids who can do and know more than unassisted humans. Such a being might need some form of internet access almost as much as others need water or air - it would demand more happiness ‘bandwidth’. In the sense that these beings would be both our biological and intellectual children, we would be obliged to make way for them.


See, to me this is another place where broad-strokes theories like utilitarianism run into trouble. Where they say something specific, they tend to say things that are potentially absurd or horrific. When you factor out the absurd or horrific parts of utilitarianism with qualifiers - “Well yes but… this probably wouldn’t… well if this happened then we should…” - it doesn’t seem to say much of anything at all, other than in the most sweeping generalities, like “It’s good for people to be happy”.

A lot of the problems of the ML (like the fact that not every human can have a mansion) can be dealt with when people get educated about what really promotes well-being, as opposed to what we desire: the hedonic treadmill ensures that we are never truly happy, always seeking what we cannot yet have. By refocusing on how to live to be truly happy, most materialistic obstacles to the ML can be removed.


This is the only real rationalization I can see for a theory like this - that there is a strong naturalistic element to it just waiting to be discovered (one that we would consider ‘good’, presumably). I suppose people do this with other theories - e.g., those who say capitalism is the most moral because it allows people’s natural charitable instincts to be developed. I don’t think Sam has ever specifically made that argument (again, though, it’s not like I can remember every word he’s ever written or said, so maybe he has), but it seems to me that he must, at the very least, implicitly believe this in order to support such a philosophy.

 
 
sojourner
Total Posts:  5970
Joined  09-11-2012
 
 
 
16 December 2015 17:44
 

PS… Related to this thread, I found a cool site from Jonathan Haidt and various other researchers where you can take a quiz on your Moral Foundations and compare your scores with the averages for conservatives and liberals. I was:

Higher than either group for valuing:
- Care (Way higher than either group - what kind of jerks were taking this test?! Where are all my bleeding heart liberals at?)
- Loyalty

Right between the two groups for:
- Authority, sanctity, ownership

Exactly the same as the average conservative for ‘autonomy’

Lower than either group for:
- Fairness (Not sure why - I do think people with special needs should get special care though, and, vice versa, those who do more work should be proportionately rewarded vs. an “absolute equality” system)
- Honesty (Outside of white lies I try to tell the truth, but honestly, I don’t get particularly bent out of shape if other people don’t. I was raised in a family where it was always “We’re going to visit so-and-so, so be sure you pretend that you don’t know about X, and if they say Y just smile and say…,” so since I didn’t expect the truth from my nearest and dearest, I hardly expect it from strangers.)

Anyways, it was a fun quiz if anyone else wants to try it - you have to register, which was annoying, but you literally just put in an email, age, gender, and political preference for the registration.

 
 
Nom de Plume
Total Posts:  93
Joined  14-12-2015
 
 
 
19 December 2015 06:53
 

*

[ Edited: 09 January 2016 04:25 by Nom de Plume]