The Is/Ought Problem

 
nonverbal
Total Posts:  1886
Joined  31-10-2015
 
 
 
12 December 2015 14:59
 
Gregoryhhh - 12 December 2015 02:10 PM

besides which, is there not a name of a fallacy that you just did?

I guess it would be the fallacy of the rhetorical, snide question!

 
 
GAD
Total Posts:  17887
Joined  15-02-2008
 
 
 
12 December 2015 15:54
 

I wonder if people will still be arguing about this thousands of years from now. Will the world be divided into the Isists and the Oughtists, warring over who has the right meaning/interpretation?

 
 
franz kiekeben
Total Posts:  121
Joined  06-12-2015
 
 
 
12 December 2015 17:39
 
icehorse - 12 December 2015 07:52 AM

I don’t think you’re accurately representing Harris at this point. He’s very clear that there can be many peaks on TML.

I’m aware that Harris claims there can be multiple peaks on the landscape, but I don’t see how that’s relevant here. Suppose (to make this simple) that there are two courses of action one can take in a given situation that lead to the greatest amount of well-being possible. Well, then on Harris’s utilitarian theory the right thing to do is either one of these two, and it cannot possibly matter which. You may as well flip a coin. All other courses of action, however, are less than optimal.

But the question under discussion here is whether one can (at least in principle) determine the (or even a) course of action that is objectively right to pursue when one knows all the facts. I’m saying one cannot: someone can rationally hold that some course of action other than the two highest on the moral landscape is the right one to pursue. And neither Harris nor anyone else has demonstrated that such a course of action would be objectively wrong.

 

[ Edited: 12 December 2015 17:42 by franz kiekeben]
 
Poldano
Total Posts:  3337
Joined  26-01-2010
 
 
 
12 December 2015 22:31
 

I spent an awful lot of time and thought arguing about this on Project Reason. I won’t say I have a definitive answer, but I do have some suggestions. The previous comments in this thread have also jostled my thinking a bit, and I shall have to consider them some more.

One suggestion for a proposition is that moral relativity cannot be avoided, but the basis for that relativity might be sufficiently widespread to become, effectively, a universal for a sufficiently large part of reality. My proposition for the basis (contingency) for any morality that might tentatively be considered as absolute is the preservation and continuation of the existence of reason.

How did I come by this notion? It occurred to me that some minimum level of morality (think of it as instinctual or hard-wired morality) was necessary in order for humans to coexist in large groups, which was in turn necessary for language to evolve, which was in turn necessary for subjectively-experienced notions to be communicated to other beings, which was a step in the process of culture based on transfer of information by way of symbols instead of by direct demonstration. The cognitive process thereby became decoupled from direct personal experience, and became more and more a process involving hypothesis and evidence.

Why is morality still necessary to support and continue reason, now that reason is well-established? It’s the use-it-or-lose-it rule of evolution. To simplify matters, I’ll restrict the argument to humans, because it gets very complicated if other species and intelligent aliens are introduced without dealing with humans first. Reason, as an evolved capability, is never entirely stable and set, but always subject to mutations and variations that tend to either weaken or strengthen it. If it weakens, and if morality was previously a necessary feature for it to have continued strengthening (as I contend the preponderance of evidence suggests), then morality is most likely to be a necessary feature fostering its future strengthening and preventing its continued weakening.

I should note that I don’t use “morality” in the sense of any particular existing morality, but in the sense of “some rules of behavior that a group of humans consider an acceptable inconvenience for the benefits of continued existence.” I think there are some general patterns in specific moral rules among all groups of humans, despite a great deal of variation, including some notable exclusions of some classes of individuals from some of the general patterns in some groups. Decoding that a bit, every group identifies some classes of individuals that are outside the “moral contract” of the group; typically these are out-group, outcast, or outlawed individuals. One of the current difficulties in morality (what is usually meant by “moral relativism”) is that the pre-existing specific moralities of different groups are not identical, so that there is no humanity-wide consensus on what is “fair” or “just”, or on what features of individuals make them outcast or outlaw, or on what is appropriate behavior toward members of out-groups. This deficiency can only be remedied by time and a great deal of both debate and attitude change. Such a process is very much hampered by groups who believe that their current moralities are absolute.

There is obvious Utilitarianism here. What’s different is that I claim that all existing moral codes were “Utilitarian” in their method of origin, in that they came about as simple conveniences that became established as “right” by tradition, and perhaps became “absolute” by virtue of the acknowledged “absolute” power of one or more human individuals. Utilitarianism in the strict Bentham and Mill model involves using a specific method to establish rules, which could be considered an application of Sociology.

I don’t deny the possibility of an absolute moral law, but I contend that we don’t yet know enough to be able to state it with any degree of either comprehensibility or certainty.

[ Edited: 12 December 2015 22:38 by Poldano]
 
 
Twissel
Total Posts:  2895
Joined  19-01-2015
 
 
 
13 December 2015 01:08
 
EN - 12 December 2015 12:21 PM
Ubik - 12 December 2015 11:44 AM

In almost any situation, we know exactly the ‘ought’ from the ‘is’: it’s a learned or instinctive response:
when a car is rapidly approaching, you ‘ought’ to get out of the way

Maybe not if your goal is to commit suicide, or to save your child from being hit.

Ubik - 12 December 2015 11:44 AM

when you see something great, you ‘ought’ not to steal it;

Maybe not if your child is starving and the something great is food.

Ubik - 12 December 2015 11:44 AM

when someone is impolite to you, you ‘ought’ not beat him to a pulp.

Maybe not if you also know that the person just murdered 6 people and the police are trying to find him before he murders 6 more.

Your response to any set of “IS” factors can depend upon any number of things.  We can agree on the IS factors and still disagree on the OUGHT response.

you misunderstand me: we know what we OUGHT to do - whether we do it or not is a completely different question.

 
 
EN
Total Posts:  21905
Joined  11-03-2007
 
 
 
13 December 2015 07:12
 
Ubik - 13 December 2015 01:08 AM
...

you misunderstand me: we know what we OUGHT to do - whether we do it or not is a completely different question.

I understand you, but if my goal is to commit suicide, what ought I to do?

 
GAD
Total Posts:  17887
Joined  15-02-2008
 
 
 
13 December 2015 10:03
 
EN - 13 December 2015 07:12 AM
...

I understand you, but if my goal is to commit suicide, what ought I to do?

To be, or not to be, that is the question:
Whether ‘tis Nobler in the mind to suffer
The Slings and Arrows of outrageous Fortune,
Or to take Arms against a Sea of troubles,
And by opposing end them: to die, to sleep
No more; and by a sleep, to say we end
The Heart-ache, and the thousand Natural shocks
That Flesh is heir to? ‘Tis a consummation
Devoutly to be wished. To die, to sleep,
To sleep, perchance to Dream; aye, there’s the rub,
For in that sleep of death, what dreams may come,
When we have shuffled off this mortal coil,
Must give us pause. There’s the respect
That makes Calamity of so long life:
For who would bear the Whips and Scorns of time,
The Oppressor’s wrong, the proud man’s Contumely,
The pangs of despised Love, the Law’s delay,
The insolence of Office, and the Spurns
That patient merit of the unworthy takes,
When he himself might his Quietus make
With a bare Bodkin? Who would Fardels bear,
To grunt and sweat under a weary life,
But that the dread of something after death,
The undiscovered Country, from whose bourn
No Traveller returns, Puzzles the will,
And makes us rather bear those ills we have,
Than fly to others that we know not of.
Thus Conscience does make Cowards of us all,
And thus the Native hue of Resolution
Is sicklied o’er, with the pale cast of Thought,
And enterprises of great pitch and moment,
With this regard their Currents turn awry,
And lose the name of Action. Soft you now,
The fair Ophelia? Nymph, in thy Orisons
Be all my sins remembered.

 
 
Twissel
Total Posts:  2895
Joined  19-01-2015
 
 
 
13 December 2015 10:04
 
EN - 13 December 2015 07:12 AM

I understand you, but if my goal is to commit suicide, what ought I to do?

Go against your learned response. You know that as a rule you shouldn’t, but you have convinced yourself that this is a necessary exception - in any case, you know the ‘ought’.

 
 
Nom de Plume
Total Posts:  93
Joined  14-12-2015
 
 
 
19 December 2015 19:42
 
franz kiekeben - 12 December 2015 05:39 PM
...

But the question under discussion here is whether one can (at least in principle) determine the (or even a) course of action that is objectively right to pursue when one knows all the facts. I’m saying there isn’t: I’m saying someone can rationally hold that some course of action other than the two highest on the moral landscape is the right one to pursue. And neither Harris nor anyone else has demonstrated that such a course of action would be objectively wrong.

Given that it’s impossible to determine the peaks with any degree of certainty, the question is irrelevant. The Butterfly Effect must be taken into account in a discussion of this sort. Here’s what I mean. Suppose one course of action causes a hurricane that wipes out a very large, heavily populated region. The greatest good could very well have been along the path not chosen. Suppose the other path led to a raging forest fire that devastated thousands of properties and left hundreds dead. Neither of these consequences could have been predicted. We have no way to determine the long-term consequences of any of our actions. Even the very least of them can have extremely large consequences for the good, or for the bad. It’s imperative, IMO, that we find an objective basis for morality if we are to make a science out of it. Though Harris agrees on this point, he has not yet done so. Good and Evil (Bad) are purely subjective, whether he chooses to believe that or not, and even assuming that they are objective we still have the inescapable problem of unpredictable long-term consequences.
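The sensitivity to initial conditions being appealed to here is easy to demonstrate with a toy model. The sketch below is illustrative only: the logistic map is a standard textbook chaotic system, not a model of weather or morality. Two starting states that differ by one part in a billion soon disagree completely.

```python
# Butterfly Effect in miniature: the logistic map x -> r * x * (1 - x)
# with r = 4 is deterministic but chaotic, so tiny input differences
# grow until the two trajectories bear no resemblance to each other.

def logistic(x, r=4.0):
    """One step of the logistic map."""
    return r * x * (1.0 - x)

def trajectory(x0, steps):
    """Iterate the map, returning the full sequence of states."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.200000000, 50)
b = trajectory(0.200000001, 50)  # differs by one part in a billion

# Early on the trajectories are indistinguishable; after a few dozen
# steps they have fully diverged, despite the deterministic rule.
print(abs(a[1] - b[1]), abs(a[-1] - b[-1]))
```

The same point carries over to consequences of actions: a rule being deterministic does not make its long-term outcomes predictable.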

 

[ Edited: 19 December 2015 20:07 by Nom de Plume]
 
Poldano
Total Posts:  3337
Joined  26-01-2010
 
 
 
20 December 2015 03:00
 
Nom de Plume - 19 December 2015 07:42 PM

...

Given that it’s impossible to determine the peaks with any degree of certainty the question is irrelevant. The Butterfly Effect must be taken into account in a discussion of this sort. Here’s what I mean. Suppose one course of action causes a hurricane that wipes out a very large heavily populated region. The greatest good could very well have been along the path not chosen. Suppose the other path led to a raging forest fire that devastated thousands of properties and left hundreds dead. Neither of these consequences could have been predicted. We have no way to determine the long term consequences of any of our actions. Even the very least of them can have extremely large consequences for the good, or for the bad.  It’s imperative, IMO, that we find an objective basis for morality if we are to make a science out of it. Though this is agreed upon by Harris, he has not done that yet. Good and Evil (Bad) are purely subjective, whether he chooses to believe that or not, and even assuming that they are objective we still have the inescapable problem of unpredictable long term consequences.

I’m of the opinion that all moral values were developed “scientifically”, in the very broad sense. This is because I consider science to be a formal extension of inductive reasoning, which is an abstract mechanism that also applies to the processes of biological and cultural evolution. So, the first step of any moral science can be observational. This is currently happening. Moral engineering is what Sam proposes, but before doing any kind of engineering it is usually a good idea to have a good grasp of the science pertaining to what is to be engineered. This we do not currently have, because we are still in denial that the study of morality needs science.

“Good” and “evil” are subjective values, but they align with cultural values for most humans. Humans are divided into many groups, among which the culturally-correct subjective valuations “good” and “evil” differ, but within which they are substantially the same. Despite the inter-group differences, there are probably many valuation correspondences across cultural groups. These correspondences are probably good places to look for anchor points for a pan-cultural morality. Human actions that don’t have a high correlation of valuation across cultures are probably good places to look for accidental variations that are either caused by specific cultural traits that are themselves independent of good-evil valuation, or traditional-historical accidents such as group identifiers that have no moral effect other than to help define group membership and boundaries (i.e., tribal delimiters).
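The "anchor point" idea above can be sketched in a few lines. Everything in this snippet is invented for illustration - the groups, the actions, and the ratings are hypothetical, and real cross-cultural data would need far more care - but it shows the shape of the method: actions whose good/evil ratings agree across groups are anchor candidates, while high-disagreement actions are candidates for tribal delimiters.

```python
# Hypothetical good/evil ratings of the same actions by four cultural
# groups, on a scale from -1 (strongly "evil") to +1 (strongly "good").
# Low spread across groups suggests a pan-cultural anchor point; high
# spread suggests a group identifier rather than a shared moral valuation.

from statistics import pstdev

ratings = {
    "unprovoked killing":  [-0.9, -0.95, -0.85, -0.9],
    "caring for children": [0.9, 0.85, 0.95, 0.9],
    "eating taboo food":   [-0.8, 0.1, 0.6, -0.2],  # tribal delimiter?
    "dress-code breach":   [-0.7, 0.2, 0.5, 0.0],
}

def classify(scores, threshold=0.2):
    """Classify an action by the spread of its ratings across groups."""
    return "anchor" if pstdev(scores) < threshold else "delimiter"

for action, scores in ratings.items():
    print(f"{action}: {classify(scores)}")
```

The threshold is arbitrary here; in practice it would have to be chosen from the data rather than by fiat.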

 
 
Nom de Plume
Total Posts:  93
Joined  14-12-2015
 
 
 
20 December 2015 06:38
 
Poldano - 20 December 2015 03:00 AM
Nom de Plume - 19 December 2015 07:42 PM

...


I’m of the opinion that all moral values were developed “scientifically”, in the very broad sense. This is because I consider science to be a formal extension of inductive reasoning, which is an abstract mechanism that also applies to the processes of biological and cultural evolution. So, the first step of any moral science can be observational. This is currently happening. Moral engineering is what Sam proposes, but before doing any kind of engineering it is usually a good idea to have a good grasp of the science pertaining to what is to be engineered. This we do not currently have, because we are still in denial that the study of morality needs science.

“Good” and “evil” are subjective values, but they align to cultural values for most humans. Humans are divided into many groups, among which the culturally-correct subjective valuations “good” and “evil” differ, but within which they are substantially the same. Despite the inter-group differences, there are probably many valuation correspondences across cultural groups. These correspondences probably are good places to look for anchor points for pan-cultural morality. Human actions that don’t have a high correlation of valuation across cultures are probably good places to look for accidental variations that are either caused by specific cultural traits that are themselves independent of good-evil valuation, or traditional-historical accidents such as group identifiers that have no moral effect other than to help define group membership and boundaries (i.e., tribal delimiters).

I’ll give that a big thumbs up. I posted something very similar in another thread last night.

 
Twissel
Total Posts:  2895
Joined  19-01-2015
 
 
 
20 December 2015 07:06
 

Preamble:
Landscapes are mathematical descriptions of functions of multiple variables. When there are few parameters, we can exhaustively map the landscape and know exactly which parameter values are required to reach its peaks and pits.

But when such functions become more complex, numbering hundreds of parameters, we can no longer find those peaks and pits just by looking at the formula (not to mention that we cannot visualize a 2000-dimensional surface) - we need search algorithms, time and luck to reach points of sufficient elevation. The algorithms are optimization strategies based on the shape of the landscape at a specific point, i.e. how the terrain curves along each parameter axis. These algorithms have names such as ‘Steepest Descent’ or ‘Newton-Raphson’ and can be very good at finding the quickest route to the nearest peak (or pit). They cannot, however, tell you whether that extreme point is the highest (or lowest) one on the entire landscape (a global extremum) or only in the neighborhood where you started (a local one).
Other search algorithms jump randomly across the landscape to find points of interest - hence the name Monte Carlo methods.

Main point:
Any description of a Moral Landscape would be so high-dimensional that we would have no hope of finding the peaks just by guessing - which is what we have done as a society so far.
What would be required is meticulous record-keeping on the current state of well-being, across time and across as many parameters as possible. Only then could we suggest the best policies, and tell whether those policies bring us closer to, or further from, a world optimized for human well-being.
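The preamble and main point above can be made concrete with a toy example. The sketch below is illustrative only: the landscape function, its two peaks, and all the numbers are invented, and the one-dimensional hill climber is a crude stand-in for the gradient-based optimizers named above.

```python
# A toy 1-D "landscape" with a local peak near x = -1 and a higher global
# peak near x = 2. Greedy local search stops at whichever peak is nearest;
# Monte Carlo-style random restarts give a good chance of finding the
# global one.

import random

def landscape(x):
    """Two bumps: heights roughly 1.2 (near x = -1) and 2.1 (near x = 2)."""
    return 1.0 / (1 + (x + 1) ** 2) + 2.0 / (1 + (x - 2) ** 2)

def hill_climb(x, step=0.01, iters=10_000):
    """Greedy local search: move only while the elevation increases."""
    for _ in range(iters):
        if landscape(x + step) > landscape(x):
            x += step
        elif landscape(x - step) > landscape(x):
            x -= step
        else:
            break  # neither neighbor is higher: we sit on a (local) peak
    return x

# Started in the left basin, the local optimizer stops at the lower peak.
local = hill_climb(-1.5)

# Random restarts: sample many starting points, climb from each, keep the
# best peak found.
random.seed(0)
best = max((hill_climb(random.uniform(-5.0, 5.0)) for _ in range(200)),
           key=landscape)

print(landscape(local), landscape(best))
```

A real moral landscape, with thousands of parameters and no closed-form expression, would make even this much harder - which is the point of the post above.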

 
 
Nom de Plume
Total Posts:  93
Joined  14-12-2015
 
 
 
20 December 2015 07:53
 
Twissel - 20 December 2015 07:06 AM

...

Main point:
Any description of a Moral Landscape would be so high-dimensional that we would have no hope of finding the peaks just by guessing - which is what we have done as a society so far.
What would be required is meticulous record-keeping on the current state of well-being, across time and across as many parameters as possible. Only then could we suggest the best policies, and tell whether those policies bring us closer to, or further from, a world optimized for human well-being.

That’s why the moral code must center around the individual. Good and Bad are really just different words for Pleasure and Pain. That’s what it breaks down to, and these occur on an individual basis. That is, we don’t collectively feel pain. Being empathetic is certainly not the same as actually experiencing these sensations. The Pleasure/Pain response is also central to an individual’s instinctive drives, which are quite objective. Morality, then, has its roots in the psyche of the individual, not in the collective consciousness of a society.

In the sense of their psychological link to pleasure and pain, good and bad also take on objective form; that is, they truly exist for the individual. As suggested elsewhere by myself, and now apparently also by Poldano, when you have a society full of like-minded individuals (a common culture), then what is good for one individual will ideally be good for all of the other individuals as well, or restated, it should theoretically be good for society in general. In logical terms: if A (an individual’s values) implies Z (the moral code), and A = B = C = D = E (the values of the other members of a closed society), then B, C, D, and E each imply Z as well.

This “moral certainty” (as opposed to moral relativism) would require a general consensus. The objectivity of a moral code under these circumstances would be simply that it is itself an objective thing, i.e., it exists! In a society like the United States, we have people from many cultures, and some of the values of sub-groups are simply incompatible with the values of other sub-groups. Morality in this system is not objective, because it is self-contradictory, i.e., it cannot be defined, or pinned down. There is no consensus. There is no moral certainty, and thus it does not exist, and for that reason does not achieve objectivity. There is only the objective individual’s feelings of good and bad.

When we apply these precepts to the real world we can see them vividly in action. There is constant strife between individuals and between sub-cultures, and the lack of moral consensus is the root cause of that. It seems quite obvious to me what needs to be done to eliminate all of these disputes and skirmishes. Isn’t it obvious in hindsight that forcing people together who have incompatible values is a recipe for disaster? How hard is that? Do we honestly believe that it’s the other person’s duty to conform to our mores? Why doesn’t he/she have the same right to demand that it is we who change over to their mores? Yet we in America continue blindly down this insane path, convinced that new legislation will magically fix it. Absurd!

 

[ Edited: 20 December 2015 08:18 by Nom de Plume]
 
sojourner
Total Posts:  5970
Joined  09-11-2012
 
 
 
20 December 2015 08:33
 
Nom de Plume - 20 December 2015 07:53 AM

This “moral certainty” (as opposed to moral relativism) would require a general consensus. The objectivity of a moral code under these circumstances would be simply that it is itself an objective thing, i.e., it exists! In a society like the United States, we have people from many cultures, and some of the values of sub-groups are simply incompatible with the values of other sub-groups. Morality in this system is not objective, because it is self-contradictory, i.e., it cannot be defined, or pinned down. There is no consensus. There is no moral certainty, and thus it does not exist, and for that reason does not achieve objectivity. There is only the objective individual’s feelings of good and bad. When we apply these precepts to the real world we can see them vividly in action. There is constant strife between individuals and between sub-cultures, and the lack of moral consensus is the root cause of that. It seems quite obvious to me what needs to be done to eliminate all of these disputes and skirmishes. Isn’t it obvious in hindsight that forcing people together who have incompatible values is a recipe for disaster? How hard is that? Do we honestly believe that it’s the other person’s duty to conform to our mores? Why doesn’t he/she have the same right to demand that it is we who change over to their mores? Yet we in America continue blindly down this insane path, convinced that new legislation will magically fix it. Absurd!


There was an episode of South Park that dealt with the kind of uber-segregation you’re envisioning. Every group and species lived, literally, on their own planet. The thing is that, in reality:


1. Groups with mutually incompatible values can’t go live on separate planets, and what they instead end up doing is fighting each other endlessly. If “separate but equal” really were better for human well-being than pluralism, history would have shown this long ago. It most decidedly hasn’t.

2. The idea that a group can “go off on their own” and just live happily doesn’t factor in the role of environment in human morality. Morality is going to change based on environmental circumstances. I’m pretty confident in saying that no group, ever, in all of history, has simply lived by the literal interpretation of their core doctrines and mores with no environmental influence. So, divide humans into camps and soon those camps will want to divide into camps, and those camps will want to divide into camps, and on and on and on. (Now, if you do that all the way down to every individual being, perhaps you get back to something like the anarchist concept of “free association”, but honestly, I don’t know enough about what that is or how it works to speak about it. There may be an element of voluntary segregation within that concept, but I think there are important differences, in that it is free and not enforced - again, not something I know much about, so I can’t speak to whether or not I find that system ethical.)

 
 
Nom de Plume
Total Posts:  93
Joined  14-12-2015
 
 
 
20 December 2015 10:18
 
Niclynn - 20 December 2015 08:33 AM
...


There was an episode of South Park that dealt with the kind of uber-segregation you’re envisioning. Every group and species lived, literally, on their own planet. The thing is that, in reality:


1. Groups with mutually incompatible values can’t go live on separate planets, and what they instead end up doing is fighting each other endlessly. If “separate but equal” really were better for human well-being than pluralism, history would have shown this long ago. It most decidedly hasn’t.

2. The idea that a group can “go off on their own” and just live happily doesn’t factor in the role of environment in human morality. Morality is going to change based on environmental circumstances. I’m pretty confident in saying that no group, ever, in all of history, has simply lived by the literal interpretation of their core doctrines and mores with no environmental influence. So, divide humans into camps and soon those camps will want to divide into camps, and those camps will want to divide into camps, and on and on and on. (Now, if you do that all the way down to every individual being, perhaps you get back to something like the anarchist concept of “free association”, but honestly, I don’t know enough about what that is or how it works to speak about it. There may be an element of voluntary segregation within that concept but I think there are important differences that differentiate it, in terms of it being free, and not enforced - again, not something I know much about, though, so I can’t speak to whether or not I find that system ethical or not).

Yes, there may be a perpetual dichotomy between theory and practice. I don’t know that for a fact though, because history doesn’t always constrain the future. With psychological/behavioral data streaming in we could some day potentially objectify right/wrong, universally, to a sufficient level as to break free of all superstition, as a species, thereby bringing everyone into the same fold.

Admittedly that won’t occur in the near future, but I can’t rule it out as a real possibility. The point is that it certainly won’t happen if we don’t believe that it’s possible, and I for one believe firmly that it is.

As for your point about “segregated cultures on different planets continuing to war with one another”: this assumes that those people are all from present-day Earth. If however they are enlightened about the stupidity of warring, understanding that they are committing a fallacy in trying to convert each other to their own mores, then they would not continue to war with one another. They would instead embrace their differences, understanding that moral certainty can exist only within groups, such as those groups that they compose. From this they can deduce that they each have no logical grounds to condemn the behaviors of people on the neighboring planets.

There may be other reasons for them to war with one another, perhaps due to something as simple as lack of food during a worldwide drought. But if each planet is self-sufficient, there should be no reason for them ever to go to ethnic war with one another.


 

[ Edited: 20 December 2015 10:56 by Nom de Plume]
 