This is merely a repost of the question I asked Sam on his AMA. If any economists or game theory experts have insight, please indicate your qualifications: “You say in your book Free Will, ‘The illusion of free will is itself an illusion.’ Might our intuitions regarding collective preferences be similarly illusory? That is, could aggregating preferences in the way utilitarians must be something that doesn’t make sense, even in principle? Arrow’s impossibility theorem makes precisely this claim with regard to social choice theory. There seems to be no reason Arrow’s conclusion could not easily be ported to utilitarianism. To be clear, this would essentially dismantle all utilitarian or consequentialist theories.”
EDIT: For a layman’s-terms explanation of Arrow’s Theorem: https://goo.gl/MGV8m8
Ok, so no one with the relevant expertise (or at all) has thus far chimed in. I’ve done quite a bit of research since this initial post. Interestingly, Arrow himself was a Utilitarian (as are most social welfare economists, it turns out). The rub is, the consequences of accepting Arrow’s theorem are as dire as I said in my initial post, unless you either weaken Arrow’s fairness criteria, or reject the following: “preference orderings contain no information about each individual’s strength of preference or about how to compare different individuals’ preferences with one another. Statements such as ‘Individual 1 prefers alternative x more than Individual 2 prefers alternative y’ or ‘Individual 1 prefers a switch from x to y more than Individual 2 prefers a switch from x* to y*’ are considered meaningless.” More succinctly, rejecting this amounts to accepting that utility is interpersonally comparable. Neither solution seems reasonable.
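To make the quoted point concrete: here is a minimal sketch (names and numbers are my own invention, purely for illustration) of why an ordinal preference ranking carries no strength-of-preference information. Two hypothetical utility functions that disagree wildly about *how much* x is preferred induce exactly the same ordering, so the ordering alone cannot distinguish a mild preference from an overwhelming one:

```python
# Two hypothetical utility assignments over the same alternatives.
# u1 says x is preferred "enormously"; u2 says x is preferred "slightly".
alternatives = ["x", "y", "z"]
u1 = {"x": 100.0, "y": 1.0, "z": 0.0}
u2 = {"x": 3.0,   "y": 2.0, "z": 1.0}

# Rank alternatives from most to least preferred under each function.
ordering1 = sorted(alternatives, key=lambda a: -u1[a])
ordering2 = sorted(alternatives, key=lambda a: -u2[a])

print(ordering1 == ordering2)  # True: identical rankings, very different "intensities"
```

Since any monotone rescaling of a utility function leaves the ordering untouched, the intensities are simply not recoverable from the ranking, which is exactly why interpersonal comparisons are treated as meaningless in the ordinal framework.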
To be clear, in order to preserve Utilitarianism or any consequentialist philosophy, you must deny one of the following:
(1) If every member of a group prefers X to Y, the group prefers X to Y.
(2) If the group prefers X to Y and Z enters the race, the group still prefers X to Y.
(3) There is no dictator who decides for the group X over Y.
(4) There is no way to compare my preference for X over Y to your preference for X over Y in any reliably quantifiable and measurable sense.
The first three (Arrow’s theorem) seem to be non-negotiable requirements we would have for any system of aggregating preferences. The final intuition seems to be required by the inherently subjective nature of conscious experience. That is, none of these seems up for grabs. Would love to hear a response, from anyone at all. Again, a layman’s explanation of Arrow’s theorem is sourced above, and here is my source regarding interpersonal comparability (see the last paragraph on Arrow’s theorem and the first paragraph on welfare aggregation): https://plato.stanford.edu/entries/social-choice/#ArrThe
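For anyone who wants to see the aggregation problem fail in miniature: the classic Condorcet cycle shows pairwise majority voting (one natural aggregation rule) producing an intransitive group preference from perfectly transitive individual preferences. This is the kind of breakdown Arrow's theorem generalizes to *every* rule satisfying conditions (1)–(3). A small sketch (the voter profile is the standard textbook example, not anything from the thread):

```python
# Three voters, each with a transitive ranking of X, Y, Z
# (listed from most preferred to least preferred).
voters = [
    ("X", "Y", "Z"),  # voter 1: X > Y > Z
    ("Y", "Z", "X"),  # voter 2: Y > Z > X
    ("Z", "X", "Y"),  # voter 3: Z > X > Y
]

def majority_prefers(a, b):
    """True if a strict majority of voters ranks a above b."""
    wins = sum(1 for ranking in voters if ranking.index(a) < ranking.index(b))
    return wins > len(voters) / 2

# Pairwise majorities form a cycle: X beats Y, Y beats Z, yet Z beats X.
for a, b in [("X", "Y"), ("Y", "Z"), ("Z", "X")]:
    print(f"majority prefers {a} over {b}: {majority_prefers(a, b)}")
```

Each pairwise contest is decided 2–1, so "the group" prefers X to Y to Z to X, and there is no coherent group ranking at all, even though every individual ranking is coherent.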
Again, I want to reiterate that this is not a reductio ad absurdum of Utilitarianism in the style of most criticisms. It’s instead a direct undermining of what Utilitarianism presupposes it is possible to do in principle. Please, anyone, confirm I’m not screaming into the void.
I believe that your question is somewhat misleading for two reasons:
1. Firstly, it is clear that free will and the feeling of preference (or inclination towards something) are both subjective experiences. However, they are different categories of subjective experiences. I would argue that collective preferences are not illusory in the way that free will is illusory. Just ask yourself, what is an illusion? It is a false idea/belief/impression. So, if you feel like you have free will, but in reality you know that’s not the case - that is an example of an illusion. Whereas preferences, even though they are subjective, aren’t misleading or invalid in the same way. If I don’t like cinnamon, I know that it is a subjective fact about my experience. There is nothing you can tell me to convince me to like cinnamon. You either do or you don’t. Therefore, by definition, tastes or preferences cannot be “illusions” in the same way that free will is.
2. The second point I would like to raise is that our ethical behavior is based ENTIRELY on our preferences. By this I mean you cannot force someone to be moral. If the consequences of our actions are to be of any moral significance, then those actions must be performed voluntarily. In essence, then, when humans behave ethically we are doing so because we want to, which is to say that we are acting out of preference.
Therefore, I must say that if you try to erode away facts about our subjective experiences as illusions, then yes, ethical discussions become practically impossible. But I see no reason or basis to do that. On the other hand, as Sam elaborates in his book, accepting free will as an illusion based on the nature of our brains and behavior actually expands the platform for important ethical discussions.
I’m probably missing some of the background but I’ll take a stab for the sake of conversation.
Utilitarianism, from my reading, does not aggregate preferences. It attempts to isolate utility. Preferences can be nested within utility with a bit of imagination but utility does not reduce to preference. The project is not democratic in this way. It is attempting to lay a logical structure onto needs. If the total utility of the group demands a course of action that the consensus opinion of the group does not affirm we are still obligated to act upon the former… on this model.
Preferences are subjective in the sense that they are private experiences but they are also a product of an evolutionary narrative. (I think) We like salt and fat and sex and the experience of victory and whatever we like because of our common heritage. Our preferences tend to converge in practical experience because we are social animals. Expound on this line for as long as necessary but I think that any valid moral theory must converge with science.
As you point out there is a gap in moral theory. A leap that says one is entitled to make anything like a moral judgment in the first place. There is no foundation outside of our intuition or at least none that is apparent to me.
I feel like most any moral theory we can devise will be subject to a similar set of paradoxes. I feel like classic models are not so much wrong as they are incomplete. Ok, some are just wrong but many can be considered as facets. I think there are many concepts within utilitarianism that we should salvage and integrate.
It seems to me that free will is subtly different in that it is a general statement about the backdrop of reality based on determinism; whereas morality is based in part on the capacity of human minds to move forward and backward - in hypothetical terms - in time.
To clarify - imagine a sentient being with conscious experience but absolutely no ability to relate to timelines - either in the form of learned responses from the past or simple approach / avoidance behaviors in behavioral anticipation of the future. I don’t actually think any such organism exists, but if it did, to me it is clear that it would have no morality. It would exist in a fleeting whirlwind of whatever happened to be happening at that exact moment.
Organisms with a sense of time, however, base morality around future outcomes - both for themselves and for others. Even people on the very laid back end of the spectrum do this all the time. I work with special needs children, and sometimes I find their various quirks so endearing that I’m not especially motivated to change them. A need to break any DVD in the room into exactly two pieces. A preference for eating my eyelashes. Affectionate head pats that feel like being the object of whack-a-mole due to poor motor modulation. But even if you tend heavily towards the accepting “aaaw, everything you do is cute” end of the spectrum, you realize that the moment you are currently in does not last forever. Children grow up and end up in future environments, where people may not be as charmed by such quirks, and so we shape their behaviors in expectation of this.
The issue with aggregating preferences, I think, is that preferences are at least to some degree malleable. If we were simply dealing with a conglomerate of preferences, then we would be a human hurricane that sort of played out according to the laws of physics. I think morality comes in (in part; there are other factors, of course) where humans anticipate future states and preferences and hypothesize about how to shape themselves and their fellow humans accordingly. (Some do this in absolutely dictatorial ways, as in the case of, well, dictators; some in the most laid-back ways, but again, I think we all do this to at least some degree. If a parent lets their child grow up feral, moved by nothing but momentary preferences, then we charge them with negligence.) And of course we have different hypotheses about what will lead to what outcomes, and even if we learn the outcome of a given hypothesis under one set of conditions, it does not speak to the outcome of that hypothesis under any set of conditions. (Look at how long democracy was considered a pipe dream, historically, for example - often with evidence to back this up - when it emerged under different historical conditions. And yet this did not hold true forever.)
Of course if you want to back way way waaaaay up, you can say that a lack of free will applies to everything, including the various ideas and hypotheses that people hold. I think this is more or less true, although I don’t know if viewing it from such a zoomed out level is always useful in discussing morality. I think it’s useful in not blaming people for holding ideas that you think are bad ideas, but when discussing the ideas themselves, you don’t have to add the addendum “And of course they didn’t ultimately choose to have this idea!” each and every time. You can zoom in a bit more and simply focus on likely outcomes.
As an aside, where I disagree with utilitarianism is that I think any totally linear ‘plan’ for morality turns into an Orwellian loop that circles back around and eats its own tail eventually. It’s similar to the idea in Buddhism (and other places) that ironically, you do not obtain happiness by chasing happiness. Focusing on the greatest good for the greatest number pretty much destroys the basis for human rights, which in turn leads away from the greatest good for the greatest number. I think, in practical terms, we need a sort of homeostatic balance between utilitarianism and the equal valuing of all lives, and every individual’s right to pursue happiness. The greatest good for the greatest number is completely unacceptable if it means the greatest possible misery for a small minority, for example.