Previous problem of utilitarianism here. Next problem of utilitarianism here.
The Rawlsian conception of a just society is incompatible with Parfit's extension of utilitarianism.
Rawls claimed that a just society was necessarily a very egalitarian one. His argument was that if you were going to be placed into a society without knowing ahead of time what your role would be, then, if you're smart, you would want a society where there's not much difference between the guy at the top and the guy at the bottom. That is to say: sure, it would be a blast to be a plantation owner in the antebellum American South, but if you fell out of the sky at random into a role in that society, the chances are much greater that you'd end up as a slave or a tenant farmer breaking your back for one of the plantation owners.(1)
Parfit extended utilitarianism by saying that if we want the greatest good for the greatest number, we should want not just more happiness, but more people. The equation is: average happiness per person * number of people = total happiness. On this view, adding more people who each experience some happiness can even counterbalance a drop in the amount of happiness each person experiences. Another way of saying this: if utilitarianism is the greatest good for the greatest number, don't neglect the "number" part.
The full elaboration of this claim runs counter to most people's moral intuitions and leads to what's known as the repugnant conclusion (summarized below).
Imagine two societies. First, a society of a million people who have the best lives possible, lives that are 99% worth living. (I don't know, sometimes it's cloudy when they go to the beach; otherwise life is perfect.) Compare that to a society of a hundred million whose lives are only 1% better than death: they groan each day under the oppressive weight of a dictatorship, but sometimes see a nice flower, which keeps them from wanting to kill themselves.(2) Because 99% * a million is less overall happiness than 1% * a hundred million, the repugnant conclusion, according to Parfit's interpretation of utilitarianism, is that it's better to have the much bigger, much less happy society.
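The arithmetic here is just the average-times-population product from the equation above; a minimal sketch, using the post's illustrative percentages (not real measurements of anything):

```python
def total_happiness(avg_happiness, population):
    """Parfit-style total utility: average happiness per person times head count."""
    return avg_happiness * population

paradise = total_happiness(0.99, 1_000_000)    # ~990,000 utility points
dreary   = total_happiness(0.01, 100_000_000)  # ~1,000,000 utility points

# The huge, barely-happy society comes out ahead: the repugnant conclusion.
print(dreary > paradise)  # True
```

Note that nothing in the formula caps the population term, which is exactly what the repugnant conclusion exploits.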
The obvious rejection is that an individual experiences individual happiness - total happiness is not something anyone experiences - and the happiness an individual actually experiences is what matters. Of course, if you make that claim, you're arguing against utilitarianism.
To illustrate Parfit's repugnant conclusion in contrast to Rawls, let's apply it to a concrete historical case: black slavery in the United States. Of course, the QALY (quality-adjusted life year) measurements of utility will necessarily be a little fudged. On the eve of the American Civil War, the 1860 census counted 3,953,761 slaves in the United States. Let's round that up to four million and assume these people had lives 1% worth living(3) (after all, they're living in essentially the horrible dictatorship I described above). [Added later: the very next day after I wrote this post, I ran across Robin Hanson's blog post "Power Corrupts, Slavery Edition," which contains the statement "US south slave plantations were quite literally small totalitarian governments".] Now compare that to Avalon, on the island of Catalina off the California coast. Ever been there? It's really nice, as you might expect, with a population of just under 4,000, and while it's not completely egalitarian, you can't be bought or killed with impunity. It's a really nice place, so let's assume 99% average happiness. Parfit's math concludes that it's better to have that slave society than modern Avalon.
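Plugging the post's (admittedly fudged) numbers into the same average-times-population arithmetic makes the comparison stark:

```python
def total_happiness(avg_happiness, population):
    """Total utility = average happiness per person times number of people."""
    return avg_happiness * population

# The post's stipulated numbers: 4 million lives 1% worth living,
# versus Avalon's ~4,000 lives at 99% average happiness.
slave_society = total_happiness(0.01, 4_000_000)  # ~40,000 utility points
avalon        = total_happiness(0.99, 4_000)      # ~3,960 utility points

# On Parfit's ledger the slave society totals roughly ten times Avalon's utility.
print(slave_society > avalon)  # True
```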
By Parfit's interpretation of utilitarianism, the institution of slavery's impact on quality of life is not in itself a problem, so long as there are enough slaves to make up for it. Rawls could never recommend choosing a slave society over a non-slave society ("Well, how big a slave society is it?" is the question the repugnant conclusion says you should ask). By Rawls's lights (and most of our intuitions) the answer to which society you would rather be randomly thrown into is obvious, and it hinges entirely on whether moral value comes from some abstract total register of utility points or from the experienced utility of an individual human being. Since policymakers do these calculations to make real decisions, this absurd conclusion could conceivably make a difference, and some respected thinkers (Bryan Caplan and Michael Huemer among them) have argued that our intuitions are wrong.
Of course the counterargument is: if individually experienced utility is all that matters, isn't it better to have one really happy person than two ho-hum people? Shouldn't we feed the utility monster, then? I don't know, other than to say, fatalistically, that possibly moral reasoning is not a real process, and that we are unable to make decisions like this about groups of people we do not know. Which would be terrible, considering that modern societies are forced to do so all the time. But it would be consistent with Adam Smith's thought experiment about losing one's little finger versus an earthquake in China that kills a hundred million. Humans cannot reason about abstract people as moral agents, because we did not evolve with a need to do so - other than as threats or trade partners.
1. Rawls also suffers from the problem of differing agents: assume someone doesn't care about relative status, only absolute comforts. If such a person gets his head frozen and wakes up in a future ruled by absolute, un-displaceable overlords who nonetheless provide amazing experiences and material comfort, he might not care, even though someone else might chafe under such an uber nanny-state regime. I also wonder how meaningful such a choice can be, because there is no neutral position to choose from; everyone is habituated to the specifics of a time and place. I.e., to me England appears a nightmarish dystopia, but the people I've met from there seem to be reasonable people who enjoy their lives and even return there voluntarily, so who knows.
2. If you think assigning numbers to such situations is spurious and academic, I'm afraid I must inform you that they are very concrete and very real-world: health systems use units like DALYs and QALYs all the time to make decisions. And some systems do assign negative values, meaning that some conditions are considered to make life not worth living, i.e. literally worse than death.
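To illustrate the negative-value point: in QALY accounting, a health state's quality weight multiplies the years lived in it, so a negative weight actually subtracts from the lifetime total. A sketch with made-up weights, not values from any real health system:

```python
def qalys(years_by_state):
    """Sum of (quality weight * years lived in that state).
    1.0 = full health, 0.0 = death, negative = judged worse than death."""
    return sum(weight * years for weight, years in years_by_state)

# Ten years in full health, then five years in a state weighted -0.2:
print(qalys([(1.0, 10), (-0.2, 5)]))  # 9.0 -- the bad years subtract from the total
```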
3. I tried to look up the suicide rate for slaves, since this would give an idea of how many slaves thought their lives were not worth living. Although I couldn't find numbers, apparently suicide was unexpectedly rare, even though the threat of execution by owners would not have been an effective deterrent for slaves who thought continued slavery was worse than death. In several places (e.g. here) I saw an article referenced: David Lester, Center for the Study of Suicide, "Suicidal Behavior in African-American Slaves," Omega: Journal of Death and Dying, 37:1 (1998), 1-13.