Tuesday, August 16, 2016
Problems of Utilitarianism #3: Hypothetical Happiness Matters, Unless It Doesn't

I spent the previous post on problems of utilitarianism critiquing the repugnant conclusion, arguing that what counts is utility as experienced by individuals, not some big abstract pile of hypothetical total utility points separate from the people who experience them. This explains why it's better to have some happiness-experiencing people than none at all, but not why we shouldn't feed the utility monster.
If we define "bad" simply as decreasing utility, then quickly and humanely killing someone in their sleep would be acceptable, because they don't suffer. But this isn't what our moral intuitions lead us to do. We endorse ending the lives of living things only when there is no possibility of future positive experience, experience that would make life worth living. Hence a patient with ALS looking to the future may choose assisted suicide, but we don't sneak up to the hospital bed of someone suffering from pneumonia and give them a quick overdose of fentanyl. (You might object, "You're missing the entire point, which is to ignore our moral intuition and follow moral reasoning where it leads." To which the appropriate counter-objections are: a) Seriously? and b) You're definitely not invited over for my slumber party.)
The problem is that not just the pile of utility points but all future positive experience is an abstraction too. In fact almost all utility is abstract, i.e. not currently being experienced. We don't just react to the immediate pleasure or pain we're experiencing this millisecond; we're also setting ourselves up to avoid future pain and gain future pleasure. Abstract though it is, though, that future utility will eventually be experienced non-abstractly by an individual.
One might argue that to make others' suffering less abstract, the answer is to increase empathy, so that you're motivated to help them and everyone's utility increases; or alternatively to decrease empathy, so that you can't be haunted by others' suffering. It is possible that from a population perspective one of these strategies is more stable and self-perpetuating than the other, and this could be modeled experimentally (see the sketch below). Whether such a model would usefully apply to a naturally evolved nervous system (even a social one) is another question, since these nervous systems evolved as machines subservient to the mission of spreading their genes, and if suffering achieves that, then suffering is what will be employed. It's when the nervous systems rebel and start thinking the universe is about them that the story gets confused, which may be why these inconsistencies in reasoning about morality and happiness keep appearing.
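As a minimal sketch of what such a model might look like (everything in it is a hypothetical toy assumption of mine, not anything from the post or from evolutionary biology proper: the benefit B, the cost C, the ASSORT parameter, and the two-strategy setup), here is a small replicator-dynamics simulation in which "empathic" agents pay a cost to relieve a partner's suffering and "indifferent" agents accept help but never give it:

```python
# Toy parameters (hypothetical): an empathic agent pays cost C to give
# a suffering partner benefit B. ASSORT is the chance of being paired
# with your own strategy type, a crude stand-in for kin/group structure.
B, C = 3.0, 1.0

def step(x, assort):
    """One generation of discrete replicator dynamics.
    x is the current fraction of empathic agents in the population."""
    # Probability that an empath's partner is also an empath:
    p_e = assort + (1 - assort) * x
    # Probability that an indifferent agent's partner is an empath:
    p_i = (1 - assort) * x
    w_e = p_e * B - C   # empaths always pay C, receive B from empath partners
    w_i = p_i * B       # indifferents receive help but never pay
    shift = 1 + C       # shift payoffs so fitnesses stay positive
    mean = x * (w_e + shift) + (1 - x) * (w_i + shift)
    # Strategies grow in proportion to their relative fitness.
    return x * (w_e + shift) / mean

for assort in (0.2, 0.5):
    x = 0.5  # start with half the population empathic
    for _ in range(300):
        x = step(x, assort)
    print(f"assortment={assort}: empath fraction after 300 generations = {x:.3f}")
```

In this toy model empathy is self-perpetuating only when B times ASSORT exceeds C (an echo of Hamilton's rule); below that threshold indifference takes over the population. Whether anything like this carries over to real evolved nervous systems is exactly the open question raised above.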
Labels: evolution, morality, utilitarianism, utility