Saturday, May 29, 2010

Maximization: The Heroin Problem of Happiness-Based Morality

A problem for any happiness-maximizing theory of morality is "the heroin problem": if the point of morality is to increase pleasure (for oneself, or for the greatest number, as in utilitarian theories), shouldn't our goal be to create as much pleasure as possible, even if that pleasure is created by gaming the system (e.g., with heroin or a Matrix-like simulation)?

There are a few ways to think about this problem.

Resolution #1: Destroy values that obstruct pleasure maximization. In a parallel development, I have endeavored to destroy my taste in wine, because by developing a taste in wine or anything else, you're working against yourself - you're making your marginal unit of pleasure more expensive. You have to choose which is the better option: having a refined taste, drinking an expensive wine, experiencing X pleasure points, and signaling your refinement to peers; or tasting a cheap wine and not knowing any better, so you experience the same X pleasure points and also have $50 in your pocket to buy 5 more bottles of wine (so you actually get 6X pleasure points). Unless the admiration you get from peers is itself worth the $50 (equivalently, the extra 5X pleasure points), you're better off with no taste in wine.
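To make the arithmetic concrete, here's a minimal sketch in Python (the prices are illustrative assumptions of mine; only the $50 gap and the 6X total come from the argument above):

# Toy model of the wine argument: same budget, refined taste vs. no taste.
# Prices are illustrative assumptions; pleasure per bottle is X either way.
EXPENSIVE_BOTTLE = 60  # what a refined palate insists on paying
CHEAP_BOTTLE = 10      # what an undiscriminating palate enjoys just as much
X = 1                  # pleasure points per bottle, identical in both cases

budget = EXPENSIVE_BOTTLE                    # spend the same money either way
refined = (budget // EXPENSIVE_BOTTLE) * X   # 1 bottle  -> 1X pleasure points
no_taste = (budget // CHEAP_BOTTLE) * X      # 6 bottles -> 6X pleasure points

# Refined taste only pays off if peer admiration fills the gap:
admiration_needed = no_taste - refined       # 5X pleasure points (= $50)
print(refined, no_taste, admiration_needed)  # 1 6 5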

Expanding this approach to happiness-maximizing morality in general: certainly it's the rare human whose moral intuition drives them toward hedonic excess at the expense of all other values. However, perhaps we're still laboring under bad and unquestioned moral assumptions which are, after all, not innate to human beings, and in fact it should be our goal to identify and do away with all values, beliefs and behaviors that get in the way of optimum utility. For example, most of us recoil at the suggestion that empathy should be eliminated because it conflicts with the pursuit of utility, but perhaps our outrage at such a suggestion is itself an example of a bad moral assumption. The anti-ascetic of the future will gladly pluck out (for example) the brain circuits that create the desire to care for his/her offspring, as clear offenders against the unflinching goal of increasing happiness.

Resolution #2: Heroin and orgasms aren't the only things that bring about happiness. There are certainly multiple types of experiences that lead to happiness beyond physical pleasure; again, if morality is really and only about happiness, the goal should be to identify the types of experiences whose realization conflicts with that of others, and destroy the desire for and capacity for whichever conflicting experiences are in the minority. And presumably a simulation could provide not just heroin rushes and orgies with supermodels, but all the higher hedonic forms, up to and including a sense of meaning: professional achievements, family ceremonies, etc.

Resolution #3: It's the capacity for current and future happiness that matters. Neurologically gaming the system leaves an organism vulnerable to predation and disease; is ten years in heroin-simulation land until your body dies of dehydration better than sixty years in the real world? If asking to be hooked up to a pleasure simulation and left there until you die is wrong, why? This resolution is suspect because it usually conveys some degree of knee-jerk disgust for the incapacitated agent who has diminished their contact with reality in exchange for the equivalent of neurochemical masturbation. Again, see #1 above: this disgust for voluntarily putting oneself in such a passive position is certainly an obstacle to realizing a life of greater pleasure. Also, test claimants of this resolution by asking: what if the simulation were built by aliens who guarded it and made absolutely sure the deteriorating, drooling simulation zombies inside (you!) were 100% safe - i.e., capacity for future happiness is no longer a problem? Do you still have a problem with giving yourself over to such a simulation? If so, then capacity for future happiness is not your real objection.

Resolution #4: Our moral sense is not entirely predicated on happiness. Our behavior is certainly not 100% rational or conscious-principle-guided by any means, so why do we think our moral sense would be any different?

7 comments:

dbonfitto said...

You can't game the system from within the system; you've got to have outside knowledge and influence.

For some people, happiness is derived from the illusion that they're gaming the system. Zooming out, they're just part of a bigger system in which Einstein and Gödel are playing paddleball with the system (paddleball itself being another system).

If the aliens do a 'Matrix' job on us, the question of our happiness is pointless. The question becomes, "how are these little green dudes maximizing their happiness?" Somebody is still picking the poppies, maintaining the nutrient bath, and cleaning up the junkie poop. Outsourcing the external costs doesn't make them go away.

Michael Caton said...

Are you saying that overall happiness is unaffected because, even though the humans are happy and stoned, the aliens are still expending effort to maintain that state? That's true, but I'm concerned here with the pleasure experienced by the individual, so in principle, as long as the individual is happy, a holocaust could be going on around him or her and for purposes of this moral question (individual pleasure optimization) it wouldn't matter. Happiness is not pointless unless you argue that there's no such thing as subjective experience, which of course some people do, and they're goofy.

dbonfitto said...

If you're concerned with individual happiness, but not with anything else, why not just OD on some sort of happiness drug and die?

Does the duration of being blissed out matter?

If it does, then you need to have fantastic sex on the event horizon of a black hole. Done and done.

Michael Caton said...

You have identified exactly the problem. It seems to me (and to pretty much everyone) that stimulating reward centers cannot be the purpose of morality, else the answer would in fact be to find super-heroin. I just cannot think of a clear argument of any sort why super-heroin is not The Way.

dbonfitto said...

Mike,

The founders of our country (at least TJ) had a pretty good handle on it. I think some form of balance between "life, liberty, and the pursuit of happiness" covers it pretty well.

Super-Heroin covers the happiness, but ruins the first two.

Life: Without life, the other two are moot. (Maybe substitute 'sentience' so Skynet will feel like it's part of the happiness problem when the time comes, eh?) We want security and health.

Liberty: Free will is a huge philosophical question. Even if we can't prove we've got it, we want it. People generally want to do their own thing, even if that thing is to follow a group. Without freedom, life is slavery and happiness is a gilded cage.

The Pursuit of Happiness: You love to run. I don't even like to watch people running. Just hitting the happy button in your brain (or at least in a brain that's been out in the world; I don't know how it would work in a baby who has never known anything but straight-up happy-buttoning) doesn't work, because most people have a good sense of what the other brain states should be when the happy button is pressed. It feels hollow. Also, different people have different requirements for those surrounding brain states when they get happy.

Anyway, what I'm trying to say is that maximizing any one leg of the stool of Life, Liberty, and the Pursuit of Happiness without maximizing the others unbalances it. To maximize overall happiness (I think we would call this process 'morality'), you've got to balance a lot of stools over a lot of rocky ground. A toy version of the point follows below.
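Here's a minimal sketch of my own of the stool metaphor, assuming well-being is limited by the weakest leg (the min-rule and all numbers are illustrative assumptions, nothing more):

# Toy model: overall well-being as the weakest leg of the stool.
# The min-rule and the scores are illustrative assumptions.
def overall(life, liberty, happiness):
    return min(life, liberty, happiness)

balanced = overall(life=5, liberty=5, happiness=5)       # -> 5
super_heroin = overall(life=1, liberty=0, happiness=10)  # -> 0

print(balanced, super_heroin)  # 5 0: maxing one leg tips the stool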

Michael Caton said...

There do seem to be conflicting behavioral drives, and it's unclear how conflicts are resolved, but our behavior is increasingly plastic (chimps don't have these kinds of moral discussions, and neither did the Sumerians). It's also interesting that it's so difficult to formulate morality propositionally, as has been repeatedly tried since the Enlightenment. I take the cynical view that feeling good or bad about a moral act is just the result of a reward center, same as sex or sugar, and we don't try to come up with propositional systems to describe our appetites.

dbonfitto said...

Don't talk with many militant vegans, do you, Mike?

Consider also: we've got way more time to think about things other than our empty bellies than bacteria, chimps, ancient Sumerians, etc.

You can't work on complex happiness pursuit if you don't have some solid life and liberty to support it.