A problem for any happiness-maximizing theory of morality is "the heroin problem". If the point of morality is to increase pleasure (for oneself, or for the greatest number, as in utility theories), shouldn't our goal be to create as much pleasure as possible, even if that pleasure is created by gaming the system (e.g., with heroin or a Matrix-like simulation)?
There are a few ways to think about this problem.
Resolution #1: Destroy values that obstruct pleasure maximization. In a parallel development, I have endeavored to destroy my taste for wine, because by developing a taste for wine (or anything else) you're working against yourself: you're making your marginal unit of pleasure more expensive. You have to choose which is the better option: having a refined taste, drinking an expensive wine, experiencing X pleasure points, and signaling your refinement to peers; or tasting a cheap wine, not knowing any better, experiencing the same X pleasure points, and keeping $50 in your pocket to buy five more bottles (so you actually get 6X pleasure points). Unless the admiration you get from peers is worth $50, or 5X pleasure points, or some combination of the two, you're better off with no taste in wine.
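The arithmetic above can be made explicit. A minimal sketch, with assumed prices consistent with the $50 figure in the text (a cheap bottle at $10, the expensive bottle at $60, and every satisfying bottle worth the same X to its drinker):

```python
# Hypothetical numbers: X, the prices, and the budget are illustrative
# assumptions, not claims from the original text (which gives only the $50 gap).
X = 1.0                # pleasure points per satisfying bottle (arbitrary unit)
expensive_price = 60   # assumed price of the refined choice
cheap_price = 10       # assumed price of the unrefined choice
budget = expensive_price  # same $60 spent either way

# Refined palate: one expensive bottle yields X pleasure points.
refined_pleasure = X

# Unrefined palate: the same budget buys six cheap bottles, each worth X.
unrefined_pleasure = (budget // cheap_price) * X

print(refined_pleasure, unrefined_pleasure)  # the 1X vs. 6X comparison
```

On these assumptions the unrefined drinker ends up with six times the pleasure per dollar, which is the whole force of the "marginal unit of pleasure" point: refinement raises the price of each pleasure point without raising the total.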
Expanding this approach to happiness-maximizing morality in general: certainly it's the rare human whose moral intuition drives them toward hedonic excess at the expense of all other values. But perhaps we're still laboring under bad, unquestioned moral assumptions that are, after all, not innate to human beings, and in fact it should be our goal to identify and do away with all values, beliefs, and behaviors that get in the way of optimum utility. For example, most of us recoil at the suggestion that empathy should be eliminated because it conflicts with the pursuit of utility, but perhaps our outrage at such a suggestion is itself an example of a bad moral assumption. The anti-ascetic of the future will gladly pluck out (for example) the brain circuits that create a desire to care for his/her offspring, as clear offenders against the unflinching goal of increasing happiness.
Resolution #2: Heroin and orgasms aren't the only things that bring about happiness. There are certainly multiple types of experience that lead to happiness beyond physical pleasure; again, if morality is really and only about happiness, the goal should be to identify which of those types of experience conflict with one another, and destroy the desire/capacity for whichever conflicting experiences contribute less happiness. And presumably a simulation could provide not just heroin rushes and orgies with supermodels, but all the higher hedonic forms, up to and including a sense of meaning: professional achievements, family ceremonies, etc.
Resolution #3: It's the capacity for current and future happiness that matters. Neurologically gaming the system produces an organism vulnerable to predation and disease; is ten years in heroin-simulation land until your body dies of dehydration better than sixty years in the real world? If asking to be hooked up to a pleasure simulation and left there until you die is wrong, why? This resolution is suspect because arguments for it usually convey some degree of knee-jerk disgust for the incapacitated agent who has diminished their contact with reality in exchange for the equivalent of neurochemical masturbation. Again, see #1 above: this disgust at voluntarily putting oneself in such a passive position is certainly an obstacle to realizing a life of greater pleasure. You can also test claimants of this resolution by asking: what if the simulation were built by aliens who guarded it and made absolutely sure the deteriorating, drooling simulation zombies inside (you!) were 100% safe, so that capacity for future happiness is no longer a problem? Would you still object to giving yourself over to such a simulation? If so, then future happiness capacity is not your real demand.
Resolution #4: Our moral sense is not entirely predicated on happiness. Our behavior is certainly not 100% rational or conscious-principle-guided by any means, so why do we think that our moral sense would be any different?