Friday, October 27, 2017

If You Take Parfit Seriously, You Should Commit Yourself To Creating Superintelligence

Cross-posted at Speculative Nonfiction and Cognition and Evolution.

Derek Parfit argues that if utilitarianism as it is commonly understood is taken to its conclusion - the greatest good for the greatest number - then mathematically we should care not just about making individuals happy, but about making more individuals to be happy. If you can have a world of a billion people all just as happy as the people in a world of a million, that's a no-brainer.

The problem comes when you get to the math of it. The "repugnant conclusion" is that if the total amount of happiness is what matters, then you should favor numbers over quality of life. That is, a world of a hundred billion people with lives just barely worth living is better than a world of a hundred people with great lives - because the hundred great lives are probably not a billion times better than lives just barely worth living.
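
As a toy illustration of that arithmetic (the population sizes and per-person utility values here are invented for the example; they are not figures Parfit gives), the total view just multiplies population by average well-being:

```python
# Toy comparison of total utility under a purely additive ("total") view.
# All populations and per-person utilities below are illustrative assumptions.

def total_utility(population, utility_per_person):
    """Total view: aggregate welfare = population * average welfare."""
    return population * utility_per_person

world_a = total_utility(100, 100.0)             # 100 people with great lives
world_b = total_utility(100_000_000_000, 0.01)  # 100 billion lives barely worth living

print(world_a)  # 10000.0
print(world_b)  # 1000000000.0
# On the total view, world B wins unless each great life is roughly
# a billion times better than a life barely worth living.
```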

The obvious objection is that you're talking about theoretical people when you talk about those hundred billion. The counterargument is that we do care about theoretical people - our descendants. You might already make decisions to preserve the environment for the happiness of your grandchildren; right now you (hopefully) avoid littering the street so as not to upset people you've never met and will probably never meet.

There are other objections, of course; for instance, that what matters is the happiness actually experienced by each individual, not the aggregate - otherwise slave plantations could be (in fact, on a purely aggregate view, probably are) morally acceptable, so long as the owners' gains outweighed the slaves' suffering.

But following Parfit's repugnant conclusion to its end, if the total amount of utility is what matters, then increasing the amount of utility that can possibly be experienced also matters. That is to say, there is no reason to stop at considering theoretical people; we should also consider theoretical kinds of experience, and theoretical kinds of experiencers. And there is nothing in Parfit's thesis that is provincial to, or chauvinistic about, humans. (If there were, that might solve the problem, because you could say "the closer something is related to me, the more I should be concerned with its happiness" - me and my brother against my cousin, et cetera - which, at very close genetic distances, is in fact what most humans already do.)

Therefore, we should try to make a world of a hundred million bipolar (manic) people who can experience hedonic value far in excess of what most of us ever do (assuming we can keep them manic and not depressed). Or, even better, we should create an artificial superintelligence capable of experiencing these states and devote all our resources to creating as many copies of it as possible. But set those particulars aside - if you believe it is possible for a self-modifying artificial general intelligence with consciousness (and pleasure) to exist, then by Parfit, the only moral act is to give up all your recreation and resources, live in misery, and dedicate your life to the single-minded pursuit of getting us one second closer to the creation of this superintelligence. The total suffering and happiness of life on Earth up until the moment of the singularity would quickly shrink to a rounding error compared to the higher states these replicating conscious superintelligences might experience. Therefore, if you are not already single-mindedly dedicating yourself to bringing such a superintelligence into existence, you are forestalling seconds of these agents' pleasurable experiences (which would far outweigh your own suffering, and perhaps that of all living things), and you are committing the most immoral act possible.
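
To see why even one second would matter on this view, here is a back-of-the-envelope sketch of the "rounding error" claim; every number in it is an invented assumption for illustration, not anything defended in the argument above:

```python
# Back-of-the-envelope version of the "rounding error" claim.
# Every figure below is a made-up assumption chosen only to show the shape
# of the comparison.

HUMANS_EVER_LIVED = 1e11        # rough order of magnitude: ~100 billion people
AVG_LIFETIME_UTILITY = 1.0      # normalize one ordinary human life to 1 unit

human_total = HUMANS_EVER_LIVED * AVG_LIFETIME_UTILITY

# Suppose each superintelligent agent experiences hedonic value at some large
# multiple of a human lifetime per second, and many copies of it exist.
UTILITY_PER_AGENT_SECOND = 1e6  # assumed hedonic rate, purely hypothetical
NUM_AGENTS = 1e9                # assumed number of copies, purely hypothetical

seconds_to_match_history = human_total / (UTILITY_PER_AGENT_SECOND * NUM_AGENTS)
print(seconds_to_match_history)  # 0.0001 - under these assumptions, all prior
                                 # human welfare is overtaken in a fraction of
                                 # a second, so delaying creation by one second
                                 # forestalls more utility than history contains
```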

This problem is superficially similar to Roko's basilisk (in the sense that your actions are changed by knowledge of a possible superintelligence), but I think it should still be called Caton's basilisk.

As a result of these objections, I do not think we need to take the repugnant conclusion seriously, and I do not think it is immoral to decline to dedicate yourself to creating a super-hedonic superintelligence.
