This is also posted at my religion and morality blog The Lucky Atheist.
Sam Harris has in several places discussed his moral thought experiment of a pill that would make prisoners think they'd been tortured when they really hadn't. This is why a recent piece in Aeon about length of punishment was so interesting. First it asks if it's ethical to extend someone's life so they could serve a thousand-year sentence; then it asks if it would be ethical to change their consciousness so they thought they had served a thousand years, or at least much longer than they actually had. (Star Trek fans will recognize an episode of Deep Space 9 here, where exactly this happened.)
More broadly, the moral issue they're getting at here is whether some kind of hell is permissible or even to be encouraged, once we can create it. (Again, science fiction has been there: Iain Banks readers will recognize the central question of Surface Detail, where virtual-reality hells are used to punish brutal dictators.) It would also be interesting to ask theists about this. Assuming a situation where everyone (theist and atheist) agrees that a particular person's actions are suitably heinous, shouldn't we begin their punishment as soon as possible? If Hell is a real place that bad people go to, why wait?
Leaving aside the more mundane questions that we can ask of our prisons right now - who wants to feed a bad guy for a thousand years, and does knowledge of a thousand-year sentence actually deter behavior (utility calculations don't seem to affect bank robbery rates, for example) - I think it's obvious that we should focus on how to prevent terrible behavior in the first place, instead of how to make punishment after the fact more severe. Instead of creating virtual hells, shouldn't we focus on creating a real heaven? If you have the science to make someone think they're living a thousand years, what about the science to keep them from becoming a serial killer in the first place? Here people might object that if we choose that path, surely there's something scary about tampering with human nature and free will, to which I answer: you'd better have a very, very clear argument then. If we weigh the very concrete suffering experienced by Ted Bundy's victims and their families against this amorphous threat to human nature, I think engineering better behavior wins hands down.
In a way, up until now we've been fortunate, because this hasn't been a choice that it's within our power to make. But once it's within our grasp, if we don't face the facts head on and make an explicit decision based on our values - if we muddle through pretending we still don't have that power, or we just let inertia be our guide - then we're being profoundly immoral. Maybe not this particular decision, but many decisions like it, will have to be made within this century.