Sunday, January 12, 2014

Utilitarianism in one easy lesson

Several readers, many considerably smarter than I am, have complained that my recent posts are too boringly technical. I like being boringly technical, a legacy of (or perhaps cause of) my career as a computer programmer. This blog is for myself, to think out loud, but it's also to communicate. So I want to try to explain Utilitarianism in a somewhat less technical way. If anyone has technical questions, feel free to dig into them in the comments. I hope (probably vainly) that this disclaimer will reduce the sort of you-didn't-address-this-technical-issue-you-moron responses I get when I talk about philosophy.

Utilitarianism is an ethical system that says an individual should choose what will maximize the well-being (utility) of everyone (including herself) in the world, over time, keeping in mind that we have a scary level of uncertainty about how to do so. The rest is commentary.
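For my fellow programmers, the whole thing fits in a few lines of Python. This is only a sketch under invented assumptions: the actions, the outcome probabilities, and the utility numbers are stand-ins for exactly the things we are so scarily uncertain about.

    # A minimal sketch of the Utilitarian decision rule: choose the action
    # with the highest expected total well-being. All names and numbers
    # here are hypothetical illustrations, not a real model.
    def expected_total_utility(outcomes):
        # outcomes: list of (probability, total_utility_for_everyone) pairs
        return sum(p * u for p, u in outcomes)

    actions = {
        "donate_to_famine_relief": [(0.9, 10.0), (0.1, -2.0)],
        "buy_a_second_yacht":      [(1.0, 1.0)],
    }
    best = max(actions, key=lambda a: expected_total_utility(actions[a]))
    print(best)  # -> donate_to_famine_relief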

Like any other ethical system, there is no absolutely compelling reason why one should or must adopt Utilitarianism: people choose to adopt Utilitarianism. (They can also choose to lie about adopting Utilitarianism; that some non-Utilitarians lie and say they are Utilitarians is one argument that Utilitarianism seems like a good idea.)

Utilitarianism starts from the scientific psychological theory that individuals try to maximize their own individual subjective utility. They choose what they evaluate will maximize their own "happiness" and minimize their own "suffering." This theory is distinct from "sphexishness," the theory that people just act out algorithmic scripts and make no evaluations about the outcome. Psychological utility is a theory: we cannot directly measure utility, so the theory posits it as a "hidden variable" to explain observed human behavior.
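The distinction is easy to state in code. In this invented sketch, the sphexish agent just runs its script; the agent psychology actually attributes to us evaluates options against a subjective utility we can never observe directly, only infer from behavior.

    # Two toy theories of choice, purely for illustration.
    def sphexish_agent(script):
        for step in script:   # executes the script; never evaluates outcomes
            step()

    def maximizing_agent(options, subjective_utility):
        # subjective_utility is the "hidden variable": we can't measure it,
        # we infer it from which options people actually choose
        return max(options, key=subjective_utility)

    sphexish_agent([lambda: print("dig burrow"), lambda: print("drag cricket")])
    print(maximizing_agent(["nap", "work"], {"nap": 3, "work": 5}.get))  # -> work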

Utilitarianism then builds on the scientific psychological theory that human beings have evolved the faculties of sympathy and empathy. We can, to no small extent, understand the happiness and suffering of other people (and animals), and we ourselves feel happy at others' happiness and suffer at others' suffering (or, sometimes, the other way around). We don't consciously choose to be empathic; our empathy is more-or-less directly connected to our emotions. We do, however, scientifically observe that this connection is, for many, fragile: a person can be persuaded or socialized to sever the link between others' happiness or suffering and his own.

Utilitarianism is very much a real-world system. Hence, Utilitarians don't worry much about philosophical pseudo-problems such as Omelas, the Trolley Problem, or the Lifeboat scenario. Would Klingons be Utilitarians? Vulcans? Groachi? All of these scenarios wave away important real-world constraints, especially uncertainty, that have shaped human moral intuition. I can't, for example, even imagine a world where the suffering of a child could have any beneficial effect whatsoever, much less be necessary for a utopian society; I really don't care that Utilitarianism in an imaginary world with inconceivably different physics conflicts with my moral intuitions shaped by this world. And how often do people find themselves trapped on an overcrowded lifeboat? Is the most important task for our ethical philosophy to give the right answer in such a situation? Don't get me wrong: a lot of people (myself included) find all of these philosophical investigations interesting and thought-provoking, and, digging down and getting technical, I think Utilitarianism can either give a good account of itself or at least a good account of why these scenarios assume away its critical foundations. They're just not very good for helping us shape our choices in this world, in this society.

Instead, Utilitarianism focuses on answering the ethical problems that face real people in real societies. Who should I vote for? What laws and public policies should I support? What rules should the police operate under? When and how much should I conserve water, gasoline, CO2 emissions? When should I obey the law and when should I break it? Should I shop at Wal-Mart, Target, Whole Foods, Trader Joe's, or at the bodega?

Utilitarianism has, in my opinion, one good answer to these questions: do the best you can to maximize the utility of everyone, including yourself, assuming that (most) everyone does the same. If you cannot be assured that most everyone really will do the same, you have a bigger problem than the immediate choice.
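Here is that rule as code, with everything hypothetical: effect_if_adopted stands in for a model of total well-being we don't actually possess, and the compliance rate is the "assuming (most) everyone does the same" part.

    # Sketch: score a choice as a rule that (most) everyone follows,
    # not as an isolated act. All numbers are invented.
    def universalized_utility(choice, effect_if_adopted, compliance=0.9):
        # effect_if_adopted: total well-being if everyone made this choice
        return effect_if_adopted[choice] * compliance

    effect_if_adopted = {"conserve_water": 100.0, "waste_water": -200.0}
    best = max(effect_if_adopted,
               key=lambda c: universalized_utility(c, effect_if_adopted))
    print(best)  # -> conserve_water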

I want to address some typical objections to Utilitarianism that are not just pseudo-problems.

The first objection is that Utilitarianism requires perfect knowledge (which is different from the pseudo-problematic objection that Utilitarianism would fail if we had perfect knowledge). This objection fails because it assumes that without perfect knowledge, we can say nothing at all about the consequences of our actions on the well-being of other people. This assumption contradicts our everyday experience. I don't know all the consequences of going on a killing spree — who knows, I might kill someone who would otherwise become the next Hitler — but I can be pretty confident that I will be harming a lot of people, and their suffering will far exceed the pleasure I might get from such unbridled violence. (I suppose I really need to say it: this is a thought experiment. I would not actually get any pleasure whatsoever from killing anyone.)
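To make the point with crude arithmetic (every number below is invented): even a laughably generous estimate of the chance of stopping the next Hitler can't touch the near-certain harm.

    # Crude expected-value arithmetic for the killing-spree thought
    # experiment. Every number is invented; the point is only that
    # imperfect knowledge still licenses a confident comparison.
    p_next_hitler   = 1e-9    # chance some victim would have been the next Hitler
    harm_averted    = 1e8     # hypothetical suffering thereby averted
    harm_per_victim = 1e4     # near-certain suffering caused per victim
    victims         = 10

    expected_utility = p_next_hitler * harm_averted - victims * harm_per_victim
    print(expected_utility)   # about -100,000: confidently, massively negative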

Utilitarianism deals with uncertainty in three ways. First, if it isn't broken, don't fix it: if you don't have good reason to believe that changing the status quo will lead to more well-being, don't change it. Second, talk about it: have a social conversation about what other people really want and how best to get it. It's not always true, but it often is, that all of us are smarter than any one of us. Finally, learn from history: we have spent millennia discussing what the greater good consists of and how to establish and maintain a society that provides it; even the earliest written records show people talking about morality, law, and politics in sophisticated ways. We are not, as Burke suggests, bound by history, but we ignore it at our peril.
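The first strategy is itself a decision rule, and a simple one. A sketch with an invented threshold: keep the status quo unless the estimated gain of a change clearly exceeds your margin of error.

    # "If it isn't broken, don't fix it," under uncertainty: change the
    # status quo only when the estimated gain clearly beats the error
    # bars on the estimate. Numbers invented.
    def should_change(estimated_gain, margin_of_error):
        return estimated_gain > margin_of_error

    print(should_change(estimated_gain=5.0, margin_of_error=20.0))   # False: stand pat
    print(should_change(estimated_gain=50.0, margin_of_error=20.0))  # True: good reason to act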

The second objection is that Utilitarianism entails that no one individual, not even one's self, is special or privileged. It seems to follow, then, that I should kill myself, because my organs could save at least seven people (two kidneys, two lungs, a heart, a liver, and a pancreas, not to mention my skin, corneas, veins and arteries, and bones). Seven lives are better than one, n'est-ce pas? However, the "no one is special" principle cuts both ways: it entails the principle of universality: I should sacrifice my own utility only if I believe that well-being would be maximized if everyone sacrificed the same utility. Otherwise, I am singling myself out to be special.
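The test is mechanical. With invented numbers: a world where everyone kills herself for her organs doesn't maximize anything, because the donors are gone along with the recipients.

    # The universality check applied to the organ argument. The utilities
    # are invented; the structure is the point.
    def world_utility(policy, population=1000):
        if policy == "everyone_lives_and_donates_posthumously":
            return population * 1.0   # everyone gets a full life
        if policy == "everyone_kills_self_for_organs":
            return 0.0                # no one left to give to, or to receive

    print(world_utility("everyone_lives_and_donates_posthumously") >
          world_utility("everyone_kills_self_for_organs"))  # -> True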

The third objection, related to the second, is that Utilitarianism requires that we put the well-being of slackers, racists, assholes, and criminals on an equal basis with the well-being of hard-working, egalitarian, nice, law-abiding civilized citizens. This objection is sort of accurate: we are obligated to treat everyone's utility "equally," but only in an abstract sense. Most of our intuitions deprecating others' utility turn out, on abstract philosophical investigation, to be Utilitarian anyway. Even if, as a thought experiment, we don't make a prior commitment that the happiness of the rapist just doesn't count, we find that the rapist's happiness is far outweighed by the victim's suffering. In the remaining cases, we really should change our intuitions: when, as a thought experiment, we don't make a prior commitment that the happiness of a homosexual doesn't count, we find that others' disgust at his sexuality is far outweighed by his happiness at having a same-sex relationship.

The fourth objection is that Utilitarianism is not rights-respecting. Rights-respecting ethical systems state that there are things we should not do, rights we should not violate, even if we have good reason to believe that violating a right would improve overall well-being. In one sense, this objection is just the trivial statement that different ethical systems are different. However, the argument for extra-Utilitarian rights usually relies on wildly unrealistic hypotheticals: Omelas, for example, is wrong in a rights-respecting system because, regardless of utility, its citizens violate the rights of the tortured child. But as I discussed above, it's not really worthwhile talking about the morality of situations that we have good reason to believe cannot ever happen.

The other arguments for rights usually consist of defining utility too narrowly. It might seem that randomly killing people for their organs passes the Utilitarian "no one is special" test (since, hypothetically, each person has an equal probability of being harvested), but this hypothetical construes utility too narrowly. We don't implement such a system because most people would prefer (find greater utility in) taking their chances with voluntary organ donation rather than risking being harvested. Similarly, sacrificing an innocent individual to appease an angry mob seems to have short-term utility, but we have good reason to believe that in the long term such an action would substantially diminish the well-being of everyone in a society who could no longer trust in fair and impartial justice (or justice as fair and impartial as we can manage).
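Again with invented numbers: once utility is construed broadly enough to count everyone's dread of living under harvest risk, the lottery loses even when it saves lives.

    # The organ-lottery case with a broad construal of utility. All
    # numbers are invented illustrations.
    population       = 1_000_000
    lives_saved      = 700        # hypothetical gain from compulsory harvesting
    value_per_life   = 1_000.0
    dread_per_person = 1.0        # small disutility, but everyone bears it

    lottery   = lives_saved * value_per_life - population * dread_per_person
    voluntary = 0.0               # baseline: take our chances with donation
    print(lottery < voluntary)    # -> True: the lottery loses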

(There are other, more technical, objections to rights-respecting ethics, which I can address in the comments. See Robert Nozick's Anarchy, State, and Utopia for a catalog of errors a sincere rights-respecting ethical philosopher can make.)

The final objection I want to discuss is that Utilitarianism is hard. Yep, 'tis indeed. We don't know what other people really want. We don't know who is or is not a sincere Utilitarian. We don't know all the consequences of our actions. The best I can say about Utilitarianism is that it gives us a framework for each of us, individually and collectively, to do the best we can with what we have, and it rests on a foundation that is scientifically sound. If you want easy answers, join a religion (or become a Libertarian). All I can say about Libertarianism is, "Are you fucking kidding me?" And all I can say about religion is that either god is a shitty Utilitarian (and if so, fuck him/her/it/them), or he/she/it/they want us to work this stuff out for ourselves, which is what I'm trying to do.
