Before I continue, I want to briefly describe utilitarianism. Utilitarianism first rests on four more-or-less scientific principles:
- People directly experience "happiness" and "suffering"; they're hard to define precisely, but we know them when we feel them. Happiness is intrinsically good, and suffering is intrinsically bad.
- People are goal-seeking, which differs from other possible high-level cognitive strategies such as rule-following (what Daniel Dennett calls sphexishness).
- People generally create and act on the goal of maximizing their own happiness and minimizing their own suffering.
- People have evolved to be social, and we have evolved the tendency to feel our own happiness when (some) others are happy, and feel our own suffering when (some) others suffer.
None of these scientific principles entails any particular ethical system. Utilitarianism must therefore add an ethical ideology on top of them, consisting of two fundamental principles:
- Ideally, we should act so as to maximize happiness and minimize suffering, aggregated over every person, to the end of time.
- No individual's happiness and suffering are a priori more or less important than anyone else's; there is no privilege or oppression.
Thus, utilitarianism is by definition consequentialist (we always evaluate an action by its consequences for happiness and suffering) and universal.
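For concreteness, the first of these can be written schematically as an objective function. The notation below (a set of available actions A, persons i, times t, and happiness and suffering functions H and S) is mine, introduced purely for illustration:

```latex
a^{*} = \arg\max_{a \in A}
        \sum_{i \in \text{persons}} \;
        \sum_{t = \text{now}}^{\infty}
        \Big( H_i(t \mid a) - S_i(t \mid a) \Big)
```

The second principle appears here as the bare, unweighted sum over persons: every individual's term enters with the same coefficient, so no one's happiness or suffering counts for more or less than anyone else's.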
The second principle is important: it rebuts the objection that utilitarianism requires each person to sacrifice his life so that his organs can save the lives of more than one other person. If everyone did so, we would obviously not maximize happiness; if only some did so, those some would be oppressed, violating the second principle. If we drew lots, we would have to evaluate the total effect: does drawing lots to sacrifice healthy individuals for organ transplants increase or decrease overall happiness and suffering, compared with letting individuals take their chances? When the answer in certain circumstances is that drawing lots does increase overall happiness and/or decrease overall suffering, as with a draft lottery in wartime, then we actually do it.
Ideally, we want to maximize happiness and minimize suffering for everyone to the end of time. Obviously, we cannot actually determine the effect of any action on the 7 billion people presently existing, much less on however many will exist until the end of time. Thus, we operate under both risk and uncertainty, i.e. a known and an unknown probability distribution of possible outcomes, respectively. TK has an excellent and succinct explanation in his post of the effects of risk and uncertainty on utilitarianism, so I will simply quote:
> One appealing resolution is to say that these two problems solve each other. It is true that a naive utilitarianism does not account for uncertainty. But when we do account for uncertainty, then we will reproduce most of our moral intuitions. Although perhaps we will not reproduce every moral intuition, and so this provides a useful way to distinguish between intuitions which are correct and intuitions which are incorrect.
>
> For example, whenever we make decisions, we are more certain of the consequences to ourselves than we are of consequences to people far away. In the face of uncertainty, consequences tend to be a wash, so it is good to prioritize ourselves.
>
> Another example. Whenever we drop a brick off the roof of a building, we cannot distinguish beforehand the cases where the brick will hit someone and the cases where it won't do anything. Therefore, we must judge all brick-dropping the same way. We must make a rule against the action of dropping bricks in random places. This reproduces deontological ethics, which makes rules about particular actions based on the qualities of those actions.
>
> This also neatly solves one of the problems with deontological ethics, which is that there isn't a clear way to generate new rules about actions. This framework suggests that the correct way to generate new rules is to consider the probabilistic consequences of a class of actions.
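To make the quoted move concrete, here is a minimal sketch of judging a class of actions by its probabilistic consequences. Everything in it (the probabilities, the utility values, the names) is invented for illustration; the point is only the shape of the calculation:

```python
# Toy expected-utility evaluation of a class of actions ("drop a brick
# off the roof"), under risk: we know the probability distribution over
# outcomes, but not which outcome any particular drop will produce.

def expected_utility(outcomes):
    """Sum of probability-weighted utilities over all possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Hypothetical numbers: a small chance of catastrophic harm, a large
# chance of nothing happening beyond trivial amusement.
brick_drop = [
    (0.01, -1000.0),  # brick hits someone: great suffering
    (0.99, +1.0),     # brick hits nothing: trivial amusement
]

if expected_utility(brick_drop) < 0:
    # Since we cannot distinguish harmful drops from harmless ones in
    # advance, the verdict applies to every instance of the class:
    print("Rule: don't drop bricks in random places.")
```

Under risk, the whole class gets one verdict, which is exactly the rule-like, deontological behavior TK describes; only new information that moves a particular drop out of the class changes the verdict for that drop.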
TK's main point is to note some differences between physics and ethics.
First, he asserts that both physics and utilitarianism are reductionist, i.e. you can (in theory) calculate higher-order phenomena from the behavior of lower-order phenomena. However, physics purports to describe how the universe actually is, whereas TK is unclear on whether utilitarianism even purports to describe (much less actually describes) how the world is.
This is an easy one. Utilitarianism does not describe how the world actually is. It is a framework that people choose (or do not choose) to use to evaluate their actions. The "reductionism" just happens to be part of the theory; as a proponent of utilitarianism, I would not say that utilitarianism is true because it is reductionist. (Indeed, I would not say that utilitarianism is true. Full stop.) Reductionism just serves to make the theory easier to use.
The idea, however, that we know physics is true, that it really describes the world, because it is reductionist is very philosophically problematic. There's no denying that reductionism is a really useful tool in physics, but the connection between reductionism and truth seems very hard to justify. So I don't know that physics and utilitarianism are really very different on this criterion.
Second, TK notes that increasing the precision of our moral calculations does not just allow us to know the moral status of actions with more precision; it can actually radically change that status. TK's example is particularly trenchant:
> If we discover with certainty that dropping a brick at a particular time won't hurt anyone, and will instead kill a butterfly and stop a hurricane in a hundred years, then that action literally goes from unacceptable to acceptable.

Indeed: and not just acceptable, but compulsory.
But again, is this sensitivity to precision all that different from physics? Do we not have chaos theory and the three-body problem? I don't actually see much difference here between physics and utilitarianism.
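The parallel is easy to demonstrate with the standard toy example of chaotic dynamics, the logistic map (my illustration, not TK's): a perturbation in the ninth decimal place of the initial condition overwhelms the prediction within a few dozen iterations, exactly the sense in which more precision does not merely refine the answer but changes it.

```python
# Sensitive dependence on initial conditions: the logistic map
# x -> r * x * (1 - x) in its chaotic regime (r = 4.0). Two starting
# points differing by 1e-9 diverge completely within a few dozen steps.

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-9
for step in range(50):
    x, y = logistic(x), logistic(y)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: x = {x:.6f}, y = {y:.6f}, "
              f"|x - y| = {abs(x - y):.2e}")
```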
Finally, TK makes a legitimate argument from ignorance: he doesn't know how utilitarianism reproduces such basic, intuitive things as rights, so he cannot effectively use it. I have two responses. First, TK could in fact become an expert in utilitarianism: he's a smart guy, and I think I've made at least a prima facie case that utilitarianism is worthy of study. However, I suspect the world is better off if TK puts more of his time and effort into the study of physics and into advocacy for social justice. So my second response is that most people's naive intuition about how to act is already utilitarian: you don't have to be an expert in utilitarianism to act as a good utilitarian. Most "nice" people usually act on the following rules/guidelines (sketched as a toy decision procedure after the list):
- If I can clearly benefit myself without hurting anyone else, then I should.
- If I can clearly benefit someone else without hurting myself (much), then I should.
- If I can clearly prevent great suffering, even if it harms me, or risks harming me substantially, I should.
- Most social rules handle the general cases where the outcome is not immediately clear: I should act according to social rules in uncertain situations.
- If a social rule seems like it causes more harm than benefit, I should subject that social rule to heightened scrutiny, and consider changing it.
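As a rough illustration only, here is the list above rendered as an ordered decision procedure. The boolean inputs are placeholders for informal judgments no one knows how to formalize; the point of the sketch is just that the checks have a natural priority order:

```python
# A toy rendering of the lay-utilitarian guidelines above. Each boolean
# stands in for an informal judgment ("clearly benefits me", etc.); the
# ordering of the checks carries the content of the guidelines.

def lay_utilitarian_choice(
    clearly_benefits_me: bool,
    hurts_others: bool,
    clearly_benefits_others: bool,
    hurts_me_much: bool,
    prevents_great_suffering: bool,
    covered_by_social_rule: bool,
    rule_seems_net_harmful: bool,
) -> str:
    if clearly_benefits_me and not hurts_others:
        return "do it"
    if clearly_benefits_others and not hurts_me_much:
        return "do it"
    if prevents_great_suffering:  # even at real cost or risk to myself
        return "do it"
    if covered_by_social_rule:
        if rule_seems_net_harmful:
            return "follow with heightened scrutiny; consider changing the rule"
        return "follow the rule"  # outcome unclear: defer to convention
    return "no clear guidance; reason it out"

# Example: helping a stranger at trivial cost to myself.
print(lay_utilitarian_choice(False, False, True, False, False, False, False))
# -> do it
```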
Really, that's 99.999% of utilitarianism. The 0.001% comes in how we should examine apparently problematic social rules. Even then, the "experts", who have the training to look deeply into that 0.001%, have the task of justifying their analysis to the rest of the "lay" population.
I will try to write a more detailed analysis of how rights come from utilitarianism.