He charges that "PD situations are highly contrived and ignore many real-world facts that are morally relevant..."
Well, yes. The analysis of a simple game is not the analysis of a complex game. Game theory is a rich and varied field with volumes of serious academic work. As a philosopher, though, I'm entitled to look at how the essential features of some simple games illuminate our fundamental understanding of ethics.
Alonzo might as well argue against the "oversimplification" of gravitation. It is certainly the case that just F = Gm₁m₂/d² ignores many complications in actually describing the motions of planets. But without a sound fundamental understanding of various ideal cases, we can't get very far in making sense of complex phenomena.
The specific cases Alonzo gives, "variable pay-offs, the possibility of anonymous defection, the possibility of deception, and the possibility of affecting desires," are all (except perhaps the last, which I shall address in a moment) easily handled by more complicated game-theoretic analysis. But these more complicated analyses don't change anything about the fundamental way we interpret game-theoretic analysis in an ethical sense.
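To make that concrete, here is a minimal sketch of how one of those complications, anonymous defection, can be folded into an ordinary expected-value analysis. The payoff numbers, detection probabilities, and retaliation cost are purely illustrative assumptions, not figures from either post.

```python
# A minimal sketch (hypothetical payoffs, not taken from either post) of how
# anonymous defection folds into an ordinary expected-value analysis of a
# one-shot prisoner's dilemma.

# Standard PD payoffs to the row player: (my_move, opponent_move) -> my payoff.
PAYOFF = {
    ("C", "C"): 3,   # mutual cooperation (reward)
    ("C", "D"): 0,   # I cooperate, you defect (sucker's payoff)
    ("D", "C"): 5,   # I defect against a cooperator and get away with it (temptation)
    ("D", "D"): 1,   # mutual defection
}

def expected_defection_payoff(p_detected: float, retaliation_cost: float) -> float:
    """Expected payoff of defecting against a cooperator when the defection is
    anonymous with probability (1 - p_detected); detection triggers a
    retaliation cost (future punishment, reputation loss, etc.)."""
    undetected = (1 - p_detected) * PAYOFF[("D", "C")]
    detected = p_detected * (PAYOFF[("D", "C")] - retaliation_cost)
    return undetected + detected

if __name__ == "__main__":
    cooperate_value = PAYOFF[("C", "C")]   # payoff of cooperating with a cooperator
    for p in (0.0, 0.6, 0.9):
        d = expected_defection_payoff(p_detected=p, retaliation_cost=4)
        best = "defect" if d > cooperate_value else "cooperate"
        print(f"P(detected)={p:.1f}: defect EV={d:.2f} vs cooperate {cooperate_value} -> {best}")
```

The analysis gets more elaborate, but the interpretation is unchanged: we are still asking which action maximizes expected value given the structure of the interaction.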
He asks, "What happens if we raise our children so that they simply acquire a desire for cooperation or an aversion to defection?"
That seems like an overly simplistic strategy, leading to a susceptibility to exploitation. More importantly, even if children were infinitely labile (which they're not), it raises the question: why should we raise our children thus? Why is cooperation better than defection?
Should we not give our children a sound theoretical understanding of what's going on, so they can analyze and respond to complicated situations where simplistic strategies will not suffice?
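The exploitation worry above can be made vivid with a quick simulation: an unconditional cooperator and a tit-for-tat player, each facing an opponent who always defects in an iterated prisoner's dilemma. The payoffs and round count are illustrative assumptions only, not taken from either post.

```python
# A quick sketch (hypothetical payoffs) of how an unconditional cooperator is
# exploited by a committed defector, while tit-for-tat limits the damage.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(history):
    return "C"

def always_defect(history):
    return "D"

def tit_for_tat(history):
    # Cooperate first, then copy the opponent's previous move.
    return history[-1] if history else "C"

def play(strategy_a, strategy_b, rounds=20):
    hist_a, hist_b = [], []          # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

if __name__ == "__main__":
    for name, strategy in [("always cooperate", always_cooperate),
                           ("tit-for-tat", tit_for_tat)]:
        mine, theirs = play(strategy, always_defect)
        print(f"{name:16s} vs always-defect: {mine} to {theirs}")
```

The unconditional cooperator is fleeced every round; the player who understands the structure of the game and responds to defection is not.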
As Alonzo mentioned earlier, we can indeed change desires, to a certain extent. But which desires should we change? Why? How can we justify making those changes? These are all questions that game theory and meta-game theory attempt to answer in a consistent manner.
His next objection,

"If we look at your original account from Wednesday's post, and raise children so they assign 2 units of value to cooperation itself, then the value of cooperation increases from 3.3 to 5.3, and exceeds the value of defection. We solve the same problem without any of the complexities of game theory."

is difficult to understand. Alonzo has to use game theory to perform this analysis and reach the conclusion that we should change the game by raising children in a particular way. He has employed game theory, not avoided it.
Greetings:
The first part of your response sounds like a person trying to defend the Ptolemaic theory of a geocentric solar system by saying, "All of these complexities in the motions of the planets that the heliocentric system seems to answer - they can all be answered by applying increasingly complex epicycles upon epicycles to the geocentric system."
The point is, I don't need all of that complexity.
Just promote a desire for cooperation (or an aversion to defection), and you can throw out the complex formulae of tit-for-tat strategies and iterated dilemmas and all of the mathematical modeling that would have to be done to incorporate anonymous defection, deception, unequal payoffs, and the like.
Why pick the more complex answer over the simpler answer?
And the theory does not require 'infinite lability'. It only requires enough lability to deal with the bulk of the day-to-day problems that might arise.
Yet, clearly, 'doing the right thing' can be made so valuable to people that they will sacrifice their lives for a principle. So, though infinite lability is not required, we have a great deal of lability to work with.
Which desires should we change?
Let's start with simply promoting a desire to cooperate and an aversion to defection. That will have an immediate effect on these types of 'problems'.
As for your charge that I am using game theory rather than avoiding it - that is a semantic dispute. The solution avoids the prisoner's dilemma because, as in the sample case I described, a sufficiently high value added to cooperation eliminates the dilemma aspect. It makes cooperation both individually and jointly optimal, so there is no 'dilemma' any more. Cooperation becomes the absolute best option.

If you want to define this as being within game theory, then that is fine with me. But it is a wholly different (and far easier) solution from the one you find by looking at iterated prisoners' dilemmas.
In fact, it is a solution that even works on non-iterated prisoner's dilemmas or iterated dilemmas of fixed and known length.
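As a rough sketch of this point, using an ordinary textbook payoff matrix as an assumption (not the figures from the original post, though the 2-unit bonus follows the quoted example), one can check directly that a sufficient intrinsic value for cooperation makes cooperating never worse, and sometimes better, than defecting, so the one-shot 'dilemma' simply disappears:

```python
# A sketch with illustrative payoffs: once cooperating carries enough value in
# itself, cooperation dominates defection and no dilemma remains, even in a
# single, non-iterated play of the game.

BASE = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def payoff(my_move, your_move, cooperation_bonus=0.0):
    """Row player's payoff, with an intrinsic bonus for choosing to cooperate."""
    bonus = cooperation_bonus if my_move == "C" else 0.0
    return BASE[(my_move, your_move)] + bonus

def cooperation_dominates(cooperation_bonus):
    """True if cooperating is at least as good against every opponent move and
    strictly better against at least one, i.e. the dilemma is gone."""
    diffs = [payoff("C", other, cooperation_bonus) - payoff("D", other)
             for other in ("C", "D")]
    return all(d >= 0 for d in diffs) and any(d > 0 for d in diffs)

if __name__ == "__main__":
    for bonus in (0, 1, 2, 3):
        print(f"bonus={bonus}: cooperation dominant? {cooperation_dominates(bonus)}")
```

With these illustrative numbers, a bonus of 2 units is already enough; no iteration, reputation, or retaliation machinery is needed.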
I would also like to direct you to my answer to another part of game theory, The Ultimatum Game.
And I have discussed the above issues on my own blog early last year, in a posting titled Game Theory and Morals.
By the way, I am not saying that game theory is not interesting or that it has nothing to say about rationality. I am only denying that it has much applicability to morality.