Monday, April 14, 2014

Finance and poker

I used to play poker. I don't much play anymore, mostly because I don't have any time or money, but also because people playing poker at casinos are no longer unskilled enough for me to consistently make money.

Poker is a curious game. It is reasonably well-understood that in theory, "optimal" play in poker guarantees that (absent a rake) you will always break even in the long run, regardless of how well or poorly your opponent plays. Thus, to win at poker, you have to figure out how your opponent is playing non-optimally, and play in the corresponding non-optimal way that will give you positive odds. For example, many inexperienced players play too loosely and passively — they play too many hands that have negative odds, and they don't raise enough when they have positive odds. To take advantage of them, an experienced player will play tightly and aggressively, playing only hands with positive odds and almost always then raising.* Note that the "expert" is playing sub-optimally: "optimal" play requires playing marginally positive hands passively, and bluffing some hands with negative odds. But playing that way will simply break even in the long run, even against sub-optimal play. If the other players understand her tight-aggressive strategy, they will simply fold with mediocre hands whenever she raises.

*One fictional trope that I find amusingly unrealistic is the depiction of the "expert" player as one who can bluff his opponents into folding with better hands. In reality, the real expert will make a lot more money by convincing her opponents to call with worse hands; her bluffs are calculated to fail, to convince opponents to call more. Hence, she will usually show her opponents her (infrequent) successful bluffs and hide her winning hands (and hint they were bluffs).
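The "positive odds" the tight-aggressive player insists on can be made concrete with standard pot-odds arithmetic (my illustration, not the author's; the function name is made up):

```python
def call_has_positive_odds(pot, to_call, win_prob):
    """Standard pot-odds rule of thumb: calling `to_call` into a pot
    of `pot` is profitable in expectation only when your probability
    of winning exceeds the call's share of the final pot."""
    return win_prob > to_call / (pot + to_call)

# Facing a 50 bet into a 100 pot, you need better than 1-in-3 equity:
call_has_positive_odds(100, 50, 0.40)  # True:  profitable call
call_has_positive_odds(100, 50, 0.30)  # False: negative odds
```

The loose-passive newbie calls the second spot anyway; the tight-aggressive player folds it and raises the first.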

Another thing that happens all too often in reality is experienced players getting upset with new players for playing poorly. Stupid! You want to encourage poor play and take advantage of it. Poker is not a game of skill in calculating odds; it is a game of psychological observation and manipulation. If you can't manipulate a newbie, you are not even an intermediate player, much less an expert.

There is one situation in poker when playing correctly has positive odds: when you are no-limit heads-up (you can bet any amount you want, up to your total stack, and you are playing against one opponent), and you have at least double the amount of money he has. Then you just go more or less all in on every hand*, counting on the fact that your opponent has to beat you twice, whereas you have to beat him only once. If the blind (forced opening bet) is large, and it is usually very large near the end of a tournament when the last two players are fighting for first place, your opponent can't wait for a good hand to call you.

*IIRC, the exactly correct strategy is to bet so that if you do lose, you equalize your and your opponent's stacks after the next blind.
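One way to read this footnote as arithmetic (my interpretation of the equalize-after-the-blind rule, simplified to a single blind per hand; the function is hypothetical, not a known solved strategy):

```python
def heads_up_bet(my_stack, opp_stack, blind):
    """Bet so that, if you lose the hand, your stack and your
    opponent's are equal after the next blind is posted.  Losing a
    bet b leaves you my_stack - b and the opponent opp_stack + b;
    posting the next blind shifts each stack by `blind` again, so
    equality requires b = (my_stack - opp_stack) / 2 - blind."""
    bet = (my_stack - opp_stack) / 2 - blind
    # Never bet more than the opponent can call, and never less than zero.
    return max(0, min(bet, opp_stack))

# With double the opponent's stack and a blind of 100:
heads_up_bet(20000, 10000, 100)  # 4900
```

Under this reading, the bigger your chip lead, the bigger the bet, which squares with the "more or less all in" description above.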

If you go to a poker tournament, and you play the optimal strategy, you will lose the tournament. On average, you will neither win nor lose, but there will be a player who figures out the sub-optimal play of other players and takes advantage of them (or who plays sub-optimally, with fatter tails*, and gets lucky), and then beats you by brute force at the end.

*i.e. they have a higher probability of either going broke quickly or getting rich quickly, with a lower probability of breaking even.

Finance is the same way. The "optimal" strategy is to create a portfolio such that no matter what happens in the economy, your investment earns, in the long run, the same rate as general economic growth (1-4% per year). You'll never* lose with this strategy, but neither can you win: you will never* become relatively wealthier than someone who plays a sub-optimal strategy and either outsmarts other investors, or who just gets lucky. And when any other player gets substantially more than you, he can beat you down by brute force simply by being irrational longer than you can stay solvent.

*Well, rarely; even an optimal strategy has a little room on the tails.

One charming bit of naivete I see in economists is the idea that economics is fundamentally about the optimal allocation of resources to maximize social production. In some abstract theoretical universe this might be true, but in reality, economics is about power; it is about winning. Hence, people, especially people with a lot of money, are not trying to not lose; they are trying to win. They are trying to defeat their opponents. Hence, they cannot play conservatively, i.e. not to lose; even if their strategy is just naively sub-optimal, there are enough other people that some of them will get lucky and win big. The "conservative" strategy cannot (or only very rarely) win big; the whole point is to balance big losers against big winners, and small losers against small winners. This asymmetry becomes even more pronounced in finance, because the big winners get to actually change the rules. Most notably, the rich socialize their own losses and privatize their gains. This tendency causes the financial market to crash (since the rich have no downside risk, they can make bets with negative long-term odds but the potential for short-term gains). Everyone, even the "conservative" investor, loses everything (or at least all his gains), and then the state makes up the losses of the rich. ("Sure got a nice economy there. Be a shame if it were to burn down.")

The typical capitalist apologetic for this system is that it promotes innovation. The apologetic is half right. Unlike poker, real life has not only risk, but uncertainty. We have to make wildly speculative bets to create fundamentally new things. Everyone laughs at Pets.com, but whoda thunk that a search engine and a discount bookstore would be the primary drivers of internet technology, productivity, and economic growth? The probability of any individual speculation paying off is so low, according to the apologetic, that the payoff must be correspondingly large, and that we cannot punish failure by economic "death." Otherwise, no one would ever take speculative bets, and we would have no (or very little) innovation.

However, there are two flaws in the capitalist apologia. First, the reward for winning speculative bets is not increased consumption (Bill and Melinda Gates and their family could not, and have no intention of, themselves consuming $50 billion of goods and services), but political power: the power to tell people (i.e. the workers) what to do and not to do.* There is no need to reward successful innovators with political power; there are plenty of alternatives, such as social status.

*If you think that power is actually held by our elected representatives in the official government, you are hopelessly naive.

Either people "naturally" (i.e. without special, artificially constructed incentives) want to be innovative, or they do not. (More generally, they might or might not want the fruits of innovation more than they dislike the process of innovation.) If we do not naturally want to be innovative, why should we as a society encourage such behavior? If we naturally do want to be innovative, then instead of creating powerful positive incentives, it is sufficient to only remove negative incentives: do not punish people for trying to innovate and failing.

(The other apologia for capitalism is that most people are stupid, lazy, and irrational; they must be ruled for their own good. "Democracy" is at worst a sham and at best just a check on the most egregious corruption of the elite. As a democrat, I reject this premise, for what I think are good reasons.)

That's why I am coming to believe that finance should be entirely public, run by the government. The government can afford to play conservatively, i.e. to play to not lose. Most of our economy, the economy of food, clothes, houses, electricity, water, cars, gasoline, etc., is a game we want to play to not lose. For the rest, encourage small-scale innovation by removing the negative incentive of losing a year's pay trying to innovate: give everyone a free year to try something innovative (a person would have to prove only that she is not going to sit around for a year watching TV); if they're successful, give them some publicity and another year or two to continue being innovative. If they're unsuccessful, they've lost nothing. For the "big bets," innovations that are beyond the scope of the individual or small group, we should vote; why should a private person individually decide how to innovate with the labor of thousands or millions?

Such a society might not be as innovative as full-throttle innovate-or-die capitalism, but I think it will still have substantial innovation and would definitely be a happier society for everyone.

Friday, April 11, 2014

Belief, disbelief, and/or lack of belief

I saw "What Atheism Really Means" by Mike Dobbins when it came out last month. I chuckled and moved on because Dobbins makes a pointless and irrelevant distinction. But then 3quarksdaily picked it up, so I suppose the editors there are as ignorant as Dobbins about basic philosophy. In his article, Dobbins argues that the definition of atheism as lack of a [positive] belief in God is insufficient, and argues that the stronger definition as disbelief in God is more appropriate. However, Dobbins' objection is irrelevant, because it ignores or conflates different social contexts where various definitions of atheism operate: prosaic, philosophical, and political.

In a prosaic social context, I am happy to use Dobbins' stronger definition: I definitely say that I disbelieve in the existence of god. In this context, I am using the social definition of "god": the sort of being that characters such as Yahweh, Jesus*, Allah, Krishna, the Buddha*, Ngai, etc. purportedly represent. None of these entities actually exist; I believe that these characters are fictional on the basis of evidence and reason. I might be mistaken, of course, but I definitely do believe, and I would rationally defend that belief, that they do not actually exist. In a prosaic context, I agree with Dobbins: the facts warrant a statement of definite disbelief.

*To the extent that explicitly deistic attributes are essential to these characters. In a similar sense, the character of Abraham Lincoln in Benjamin P. Thomas's Abraham Lincoln: A Biography represents a real person, whereas the character of Abraham Lincoln (Benjamin Walker) in the film, Abraham Lincoln: Vampire Hunter is fictional.

However, things get a lot more complicated when philosophers consider an idea. Many atheists, myself included, have studied a considerable amount of philosophy, and there are many philosophers who have examined and defended atheism at the highest professional academic level. In a philosophical context, the precise meaning of words becomes critically important; the unqualified word, "god," becomes unacceptably ambiguous. The sense noted above, beings like Yahweh, etc., i.e. beings with personality, desires, preferences, and who intervene in the physical world to effect their will, is only one sense. There is also the deistic god, a god who sets the world in motion with a set of physical laws and then does not intervene further. This sort of god is not so much disbelieved as dismissed. While it would be nice to know whether such a god exists, even if it did, it would have so little impact on my daily life that, in the absence of any evidence (even if such evidence could be adduced), deciding one way or the other is a waste of time. Finally, there are the "gods" of Sophisticated Theology™. For example, Jerry Coyne (who reads Sophisticated Theology™ so I don't have to), quotes David Bentley Hart's book, The Experience of God: Being, Consciousness, Bliss:
To speak of “God” properly, then . . . is to speak of the one infinite source of all that is: eternal, omniscient, omnipotent, uncreated, uncaused, perfectly transcendent of all things and for that very reason absolutely immanent to all things. God so understood is not something poised over against the universe, in addition to it, nor is he the universe itself. He is not a “being,” at least not in the way that a tree, a shoemaker, or a god is a being; he is not one more object in the inventory of things that are, or any sort of discrete object at all.
It seems clear that Hart's definition of god is not the sort of... concept?... that I can have any belief one way or another regarding existence. To be philosophically rigorous, the stronger, definite statement of disbelief is too narrow to encompass all these different definitions of "god"; the broader, and admittedly weaker, definition of "atheism" as a lack of positive belief succinctly covers all these cases.

In addition to ambiguities in the meaning of "god," there are also ambiguities and subtleties in the word "believe." In a philosophical sense, a person can believe or disbelieve only propositions, i.e. statements that can coherently be either true or false. (Philosopher Theodore Drange explores this concept in some depth in his 1998 article, "Atheism, Agnosticism, Noncognitivism.") If "God exists" is a proposition, then I can definitely disbelieve it. However, if "God exists" is not a proposition, as Hart seems to claim, then I can neither believe nor disbelieve it. In a similar sense, I can neither believe nor disbelieve the statement, "Colorless green ideas sleep furiously," nor can I believe or disbelieve emotive sentences such as "Yay!" or "Boo hoo!" Again, confronted with a vast range of ways that theists present the propositional status of "God exists," I can be both precise and compact only by asserting that I lack a positive belief about the existence of God.

In addition to senses of god that are not propositions, there are senses that are propositional but cannot be known. To illustrate this principle, consider the statement, "There is [present tense] a ninja hiding in the room." First, this statement is hard to prove: ninjas are, by definition, far more skilled at hiding than I am at detecting them. More importantly, though, even if I discover a ninja in the room, he or she is ipso facto no longer hiding. Neither discovering nor failing to discover a ninja, therefore, is evidence for or against the proposition. While the statement is propositionally, semantically, and even scientifically unproblematic, it is fundamentally unknowable by definition. While I might be able to come to a definite belief on indirect evidence (it seems unlikely that any ninja would want to hide in my office), if I am going to be rigorous (or if I am considering a statement where indirect evidence is unavailable), I have to simply deny any belief.

Another philosophical subtlety comes from the way that scientific naturalists such as myself view knowledge. First, in the scientific naturalist account, without exception, all knowledge — i.e. all propositional statements about reality — is always provisional. All knowledge is conditioned on evidence, and any individual human, as well as all human society, has at any time only a small, finite subset of the very large and possibly infinite body of available evidence; all knowledge, therefore, is subject to revision given new evidence. Because all knowledge is provisional, it's unnecessary to explicitly condition knowledge statements with provisionality. The sentence, "I believe (or know) that two bodies experience an attraction described by general relativity, which can be closely approximated at low densities as a force proportional to the product of the masses divided by the square of the distance," does not gain any additional meaning by adding provisos noting that further evidence might change my opinion. Because there are no statements about reality that are believed non-provisionally, we don't need to distinguish between provisional and non-provisional beliefs, and the linguistic distinction is dropped as redundant. In my own writing, I try to avoid the word, "certainly," replacing it with "definitely," but my vocabulary was shaped by convention, not scientific rigor, so I occasionally err. To the obtuse or unaware, unconditioned statements about knowledge sometimes appear to be stating facts with certainty rather than definiteness. Thus, even towards conceptions of gods that I disbelieve, I definitely disbelieve, i.e. I have made a decision, but I do not certainly disbelieve.

A more important consideration, however, requires looking a little more deeply into how scientific naturalism works. Because all knowledge is provisional, it is always statistical, at least conceptually. (I have to egregiously simplify here, but I hope to capture an essential feature about scientific knowledge.) In a statistical model, we create a "null hypothesis," which represents a default belief about the world, and an "alternate hypothesis" which represents the negation of the null hypothesis. For example, I might say that the null hypothesis is that the average height of men in the United States is equal to 179 cm, and the alternate hypothesis is that the average height is not equal to 179 cm; on average they are either shorter than or taller than 179 cm. Note that the null hypothesis is probably not precisely correct; even if the average height is very near 179 cm, it is probably not exactly 179.000000000 .. 000 cm (we can measure length very precisely). (This imprecision is not really problematic; close enough is close enough, and if I'm designing a car or a house, for instance, I don't need to know the average height to nanometer precision.) In addition to being not precisely correct, the null hypothesis is usually not directly provable; it is only disprovable. If I measure the height of 300* men, and find that their average height (sample mean) is 180 cm, with a standard deviation of 10 cm, then I know with about 95.8% confidence that the average height of all men is not 179 cm. Note that I do not know that the average height of all men is 180 cm; I have "proven" (provisionally) only the alternate hypothesis, which is that the average height is not 179 cm. The best I can say is that I have good evidence for now considering 180 cm to be the new null hypothesis when talking about the height of American men.

*At this sample size, the difference between the normal and t distributions is negligible.

This method impels a curious terminology that any competent professor of statistics will impress on her students: you say you reject the null hypothesis or you fail to reject the null hypothesis; you do not, on pain of durance vile, ever say you accept the null hypothesis. Similarly, you say you have sufficient evidence to conclude that the alternate hypothesis is true, or you have insufficient evidence to conclude the alternate hypothesis is true; you never say you conclude that the null hypothesis is true. Strictly (very strictly) speaking, therefore, a scientific naturalist never actually believes the currently specified description of the world, a systematic collection of null hypotheses; she believes, instead, that she has insufficient evidence to conclude that the world is different from this current specification.

Note that "insufficient evidence" applies equally to edge cases as well as to non-edge cases. In the above example, if I had measured the height of only 250 men, I would be only 94.3% confident that I can reject the null and conclude the alternate hypothesis (that the average height was not 179 cm) was true. Because by convention I will reject only if I am 95% confident, I will fail to reject the null and conclude that I have insufficient evidence to conclude that the average height is not 179 cm. Similarly, if I find the average height of my sample to be 179.1 cm, I will be only 56.2% confident that the null is false, but I will still just say that I have insufficient evidence. (If I measured 30,000 men, however, a sample average of 179.1 would give me 95.8% confidence to reject the null.)
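The quoted confidence levels can be reproduced with a one-tailed normal approximation (a sketch of the implied calculation, not necessarily the author's; a two-tailed test of the "not equal to 179 cm" alternative would give lower confidences, and the 56.2% figure comes out nearer 57% under this approximation):

```python
from math import erf, sqrt

def confidence(sample_mean, null_mean, sd, n):
    """One-tailed confidence for rejecting the null hypothesis that
    the population mean equals null_mean, using the normal
    approximation (at these sample sizes, the difference from the
    t distribution is negligible)."""
    z = abs(sample_mean - null_mean) / (sd / sqrt(n))
    return 0.5 * (1 + erf(z / sqrt(2)))  # Phi(z), the standard normal CDF

confidence(180,   179, 10,   300)  # ~0.958: reject the null
confidence(180,   179, 10,   250)  # ~0.943: fail to reject at 95%
confidence(179.1, 179, 10,   300)  # ~0.57:  fail to reject
confidence(179.1, 179, 10, 30000)  # ~0.958: reject
```

The last two lines show the point about sample size: the same 0.1 cm difference that is noise at n = 300 becomes statistically significant at n = 30,000.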

In practice when we repeatedly test and fail to reject some specification of the world, especially when our failure to reject is not borderline, we have good reason to believe the world really is at least very close to the specification. Still, when pressed, and in ambiguous or uncertain circumstances, scientific naturalists tend to retreat to "insufficient evidence" semantics.

I hope you'll forgive me, gentle reader, when I tell you we atheists really don't care that much anymore about the philosophical subtleties I have wasted so much of your time describing to you. As far as most atheists are concerned, the philosophical and scientific debate is over, decided. No matter what definition of "god" you choose (that is not intentionally metaphorical and does not do unacceptable violence to the meaning of the word, "god"), your definition is meaningless, non-propositional, unknowable, or rejected by the evidence. We make a nod to the philosophical subtleties by making the most general statement — we lack a positive belief about god, which includes disbelief in some definitions of "god" — when concision is more important than detail.

Atheism is not primarily a philosophical position; it is a political position. Our position is that all god talk (that is not intentionally metaphorical) is not just nonsense, but pernicious nonsense. Religion is not just a weird thing that some people do in private; it has profoundly negative effects on our societies, cultures, and nations (and what positive effects it might have would be at least as good, and usually better, if the god talk were eliminated). We define atheism broadly not just as a nod to the philosophical subtleties, but also to be as inclusive as possible to people who reject god talk for a variety of reasons, with various degrees of philosophical sophistication. We want to include as an "atheist" someone who is not particularly interested in philosophy, who just doesn't know whether or not Yahweh and Jesus are real, but who finds offensive and absurd, as we do, the notion that, for example, the leader of an organization of supposedly celibate men, an organization that has gone out of its way to protect and defend men they know have raped children, has anything whatsoever legitimate to say about how consenting adults employ their genitals or women employ, or refuse to employ, their uteruses. If you can say only that you lack a positive belief about god, and that people who say they do have any sort of positive belief thereby gain no moral or scientific authority whatsoever, you're one of us.

Monday, March 24, 2014

Gray on Nietzsche, and on becoming "gods"

Although John Gray's review of The Age of Nothing by Peter Watson and Culture and the Death of God by Terry Eagleton, "The ghost at the atheist feast: was Nietzsche right about religion?" begins with an egregious non sequitur (described below), Gray really doesn't have much to say about modern atheism in this review. Gray first summarizes Watson's book, which, Gray claims, centers the modern debate on secular ethics around Nietzsche's charge that secular ethics, such as Bentham's and Mill's utilitarianism, relied on "theistic concepts and values," and must thereby be rejected. Subsequent ethical philosophy, according to Gray's view of Watson's book, consists in substantial part of answering Nietzsche's challenge. In contrast, Gray reads Eagleton as placing religion outside culture, taking the role of a force to restore a sense of "tragedy" to modern society. Although Gray rejects Eagleton's thesis, objecting that both Christianity and secular revolutionaries such as Lenin are the complete opposite of the ancient Greek notion of tragedy — "a conflict of values that cannot be revoked by any act of will" — he finds Eagleton's book profound in many ways, especially praising Eagleton's description of a "mythologised" Enlightenment divorced from modern reality. While both books seem interesting, Gray's connection of these books to modern atheism seems strained and artificial.

As noted above, Gray begins with a non sequitur: "There can be little doubt that Nietzsche is the most important figure in modern atheism, but you would never know it from reading the current crop of unbelievers, who rarely cite his arguments or even mention him." Perhaps Gray means here that Nietzsche should be the most important figure, but importance would seem to be defined by use; if Nietzsche is, as Gray asserts, widely ignored, then he is ipso facto unimportant. If Gray means something else by importance, however, the case must be made directly, not simply assumed. Furthermore, Gray cites Watson's argument that much of late 19th and early 20th century philosophy, politics, and culture forms a direct engagement with Nietzsche; to the extent that 21st century atheism has abandoned its engagement with Nietzsche, a more nuanced explanation is required than Gray's facile and dismissive denigration of modern atheists as "loud in their mawkish reverence for humanity, and stridently censorious of any criticism of liberal hopes." Although Gray simply fails to connect Watson's and Eagleton's books to modern atheist thought, Gray at least raises a point worthy of consideration.

As I have written many times before, modern atheism is primarily a social, cultural, and especially political movement. Our aim is to destroy the social privilege to claim any sort of moral authority on any "religious" basis. We oppose religious moral authority on methodological, not consequential, grounds (although obviously negative consequences do form an important critique); thus, we oppose religious moral authority even when that authority demands moral beliefs we find agreeable. Inexorably tied to this political stance is what Gray describes as the "Nietzschean imperative — the need to construct a system of values that does not rely on any form of transcendental belief." This imperative raises three specific challenges. First, is the Nietzschean imperative itself a transcendental belief? Second, does the pervasive liberalism of modern atheists rest on a transcendental belief? Third, is the Nietzschean imperative untenable — must morality itself rely on a transcendental basis? Finally, do modern atheists successfully address these challenges?

Even though Gray does not raise the first challenge, and I include it here only for completeness, it is relatively easy to address. First, even if the Nietzschean imperative were transcendental, it is not itself a moral belief; it is a meta-moral belief. In just the same sense, the* definition of science as "conclusions about objective reality logically drawn from observation and experiment" is itself not a scientific statement; it is a meta-scientific statement. It is not a statement about objective reality, it is a statement about how we choose to draw conclusions about reality. Second, the Nietzschean imperative is easily repaired by restating it as a project rather than an imperative: we want to construct a system of values without transcendence. Stated so, it simply becomes a descriptive statement about preferences, without any need to invoke transcendence. The self-referential challenge is therefore not a compelling challenge to the atheist project.

*I use the definite article not to imply that there is only one definition, but simply to refer to the specific definition offered.

The second challenge is more pointed. To a certain extent, it is not terribly relevant; the atheist propensity towards "liberalism" (which term, it must be noted, is extremely vague) may just be an artifact of the general propensity of the population to be liberal, with perhaps a bias against "illiberal" (also a vague term) people superficially denying some sort of transcendence. But denigrating a group because it has only a popular political agenda would seem to privilege only philosophers to have "legitimate" political opinions, which seems anti-democratic and in need of a more direct argument; furthermore, this criticism seems to be rarely applied to groups other than atheists. Atheists are political in the ordinary, prosaic sense that everyone is (or is expected to be) political in a (more-or-less) democratic republic. Big deal.

But Gray makes more direct assertions. First, atheists today "embody precisely the kind of pious freethinker that Nietzsche despised and mocked: loud in their mawkish reverence for humanity, and stridently censorious of any criticism of liberal hopes." It's difficult, however, to see this charge as anything but a gratuitous insult. I'm not a scholar of Nietzsche, but I've read enough to know that Nietzsche's aesthetic standards, while certainly refined, usually have considerably more subtlety. Nietzsche certainly criticizes sentimentality, i.e. misplaced emotion (as I recall, he uses the example of the young bourgeois woman shedding a tear over the plight of a theatrical heroine while her footman freezes waiting for her outside the theater). But "mawkish" is not "sentimental," at least not in the above sense, and "censorious" simply means strongly critical; if we sincerely believe liberal virtues and hopes to be of value, why should we not be censorious? (And calling atheists strident has become such a banal cliche that I object not as an atheist or philosopher but as a tutor of English composition.) Gray does not merely fail to hit the mark in his philosophical critique of atheism; he misses the target entirely.

Gray's second criticism is at least relevant. He asserts that atheists must believe (if his invocation of Nietzsche's argument is relevant) that "the world can be made fully intelligible, [emphasis added]" presumably through the application of reason and observation, a belief that Nietzsche holds must be an "article of faith," and not "a premise of rational inquiry." Nietzsche (and Gray) might be correct: the hope for full intelligibility might require faith, but why should liberal rationality, or any other secular ethical philosophy, require full intelligibility? The position of modern atheists neither requires nor asserts full intelligibility; atheists claim only that rationalism provides some intelligibility, and that whatever "intelligibility" religion might provide is trivial, specious, or insupportable. We do not assert that we have all the answers; we assert merely that religion does not have any good answers we do not already have. So although not entirely off target here, Gray again misses the mark and demolishes a straw man.

However ineptly handled, Gray does raise a point worthy of consideration. Nietzsche is a subtle guy, and I'm no scholar of his work, but I've read enough to have picked out one theme: to be a "god," in Nietzsche's metaphor, is to create moral truth. Adam and Eve (the mythological characters) become human when they know God's moral truth; modern human beings, jointly and severally, become "gods" when we reject God's authority to set moral truth and create our own. And this is the "terrible" truth of atheism: there is no God to constrain, however indirectly, our individual and social moral choices. The only external constraint (other than physical law) on an individual's moral choice is what other people will compel or forbid. And there is no external constraint on our social moral choices; an uncaring and indifferent universe will not compel or guide us to create a "good" society nor forbid or hinder us from creating a "bad" society. What and who we are, in a moral sense, is entirely in our hands.

To a certain extent, I suspect modern atheists take this truth, that we have become Nietzschean "gods," for granted. We argue for liberalism (or, as in my case, various radicalisms) not because we see these visions of society as externally mandated, but simply because we want, and many people around us say they want, such a society. Unlike Nietzsche, we simply accept the responsibility of creating our own society, our own morality; we are not existentially or psychologically crushed or awed, as Nietzsche perhaps was, by the weight of this responsibility. Given that we know (or perhaps just subconsciously take for granted) our social morality is a choice, liberalism is an easy choice: who wouldn't choose a society that promoted the dignity and well-being of everyone? (And even radicals such as me are fundamentally liberals; we do not disagree on ends, only means.) If society is what we choose, let us choose and make it so.

The realization that there are no external constraints on our moral choices, neither divine nor natural, destroys the twenty-five-century-old fundamental project of philosophy: to discover the external constraints on our moral choices; in essence, to replace moral choice with moral truth. Without this project, philosophy becomes trivial: ontology becomes materialism (or physicalism, to the annoyingly particular purists), epistemology becomes the scientific method, aesthetics becomes fashion, and politics becomes pragmatism. It takes no philosophy at all to say we simply do as we please, and without the need to justify assertions of moral truth, we need no metaphysical, unscientific ontology to define moral realism and no rococo, unscientific epistemology to support it. All we need to do is face the truth that who we are, as individuals, as societies, and as a race, is no more or less than what we choose.

Liberal religious believers must oppose atheism because we undermine their worldview as thoroughly as we undermine the illiberal religious worldview: it is just as unfounded to ground a liberal ethic in a God as to ground an illiberal one. Philosophers have to oppose atheism because we undermine their worldview as thoroughly as the religious: it is just as unfounded to ground a liberal worldview in philosophy as it is in God. There is no moral ground. Full stop. There is only choice, which is, by definition, no ground at all.

We atheists are terrified neither by the responsibility nor the license. It is we who must make our world, so we want to make it. We "can" — nothing in the objective reality outside our minds prevents us — do anything; we choose to be kind.

Wednesday, March 19, 2014

A perfect example

Occasional commenter (and persistent annoyance) Major Nav pretty much illustrates the irritations I complain about in my recent post, On method:
From Major Nav:
You are fooling yourself to believe communism is a good thing or an achievable end.
To support your argument, you "study capitalism" to seek out examples of where you "believe" it is harmful while ignoring the harmful results of all previous attempts at communism. If anyone calls you on it, you just say "That was the old communism, I'm talking about neocommunism."
And you seek out authors and pick through their writings to select out of the context, an idea that is close to your concept and sprinkle them through your writings. Usually as a reference to the obscure article vs a direct quote. As if anyone else has read the article.

If all you have is a hammer, everything looks like a nail. Put down the hammer once in a while, take a step back and chose another viewpoint.

Let's break this down.

You are fooling yourself to believe communism is a good thing or an achievable end.

This is a "criticism" (actually just a complaint) about my conclusions, not my methodology.

To support your argument, you "study capitalism" . . .

This quotation is a blatant insult. I do not "study capitalism" with scare quotes; I actually do study capitalism, at an accredited university with a moderately prestigious economics department, I get excellent grades, and most every professor I have studied under or worked with — all committed capitalists — has offered to write me a letter of recommendation to any graduate school I wish to apply to. (And this ain't chopped liver: the reputation of an undergraduate program depends almost entirely on the performance of its students in graduate school; no professor will recommend a student he or she believes will fail.)

I'm not offended by the insult; the fact that ignorant tools like Major Nav have to depend on insult rather than reasoned argument shows the weakness, vacuity and dogmatism of their own position.

To support your argument, you "study capitalism" to seek out examples of where you "believe" it is harmful . . .

I have no idea why Major Nav puts "believe" in scare quotes; perhaps he is unfamiliar with the ordinary rules and meaning of English punctuation.

I don't need to study capitalism to discover examples of where it actually is harmful; I just need to read the newspaper. I study capitalism to discover why it is harmful, and where and why it is successful.

To support your argument, you "study capitalism" . . . while ignoring the harmful results of all previous attempts at communism. If anyone calls you on it, you just say "That was the old communism, I'm talking about neocommunism."

I understand that Major Nav is creating fictional dialog, but really: I have written (by a rough estimate) a half-million words on the blog, all searchable. Is it too much to ask that Major Nav actually quote me?

And I'm unsure precisely what Major Nav is accusing me of here. Do I ignore the bad effects of communism, or do I recognize them, try to identify the causes, and change my ideology to account for that recognition? Is Major Nav trying to accuse me simultaneously of dogmatism and rigidity on the one hand and opportunism and excessive flexibility on the other?

And you seek out authors and pick through their writings to select out of the context, an idea that is close to your concept and sprinkle them through your writings. Usually as a reference to the obscure article vs a direct quote. As if anyone else has read the article.

What!? I engage with the scholarly literature, and cite and link to my sources? How rude! Can you get any more intellectually dishonest? I hang my head in shame.

If all you have is a hammer, everything looks like a nail. Put down the hammer once in a while, take a step back and chose another viewpoint.

This metaphor makes no sense. What does the "hammer" represent? A hammer is a tool, not a conclusion, and certainly not a viewpoint.

Let me reiterate:

I write about controversial topics. I already know that people disagree with me. I am absolutely uninterested that you personally disagree with me. I'm not particularly interested that people I already know and respect disagree with me or that people with impressive credentials disagree with me; if you're an anonymous, uncredentialed commenter, I care even less.

On the other hand, I'm very interested in specifically where and how you think I'm fooling myself. But remember, the charge that I'm fooling myself is a methodological criticism. It is not only useless but also the epitome of dogmatic obtuseness to assert, as does Major Nav, that I must be fooling myself just because I have come to a conclusion you disagree with.

Like any social person, I get irritated when people insult me. But being insulted has absolutely no effect on what I believe or understand; I have never changed my mind because someone insulted me, no matter how well I know the person or how highly I value their good opinion. If your intention is to gratuitously irritate me, go ahead and insult me. You'll get one (maybe two) shots, and then I will, like any rational person, simply refuse to engage with you.

Tuesday, March 18, 2014

On method

Recently, a couple of comments (here and here) prompt me to talk about method.

I take Feynman very seriously: "The first principle [of scientific integrity] is that you must not fool yourself--and you are the easiest person to fool. So you have to be very careful about that. After you've not fooled yourself, it's easy not to fool other scientists. You just have to be honest in a conventional way after that." I try, indeed I try very hard, to apply this principle in my criticism both of others and of myself.

Feynman's assertion raises two questions. First, what does it mean to fool oneself? Second, can we distinguish between fooling oneself and not fooling oneself, and if so, how?

Feynman makes it clear that fooling oneself is a matter of method. In his view, when someone distorts an argument to support a preconceived idea, he is fooling himself. Note that Feynman does not say that having or investigating a preconceived idea is by itself fooling oneself; fooling occurs when the investigator allows his preconceived idea to distort the argument. So what does it mean to "distort" an argument? Feynman uses subsequent investigations of Millikan's electron charge experiments as an example: the investigators distorted the data to get values closer to Millikan's answer, i.e. their preconceived idea of what the answer "should" be.

If fooling oneself is a matter of method, then it should be possible to tell the difference between fooling oneself and not fooling oneself by looking at the method. Science has a lot of interesting procedures to avoid fooling oneself. The most obvious is the double-blind method: neither the subject nor the person collecting the data knows whether the person is being treated or is acting as a control. Since neither the subject nor the investigator knows what the answer "should" be (the preconceived answer is that there is a difference between the measurements of treated and untreated (control) subjects), they cannot bias the measurements. The double-blind method seems like an effective technique for removing one kind of fooling oneself.

It is easier to fool oneself in philosophy and non-experimental argumentation, simply because there isn't an easy way like the double-blind technique to remove bias. But there are other ways. Does the author employ or avoid well-understood logical fallacies? (In my more cynical moods, I suspect that the primary project of academic philosophy is to teach students to write so turgidly as to prevent the detection of fallacies.) Does the author critically examine opposing viewpoints? Does she try to represent those opposing viewpoints fairly and honestly? These methods are not as tight and effective as double-blind testing, but they can, in my opinion, do a lot of work detecting and correcting fooling oneself.

Which brings me to the "criticism" of my own work cited above. First, I am always looking to see if and how other thinkers, especially thinkers with whom I disagree, fool themselves or avoid fooling themselves. Which means I am looking not at their conclusions, but at their methods. For example, when I examine Plantinga's Modal Argument, I look not at his conclusion that God exists, but at his method: I argue that he has made a methodological error, a logical fallacy. The question is not whether I am "motivated" to investigate his argument because I disagree with his conclusion, but rather whether my motivation has distorted my criticism, distorted my own argument.

A lot of the criticism of my work in the comments tends to come in three categories. First, people who just insult me. I have enough self esteem that insults from random people on the internet do not cause me the least distress. The only negative effect of this sort of criticism is irritation that I have to waste my time reading and possibly moderating a useless comment. Second, people who just assert their disagreement with my conclusion, which is again a pointless waste of my time (and if the commenter intends to affect my views in any way, a waste of his or her time). I really don't care that people disagree with me. It would be a waste of my time to write about what everyone agrees with; indeed, the more controversial a topic, the more interesting.

What I really do want to know is: am I fooling myself? If you do not believe I care whether or not I'm fooling myself, why even bother to comment? I honestly don't understand it. No one but me reads the comments, especially in older threads. Why even bother to register your dissent? If I want to fool myself, or I don't care about fooling myself and others, and registering dissent actually mattered, I would just delete the comments, which I can do without detection. (And you don't know I haven't, eh? But I leave them all in, unless they simply repeat an earlier point and I want to make clear that I am unwilling to waste any more of my time reading and moderating a commenter.)

I will give nontrivial attention only to criticism that addresses my method, i.e. how I construct my argument. Am I making a logical fallacy? Have I failed to address an important opposing viewpoint or argument? Do I make unexamined or unsupported assumptions? (Note too that this is a blog, not an academic journal; many posts here represent preliminary speculation, not fully-formed arguments.)

By the way, it's really important to cite and summarize opposing views and offer a basic analysis of how the opposing view affects my own argument. If you don't cite, I have no idea what you're talking about; even if I Google a vague reference, for example Edward Feser, I don't know whether what I find is what the commenter is referring to. If you don't summarize or analyze, I have to substitute my own judgment of the opposing viewpoint for my critic's, which just introduces bias. I have a limited amount of time, and if the work of someone such as Feser superficially appears to be worthless (and his work does superficially appear to be worthless), I'm not going to waste my time without some evidence that a deeper analysis is worthwhile.

I'm open to criticism; I really do believe that I can possibly be fooling myself, that I can possibly be ignorant of really good arguments that contradict my position (which is why, for example, as a communist I study capitalist economics). I don't even mind rudeness per se, but if you are rude, especially without direct provocation, your support should probably be stronger than otherwise. But if you're not interested in helping me figure out how I'm fooling myself, don't bother commenting. I take Heinlein to heart: "Never wrestle with a pig. You both get dirty and the pig likes it."

Monday, March 10, 2014

Intro to Macro: National accounting

Our diagram from last time is oversimplified; it's missing government, a financial sector, and foreign trade (i.e., it models a closed economy), as well as any kind of stock of capital/inventory or money. But it's useful to illustrate a few concepts. The first is national accounting. Our diagram includes all the households and firms in a "nation"; as macroeconomists, we track the flow of money, i.e. how often money crosses the boundary between firms and households. We total up the flows at the end of the year, and those are our national accounts for that year.



But first, a few simplifications: accounting for housing and land rent is really weird; furthermore, economists don't consider land rent to be that economically interesting, since we can't create any more land. Therefore, we usually just ignore land and rent, and focus on labor and capital. Since we ignore land rent, and we're lazy, we often use the letter L for Labor. The "rent" that people pay their landlords is consumption of goods and services, i.e. the physical building, which has to be actually built, and which wears out over time. Similarly, building a house is investment, i.e. production of capital. However, our current model does not have any notion of a stock of capital, so we're just going to ignore housing completely for now, until we improve our model.

Transactions that count as macro flow include:

  1. Alice's household buys $100 of food from Zelda's Groceries (C = $PQ)
  2. Bob's household receives a $100 paycheck from Yarrow's Electronics (FL = $wL)
  3. Carol's household buys a new printing press for $1000 (I = $PQ)
  4. The Daily Press pays $100 rent to Carol's household to use her printing press (FK = $rK)

The equations in parentheses indicate how we account for each transaction. For 1 and 3, on the left, we have C and I: Consumption spending and Investment spending. These refer to spending on the bottom arrow of the diagram. For 2 and 4, we have FL and FK, compensation for the Factors of Production, i.e. Labor and Capital.

Transactions that don't count as macro flow include:

  • Zelda's Groceries buys $100 of carrots from Andy's Farm; we'll count this income when they sell the carrots to consumers
  • Betty's household buys a $100 used car from Yarrow; no new production has occurred; we're just shuffling assets
  • Carl's household borrows $10 from Dana's household; again, we're just shuffling assets around

One interesting thing to note is that the money paid to the factors of production must exactly equal the money spent on consumption and investment. Therefore, we can measure the national economy just by looking at one side or the other. Traditionally, economists look at the household spending side, because that's easier to measure. Thus, we say that nominal national income (Y) equals consumption (C) plus investment (I): Y = C + I.
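The four transactions that count as macro flow can be tallied into the accounts with a short sketch (the dollar amounts are the illustrative figures from the list above, not real data):

```python
# Tally the example transactions into national accounts.
# Spending side: consumption (C) and investment (I).
# Factor side: labor compensation (F_L) and capital compensation (F_K).
transactions = [
    ("C", 100),    # Alice buys $100 of food
    ("F_L", 100),  # Bob's $100 paycheck
    ("I", 1000),   # Carol buys a $1000 printing press
    ("F_K", 100),  # The Daily Press pays Carol $100 rent on the press
]

C = sum(amt for kind, amt in transactions if kind == "C")
I = sum(amt for kind, amt in transactions if kind == "I")
Y = C + I  # nominal national income, measured from the spending side

print(Y)  # 1100
```

Note that only the spending-side entries (C and I) enter Y; the factor payments are the other side of the same flow, so adding them in too would double-count.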

This equation shows nominal income, i.e. all the variables are denominated in money; economists typically use capital letters to denote nominal values. We're also interested in real values: how much actual stuff is being produced and consumed? If we produced the exact same amount of stuff, but all the prices of stuff doubled, Y (and C and I) would double, but we wouldn't be materially better off. To handle this situation, we look at the price level (P), which is just a weighted average of the prices of individual goods and services. Therefore, real income (y) = nominal income (Y) divided by the price level (P): y = Y/P.
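The nominal/real distinction can be sketched in a couple of lines (the starting values are hypothetical, chosen only to show that doubling every price doubles Y but leaves y unchanged):

```python
# Real income y = nominal income Y deflated by the price level P.
Y = 1100          # nominal income (dollars)
P = 1.0           # price level (weighted average of prices)
y = Y / P         # real income: units of actual "stuff"

# Now double every price, holding physical output fixed.
Y2, P2 = 2 * Y, 2 * P
y2 = Y2 / P2

print(y == y2)    # True: real income is unchanged
```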

Another interesting thing to note is that a coin, a given physical unit of money, will be spent multiple times across the household/firm boundary. In our model, every household and firm spends all its money every sub-period, so if our sub-period is a week, and all households spend their money on Monday (go to the market), and all firms spend their money on Friday (and everyone stays home on Saturday and Sunday), then every coin will cross the household/firm boundary once a week on both arrows. The number of times, on average, a coin crosses the household/firm boundary on the income side (remember, the income side is equal to the factors side) is called the velocity of money (V). Therefore, if we have a given physical quantity of money (M), then nominal income (Y) equals the total amount of money (M) times the velocity: Y = M*V. Because we like to look at real income, we substitute Y = P*y to get the fundamental accounting identity, M*V = P*y.
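A quick numerical check of the identity, using hypothetical values for the money stock and nominal income (not figures from any real economy):

```python
# Quantity identity: M * V = P * y, where Y = P * y is nominal income.
M = 500.0    # physical stock of money (dollars of coin)
Y = 1100.0   # nominal income over the year
V = Y / M    # velocity: each dollar crosses the boundary Y/M times per year

P = 1.0      # price level
y = Y / P    # real income

# The identity holds by construction: V is *defined* as Y/M,
# so M*V just reassembles nominal income.
assert abs(M * V - P * y) < 1e-9
print(V)  # 2.2
```

The point of the check is that M*V = P*y is an accounting identity, not a theory; it constrains the four variables but doesn't by itself say which one moves when another changes.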