
Sunday, November 10, 2013

Hayek's theological epistemology

Had Friedrich Hayek simply stated that economics and the social sciences in general have the most complicated subject matter so far known, i.e. human society, and that we should formulate social policy with extreme caution because the scientific knowledge we can gain about our society is limited, he would have been correct. And he might also have been profound: it is perhaps the case that economists are too confident about their scientific knowledge. But in his Nobel Prize speech, The Pretence of Knowledge, Hayek makes a much stronger claim. Economists, Hayek argues, have made serious policy errors because they have aped the forms of science in a field where science itself does not and cannot apply. Hayek first establishes that there is a problem, the "serious threat of accelerating inflation." Hayek attributes this problem proximately to "scientism," the idea of a "simple positive correlation" between employment and aggregate demand; Hayek asserts that economists accept this idea because they employ a "mechanical and uncritical application of habits of thought to fields different from those in which they have been formed."* Hayek does not believe that this correlation between employment and aggregate demand is unscientific; he admits that it is the only theory for which "strong quantitative evidence can be adduced." However, Hayek believes it is "fundamentally false" and "harmful" when used to guide public policy.

*Hayek quotes himself here, from "Scientism and the Study of Society," reprinted in The Counter-Revolution of Science.

According to Hayek, economics (and presumably all social sciences) cannot productively use the scientific method. The data necessary to construct good scientific theories are "necessarily limited" and important information may not be available. In contrast, Hayek asserts that in the physical sciences, all important information is "directly observable and measurable." Because of the limitations on the availability of data, instead of observing what is important, social scientists declare that only what is observable is important. This tendency "quite arbitrarily limits the facts which are to be admitted as possible causes of the events which occur in the real world. . . . We know . . . a great many facts which we cannot measure and on which indeed we have only some very imprecise and general information." Because these facts cannot be confirmed by quantitative measurement, they are excluded from consideration in mainstream economics. Thus, Hayek asserts, scientism causes economists to accept false theories with good scientific support, such as the causal connection between employment and aggregate demand, and reject true theories without scientific support, such as Hayek's alternative explanation of structural unemployment. Although Hayek is correct in identifying society as the most complex object of study, his analysis is otherwise completely incorrect, and his alternative is utterly without any intellectual support.

Hayek first mischaracterizes the scientific method. Although he mentions Popper approvingly, he deprecates the notion of falsifiability and instead imputes to science a requirement of behaviorism, also known as positivism. Taken from psychology, behaviorism specifically asserts that because we cannot directly observe what is in a person's mind, the mind has no physical meaning; at best we can merely talk about correlations between observable inputs and observable behavior. More generally, as proposed in philosophy by the Vienna Circle, positivism asserts that only that which is directly measurable has any physical meaning. The Vienna Circle, including Carnap, Popper's primary intellectual opponent, quickly realized positivism's untenability and abandoned the concept. So far as I know, no philosopher today holds that positivism is a foundational concept in the philosophy of science. Had Hayek criticized economists and social scientists for positivism, his critique would have been correct and perspicacious.

But Hayek believes that science itself requires positivism. Positivism, according to Hayek, is a routine, unobjectionable element of the physical sciences: "[I]n the physical sciences it is generally assumed, probably with good reason, that any important factor which determines the observed events will itself be directly observable and measurable." This is simply not true of any science, not even physics. In 1946, Albert Einstein criticized Ernst Mach's positivism, saying,
[Mach] did not place in the correct light the essentially constructive and speculative nature of all thinking and more especially of scientific thinking; in consequence, he condemned theory precisely at those points where its constructive-speculative character comes to light unmistakably, such as in the kinetic theory of atoms.*
Hayek's attribution of naive positivism to science is simply mistaken.

*Quoted in Einstein's Philosophy of Science, by Don A. Howard.

Curiously, Hayek opens his speech with a purely scientific critique of a simplistic correlation between employment and aggregate demand. Assuming this simplistic correlation really was fundamental to economic theory at the time, it is falsified by the directly observable experience of inflation: a simplistic correlation would predict that increasing the money supply when there is above-normal unemployment will not cause inflation. Under Popperian falsification, Hayek's criticism is dispositive: we have directly observed events which contradict the predictions of the theory; therefore, something is incorrect or missing from the theory. However, the phenomenon of observations contradicting theory is routine in science; that a theory has been contradicted by observation does not in any sense invalidate a discipline as unscientific or "scientistic."

Hayek, however, does not conclude that the simplistic correlation between employment and aggregate demand is incomplete; he makes the much stronger assertion that it is "fundamentally false [emphasis added]." He must mean that there is no actual causal relationship whatsoever between aggregate demand and employment; that all the observed correlations, which he admits, are either spurious or indicative of parallel or indirect causation. Yet Hayek concludes that any correlation between employment and aggregate demand is incorrect because it is measurable, and because it admits of only what proponents "regard as scientific evidence."

Hayek's alternative is that all unemployment (other than routine frictional unemployment) is fundamentally structural. Without apparent qualification, Hayek asserts that "the chief actual cause of extensive unemployment . . . [is] the existence of discrepancies between the distribution of demand among the different goods and services and the allocation of labour and other resources among the production of those outputs." Hayek, however, cannot actually prove this theory. Hayek admits he cannot offer quantitative evidence in support of his theory: "[W]hen we are asked for quantitative evidence for the particular structure of prices and wages that would be required in order to assure a smooth continuous sale of the products and services offered, we must admit that we have no such information." Hayek continues, "[W]e can never produce statistical information which would show how much the prevailing prices and wages deviate from those which would secure a continuous sale of the current supply of labour [emphasis original]." His single attempt is to declare the theory would be proven false "if, with a constant money supply, a general increase of wages did not lead to unemployment." This account, however, is obviously inadequate either to support his own theory or to challenge an account linking aggregate demand to employment, because by a "constant money supply," Hayek is holding demand constant a priori. Where Hayek does not misunderstand science, he directly rejects it.

Hayek's only "intellectual" support for both his criticism of the scientific method as used by economics and the social sciences is not just an assertion of "a mistaken conception of the proper scientific procedure"; it is essentially "theological," in that it relies explicitly on revealed truths that cannot be contradicted by experience. Hayek asserts, "We know: of course, with regard to the market and similar social structures, a great many facts which we cannot measure and on which indeed we have only some very imprecise and general information." This assertion makes no sense: if have only "imprecise and general information," then we can know these supposed facts only by revelation, not by any sort of scientific inquiry. Indeed, Hayek admits that "the effects of these facts in any particular instance cannot be confirmed by quantitative evidence." These supposed facts cannot be observed directly, their effects cannot be observed directly, and yet they are still, according to Hayek, actual facts. Hayek's justification of his own theory is thin to the point of nonexistence: "few . . . will question the validity of the factual assumptions [!], or the logical correctness of the conclusions drawn from them." Hayek engages here in a wholesale repudiation not just of positivism, but of scientific knowledge in general.

Everything about Hayek's reasoning directly mimics the arguments of creationists and pseudoscientists. Who are you going to trust, Hayek asks, your preconceptions and prejudices, or your lying eyes? That Hayek received a Nobel Prize in economics is as absurd as it would be to award the Nobel Prize in physics to Deepak Chopra; that Hayek is even mentioned in a university curriculum about economics is a disgrace to the discipline, not because of his ideology (everything is ideological) but because of his contempt for and dismissal of the scientific method.

Thursday, October 21, 2010

Why philosophy is important

I've criticized and condemned professional philosophers here rather stridently. But why should I do so? Religion is one thing — religion is egregiously harmful — but why would I criticize a class of people for wasting their own time? Nobody has to read philosophy, and philosophy doesn't have anything like the social and political privilege of religion. I don't spend any time criticizing postmodernist literary criticism, even though I'm convinced (by Frederick Crews, whose meta-criticism I highly recommend) that it's just as much a bullshit waste of time as philosophy. To some degree, it's just personal: I invested a lot of my own time and energy into studying philosophy, and I'm peeved that this investment didn't pan out.

But there's a deeper reason.

I really do think philosophy is important, and I think professional philosophers are not only letting us down, but actively making it more difficult for honest practitioners to fill in the gap.

Nobody's wrong all the time, and I completely agree with Ayn Rand when she says,
A philosophic system is an integrated view of existence. As a human being, you have no choice about the fact that you need a philosophy. Your only choice is whether you define your philosophy by a conscious, rational, disciplined process of thought and scrupulously logical deliberation -- or let your subconscious accumulate a junk heap of unwarranted conclusions, false generalizations, undefined contradictions, undigested slogans, unidentified wishes, doubts and fears, thrown together by chance, but integrated by your subconscious into a kind of mongrel philosophy and fused into a single, solid weight: self-doubt, like a ball and chain in the place where your mind's wings should have grown.

— Ayn Rand, Philosophy: Who Needs It
Of course, I don't agree with the philosophy she constructed — to a large extent I suspect it rests on, rather than replaces, her subconscious prejudices — but she undeniably displays the virtue of giving explicit voice to those prejudices; she brings them out where we can see them and discuss them directly.

I stumbled on this article on scientific ignorance, which brought the topic to mind. The problem is not, in my opinion, that people are ignorant of particular scientific (or economic) facts. There's too much to know in this world. Not even the most subtle, inquiring and flexible mind can learn even the basics of every scientific and intellectual discipline, even those significantly affected by her own actions and choices. What everyone needs is a general framework for dealing with the massive amount of information available in the modern world. And a general framework is nothing more or less than a philosophy. Now, more than ever, we need to develop philosophical tools to operate in a complicated modern society.

Similarly, I recently had a conversation with my economics instructor. He dislikes Krugman because he believes Krugman spends too much time criticizing Bush fils. And not so much because he necessarily thinks Krugman is mistaken (like most college-level faculty, he's sensibly reticent about his own politics) but rather because he thinks Krugman as an economist ought to take a more neutral tone. People pick someone to intellectually follow, he asserts, therefore intellectual leaders have a duty of objectivity and neutrality. My response is that it's a Bad Idea for anyone to follow anyone else; there are people we generally agree with, but we should read everything critically. His rejoinder, which of course I had to admit, was that people actually do more-or-less uncritically follow intellectual leaders.

Still, I think my point is valuable at a higher level. Although people do in fact follow others, they shouldn't do so. More importantly, I strongly suspect that the reason people follow others is not that they cannot think critically, but that they have been socially habituated to thinking uncritically.

Critical thinking does not seem that difficult; it doesn't seem to require exceptional intellectual power. I hold myself up as an anecdotal example: I have only average (or slightly below-average) "raw" intellectual power, both memory and processing speed. I pride myself, however, on my ability to have an intelligent conversation with an expert in almost any intellectual discipline, from quantum mechanics to postmodernist literary criticism. The "secret" is to focus on the methodology and let the expert supply the facts and background. There really is only one good methodology: the scientific method. Understand this one method, and you can follow the reasoning of any expert in any discipline... at least any scientific discipline.

It's really easy: ask an expert to tell you a story; when she gets to something that doesn't seem to make sense, ask her: how do you know? If you're dealing with a real scientist, her eyes will light up and she'll really get interested, and she'll tell you how she knows, and you'll understand her explanation. It'll always be in exactly this form: we looked at something; if the intuitive explanation were correct we would have seen that, but instead we saw this.

More importantly, the focus on method allows anyone to detect egregious bullshit in any discipline. You don't actually need the facts and the background; look at the method: if a practitioner is not using the scientific method to come to his conclusion, then he's bullshitting you. (Which is of course not to say that the conclusion is definitely false.) And if practitioners of some discipline generally fail to use the scientific method, then the entire discipline is bullshit. Ask the practitioner: how do you know? He'll become uninterested or even hostile. He'll tell you, "it's complicated," or he'll start spouting some incomprehensible mumbo jumbo.

It's not trivial to understand the scientific method; you have to do your own intellectual work there. If you don't thoroughly understand the method, a skillful bullshit artist can slide his bullshit into the gaps in your understanding. (More precisely, different bullshit artists hide their bullshit in various gaps; if those gaps match your own, you'll fail to see that particular artist's bullshit.) But it's not that hard; the primary difficulty seems to be cognitive dissonance caused by applying the scientific method to one's subconscious prejudices.

And that, I think, is what philosophy ought to do. An entire class of academics — the scientific academy — has had considerable success eliminating bullshit from its individual disciplines. I don't see any compelling political or intellectual reason why the philosophers cannot do the same in general. I think I have outlined good reasons why they should do so: the scientific method is enormously useful, members of a complex society require a general method of separating sense from bullshit, and the concept of a general way of thinking is dead-center in the domain of philosophy. We might relegate exploring alternative methodologies to a subset of philosophy, but we need someone — and who better than philosophers — to confer substantial academic privilege on the scientific method in general, and to exercise that privilege toward the general elimination of bullshit.

Academic philosophers do not, on the whole, seem to do so, and even those I think are clued in seem to passively accept the really absurd level of egregious bullshit in their profession (on the ground, I think, that we cannot be absolutely certain the scientific method is really universally applicable). Professional philosophers are, of course, free to do as they please, but until they give us something better than the irrelevant and mind-numbing bullshit they presently supply, they will not have my respect or admiration, and I hope you will withhold your own.

Thursday, July 29, 2010

Science and metaphysics

It is uncontroversial — or at least correct — to note that scientific naturalism requires some metaphysical structure. It is not the case, however, that the specific idea of causality is part of that metaphysical structure. And not only is induction not part of the metaphysical structure of science, it is not even a valid inference rule in scientific naturalism.

Popper* gives us a useful definition of "metaphysical" in the Logic of Scientific Discovery. A metaphysical statement is a meaningful statement that is not in principle falsifiable by experience. Not all unfalsifiable statements are metaphysical — some are simply meaningless — but all metaphysical statements are, by definition, unfalsifiable. Note that this definition is itself metaphysical: the definition is (or at least appears to be) meaningful, but there is in principle no empirical observation we could make that could falsify it.

*I invoke Popper here not to establish authority but to give credit.

Another example of a metaphysical system is the set of rules that define the game of chess. There is no empirical observation that could, for example, falsify the rule that bishops must move diagonally in a straight line. If we observe a player move her bishop horizontally, we can conclude only that either she has made an error or she is not playing chess. (Note that the statement that "human beings consider chess to constitute thus-and-such rules" is a scientific statement: we can observe how human beings define chess, and in principle falsify the statement.)

Popper departs here from the Logical Positivists, who assert that all statements neither verifiable nor falsifiable by experience are not meaningful in any sense. Popper, in contrast, admits that unfalsifiable statements can be meaningful.

Popper departs as well from a common theme in philosophy, the theme of metaphysics as a synonym for ontology. In his demarcation criterion, Popper establishes a metaphysical "rule" of scientific naturalism: unfalsifiable statements are ontologically meaningless. If a statement is empirically unfalsifiable, it is for that reason categorically not a statement about the world. If it can be charitably interpreted only as looking like a statement about the world, then it is nonsense — "not even wrong" — having at best only the appearance of meaning. This principle does not deny all meaning of unfalsifiable statements, only a specific kind of meaning.

In a similar sense, the statement, "The bisectors of two angles of a triangle intersect inside the triangle," is a meaningless statement of Euclidean geometry. It's not true, it's not false. Specifically, the word "inside" is a term without referent anywhere in Euclid's axioms. We have to create a different context — e.g. analytic geometry — to make the statement meaningful and true.
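To make the analogy concrete, here is a minimal sketch of the richer context at work; the triangle coordinates and the barycentric test for "inside" are my own illustration, not part of the original argument. In analytic geometry the intersection of the angle bisectors (the incenter) can be computed, and the claim that it lies inside the triangle can actually be checked.

    import math

    # Illustrative only: in analytic geometry, "inside" becomes definable
    # (same-sign signed areas), and the bisector statement becomes meaningful and true.
    def incenter(A, B, C):
        """Intersection of the angle bisectors, weighted by the opposite side lengths."""
        a, b, c = math.dist(B, C), math.dist(A, C), math.dist(A, B)
        s = a + b + c
        return ((a * A[0] + b * B[0] + c * C[0]) / s,
                (a * A[1] + b * B[1] + c * C[1]) / s)

    def is_inside(P, A, B, C):
        """P is inside triangle ABC iff the signed areas of PAB, PBC, PCA share a sign."""
        area = lambda O, U, V: (U[0]-O[0])*(V[1]-O[1]) - (U[1]-O[1])*(V[0]-O[0])
        d1, d2, d3 = area(P, A, B), area(P, B, C), area(P, C, A)
        return (d1 > 0 and d2 > 0 and d3 > 0) or (d1 < 0 and d2 < 0 and d3 < 0)

    A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
    I = incenter(A, B, C)
    print(I, is_inside(I, A, B, C))   # prints the incenter and True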

Thus scientific naturalism — being itself metaphysical — is not a statement about the world. It is, in essence, a language game we play. One is free to play any language game one chooses, including religious language games and the language game of calling religious people jackasses whose views on reality and morality are at best ridiculous and at worst malevolent.

Popper's construction gives us a metaphysical framework to rigorously discuss meaningful ontological statements — i.e. statements about the world — that are not directly empirically observable. We cannot, as Hume noted, observe causality: all we can observe is that one event usually or always follows another in time. But we can falsify a causal hypothesis: We can hypothesize that event X causes event Y, i.e. that event Y will always follow event X. If we were ever to empirically observe that event Y did not follow event X, our hypothesis would be proven false; we must change something: the hypothesis itself or something in its theoretical framework.

Scientific naturalism does not deny the meaning or truth of statements that in a sense transcend empirical observation, i.e. statements whose truth or falsity we cannot directly determine by observation. Scientific naturalism not only admits statements that "transcend" empirical observation, but gives us a rigorous way of determining which transcendent statements are meaningful and a rigorous way of at least rejecting meaningful empirically transcendent statements as definitely false.

Of course, scientific naturalism does deny the meaning of statements that transcend empirical observation in a different sense, i.e. statements interpreted as about the world that cannot in principle be falsified by empirical observation.

Intelligent Design is an excellent example. At first, to their credit, cdesign proponentsists ID advocates proposed empirically falsifiable statements: there were structures — the bacterial flagellum, for example, or blood clotting mechanisms* — that could not have evolved (except perhaps through wildly improbable coincidence) through the unintelligent, purposeless and intentionless mechanisms of uncorrelated heritable variation and natural selection. However, as the candidate structures have been shown to have a plausible evolutionary history, ID advocates have retreated to unfalsifiability: perhaps there is an intelligent designer whose work cannot be empirically distinguished in any way from the work of unintelligent mechanisms. To the scientific naturalist, such a statement is not just outside the boundaries of science, it is outside the boundaries of meaning. It cannot be a statement about the world, it is not even wrong, it is no more meaningful than the assertion that all gnorts are kerfibble.

*The triviality of the proposed structures is itself suspicious.

Scientific naturalism excludes some statements as meaningless, statements that appear to have meaning, that are grammatically correct, that do indeed activate our minds in interesting and complicated ways. Perhaps it's the case that scientific naturalism is simply limited, in the same sense that Euclidean geometry is limited and cannot discuss concepts such as "inside" or "outside". There's no way to prove that scientific naturalism is not limited, that statements rejected by scientific naturalism cannot have meaning and truth in some other system.

The best we can say — and it's pretty good — is that scientific naturalism has in a couple of centuries given us a profound understanding of the physical universe from the cosmological to the subatomic, a technological civilization that can feed, clothe and house more than six billion people and has at least the potential for real humanistic justice and universal prosperity, and is beginning to crack the mysteries of consciousness and human behavior. In contrast, after more than two millennia religion has given us nothing but mystical mumbo-jumbo, ridiculous self-serving and self-aggrandizing fairy tales, repression, oppression and the near-constant support of even the most monstrous and abhorrent ruling classes that would maintain the privilege and status of the priesthood.

Saturday, February 13, 2010

Evidentiary and deductive reasoning

Evidentiary and deductive reasoning are two related but substantively different modes of reasoning.

Deductive reasoning is the reasoning mathematicians typically use, at least when they are creating proofs. We use deductive reasoning when we take one or more statements as axiomatic*, i.e. "true" a priori or by definition, and we serially apply a specific, finite set of mechanical inference or transformation rules to those statements, one rule at a time. A set of axioms and inference rules comprises a formal system. By definition, the theorems, i.e. any and every statement generated in this manner, regardless of the order the inference rules were applied, are also "true". Douglas Hofstadter goes into the deductive process in great detail in his book, Gödel, Escher, Bach: An Eternal Golden Braid. Simple deductive systems typically use propositional calculus or first-order logic as the inference rules, so we typically distinguish different systems by their axioms. Start with Euclid's axioms and you have plane geometry; start with Peano's axioms and you have natural arithmetic.

*We can also use an axiom schema, a rule for producing axioms. We can, however, consider an axiom schema as a simple formal system with no loss of generality.

Using deductive reasoning, I can write a simple computer program to print out true theorems of any deductive system faster than a roomful of mathematicians. The inference rules are mechanical and deterministic: each inference rule produces exactly one output for any given input. Therefore I can write a computer program that takes the first axiom, and applies the first inference rule on that axiom to generate a theorem and prints the theorem. The program then applies the second inference rule to the axiom and prints out that theorem. Once we've applied each inference rule to the first axiom, we apply each inference rule to the second axiom, and so forth. We then repeat the process of applying each inference rule to the theorems generated in the first round. If we have an infinite amount of memory (to remember all the theorems we've generated) and an infinite amount of time, we will print every theorem of the formal system.
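Here is a minimal sketch of such a brute-force theorem printer, using the MIU system from Gödel, Escher, Bach as the formal system; that choice, and the breadth-first bookkeeping, are mine rather than the post's, and rules 3 and 4 can apply at more than one position, so each possible application is enumerated separately. The point is the same: given unlimited time and memory, every theorem is eventually printed.

    from collections import deque

    # Hofstadter's MIU system: one axiom ("MI") and four mechanical inference rules.
    AXIOM = "MI"

    def apply_rules(s):
        """Yield every string derivable from s by one application of an MIU rule."""
        if s.endswith("I"):                   # Rule 1: xI -> xIU
            yield s + "U"
        if s.startswith("M"):                 # Rule 2: Mx -> Mxx
            yield s + s[1:]
        for i in range(len(s) - 2):           # Rule 3: replace any III with U
            if s[i:i + 3] == "III":
                yield s[:i] + "U" + s[i + 3:]
        for i in range(len(s) - 1):           # Rule 4: delete any UU
            if s[i:i + 2] == "UU":
                yield s[:i] + s[i + 2:]

    def print_theorems(limit=20):
        """Breadth-first enumeration: apply every rule to every theorem generated so far."""
        seen, queue, printed = {AXIOM}, deque([AXIOM]), 0
        while queue and printed < limit:
            theorem = queue.popleft()
            print(theorem)
            printed += 1
            for t in apply_rules(theorem):
                if t not in seen:
                    seen.add(t)
                    queue.append(t)

    print_theorems()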

But of course we don't have infinite memory and time. In fact, with this brute-force method we will quickly exhaust even a universe-scale computer before we ever get to an "interesting" theorem, such as the theorem of arithmetic that there are infinitely many prime numbers. We might never even get to "1+1=2"! Cleverness in deductive reasoning consists of finding the chain of inference rules that leads to "interesting" theorems. (Indeed two extremely clever people, Alfred North Whitehead and Bertrand Russell, require 362 pages to lay the groundwork to prove that 1+1=2, and do not complete the proof until 86 pages into the second volume. We would require Knuth notation to describe the number of universes required to find this proof by brute force.)

The deductive method poses some deep and interesting philosophical problems, but if we use simple enough inference rules, we always know with absolute certainty that our theorems are "true"... or at least they are as "true" as our axioms. (Philosophers typically more-or-less understand and use first-order logic, which is known to be consistent, and known to be insufficiently powerful to express all "interesting" conjectures. Mathematicians, I suspect, roll their eyes in tolerant amusement when philosophers get all excited about the self-referential weirdnesses in more powerful systems.)

But we don't always know, or cannot arbitrarily specify, a set of axioms and inference rules; all we know are the "theorems". This is basically the situation we're in regarding our experience: our experiences are like theorems, and our goal is to discover the inference rules (basic and abstract natural laws) and/or the starting premises (what happened in the past) that connect these experiences. In these cases, because we do not have well-defined and pre-specified axioms and inference rules, we must use evidentiary reasoning. The experiences or "theorems" are the evidence, and we want to discover the axioms (or at least other theorems) and inference rules that connect and explain that evidence.

(Philosophers made a valiant effort to put science on a purely deductive footing with Naive Empiricism (a.k.a. Logical Positivism): our observations are axioms, we use the "universal" a priori rules of logic as our inference rules, and attempt to deduce the underlying natural laws and earlier conditions using this formal system. Unfortunately, it didn't work, for a lot of reasons.)

We still use deduction in evidentiary reasoning, because we want to express the connections and explanations with the same sort of mechanistic, deterministic rigor that characterizes deductive reasoning. But in evidentiary reasoning, deduction is only a part of the process; it's not helpful to say that the deductive theorems are just as "true" as the axioms, because we're in doubt about the axioms and inference rules themselves.

We find it convenient to separate evidentiary reasoning into two primary modes. The first mode is to discover inference rules. A convenient and efficient way to discover inference rules is to use experimental science: very precisely observe (or experience) what's "true" at one point in time, wait, then observe what's true a little later, and propose inference rules ("laws of nature") that would rigorously explain the transformation. The controlled experiment refines this process even further, since it's very difficult to actually observe everything that's true at any point in time. Instead we create two situations that are as alike as possible in all but one element, and then a little later observe what's true about those situations, and propose inference rules to rigorously explain the difference in the outcomes in terms of the difference in the initial conditions.
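A toy simulation of the controlled-experiment mode, with a made-up "law of nature" baked in; the fertilizer example and its numbers are mine, purely for illustration. Two sets of situations are alike in all but one element, we observe them a little later, and the proposed inference rule is read off from the difference in outcomes.

    import random

    random.seed(0)

    def grow(gets_fertilizer):
        """The hidden 'law of nature' the experimenter is trying to discover."""
        return 10.0 + (3.0 if gets_fertilizer else 0.0) + random.gauss(0, 0.5)

    # Two situations as alike as possible, differing in exactly one element.
    control   = [grow(False) for _ in range(50)]
    treatment = [grow(True)  for _ in range(50)]

    # Proposed inference rule: explain the difference in outcomes by the one
    # difference in initial conditions.
    effect = sum(treatment) / len(treatment) - sum(control) / len(control)
    print(round(effect, 2))   # close to the hidden value of 3.0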

The second mode is to discover the initial or preceding conditions when we can observe only the resulting conditions. A convenient and efficient way to discover preceding conditions is historical science: take the inference rules we have discovered from experimental science and propose initial conditions that those inference rules would have transformed into what we presently observe.

Evidentiary reasoning appears much more difficult than deductive reasoning, at least to do consciously. In every literate culture, we see the development of mathematics follow almost instantly on the heels of literacy. It took Western European culture, however, nearly two thousand years of literacy and mathematics to develop and codify evidentiary reasoning, and (AFAIK) no other culture independently developed and codified evidentiary reasoning and used it on a large scale.

On the other hand, perhaps paradoxically, evidentiary reasoning does not require consciousness or codification. Biological evolution itself is an "evidentiary" process: we try out different "formal systems" (biological arrangements of brains) at random; organisms with brains that fail to accurately model reality do not survive to reproduce and are selected against.

With simple enough inference rules (which do give us considerable power) we can be rigorously certain not only that all of our deductions do correctly follow from our axioms, but also that our inference rules never produce a contradiction (eliminating half the possible statements as non-theorems, statements that cannot be generated from the axioms and inference rules) and that all possible statements are definitely theorems or non-theorems. Philosophy typically uses propositional calculus (provably consistent and complete) or first-order logic (consistent and semicomplete). Higher-order logic, however, confuses most philosophers.

Evidentiary reasoning also does not give us the kind of confidence we can get from deductive reasoning. We have only a finite amount of evidence (our actual observations and experience), but there are an infinite number of possible formal systems that would account for that evidence (i.e. the facts in evidence are theorems of the formal system). Furthermore, it might be the case that there is no formal system that accounts for the evidence. It might be the case, for example, that the universe is infinite and truly random, in which case a set of observations and experiences that looks like the workings of every underlying set of natural laws modeled by a formal system will occur at one point or another.

Therefore we have to apply additional formal criteria to evidentiary reasoning for it to have any utility. The additional criteria are simplicity and falsifiability. The criterion of simplicity specifies that if more than one formal system accounts for the evidence, we prefer the formal system with the fewest axioms and inference rules. (A corollary of the simplicity criterion is that two formal systems with the same theorems are equivalent.) But the simplicity criterion isn't enough, otherwise we would prefer the simplest "degenerate" explanation that all statements are true: obviously all statements about evidence follow from this explanation. The criterion of falsifiability specifies that the only interesting formal systems are those in which statements contradicting true statements about observation or experience are non-theorems.
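Here is a loose, concrete stand-in for the simplicity criterion; it is my own illustration, not the post's formalism. The candidate "systems" are polynomial models of increasing complexity, "accounting for the evidence" is fitting it within a tolerance, and we prefer the simplest candidate that succeeds. Each candidate forbids some possible data, so it also satisfies the falsifiability criterion; a "degenerate" model that fit anything would forbid nothing.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 12)
    evidence = 2.0 * x + 1.0 + rng.normal(0, 0.01, x.size)   # secretly linear data

    for degree in range(6):                       # complexity = number of coefficients
        coeffs = np.polyfit(x, evidence, degree)
        residual = np.max(np.abs(np.polyval(coeffs, x) - evidence))
        fits = residual < 0.05                    # "the evidence is a theorem of this system"
        print(degree, "fits" if fits else "fails")
        if fits:
            break                                 # simplicity: stop at the fewest "axioms"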

Note that simplicity is not a criterion of deductive reasoning: the most complicated proofs in the world (such as those of the four color theorem or Fermat's last theorem) are just as good as the most elegant, compact proof. The criterion of falsifiability has an analog in the deductive criterion of non-contradiction, but it's more trivial: it specifies that exactly half of all decidable statements are theorems and the other half non-theorems (i.e. if X is a theorem, then not-X is a non-theorem, and vice-versa. There are some interesting exceptions to this rule, sadly beyond the scope of this post.)

Although related, deductive and evidentiary reasoning work in "opposite" directions. Deduction asks the question: what interesting statements are theorems of this formal system? Evidentiary reasoning asks the opposite question: in what formal system are these interesting statements theorems?

Monday, February 01, 2010

What is Naturalism?

Larry Moran inquires as to the difference between methodological naturalism and philosophical (or metaphysical) naturalism.

The biggest issue, however, is that we don't have a rigorous, precise and, most of all, consistent definition of any sort of "naturalism", methodological, intrinsic, qualified or otherwise.

There are at least three definitions I'm aware of. (I've applied entirely ad hoc, arbitrary labels.)
  1. Nonteleological Naturalism: the universe is fundamentally not intelligent, sentient or conscious.
  2. Internal Naturalism: all scientific explanations should reference causes "inside" the universe.
  3. Epistemic Naturalism: science can only establish empirically falsifiable explanations by trying and failing to falsify them.
It's tough to pin them down, but it's my understanding that theists typically define "supernaturalism" as contravening all three definitions: God is a teleological being, outside the (physical) universe, who cannot be explained or described using empirical falsification.

Nonteleological Naturalism is fuzzy; we don't understand teleology very well. Furthermore, if we provisionally define "teleology" as "the sort of thing that human beings do (whatever that might happen to be)" it's fairly obvious that teleology is operative within the physical universe. It is both unjustified and unnecessary to assume the universe is fundamentally nonteleological; we can instead draw conclusions about fundamental teleology or nonteleology from empirical evidence.

Internal Naturalism is in one sense incoherent: one definition of "universe" is "everything that exists"; if a god exists, it is by definition part of the universe or the universe itself.

Adding a qualifier and talking about just the "physical" universe doesn't help much: what precisely does one mean by "physical" in this context? Furthermore, even if we could adequately define "physical", an interventionist god would by definition have to leave some sort of physical "fingerprint", which would then be by definition within the domain of science (and from which we might draw conclusions about the "part" of God outside the "physical" universe.)

This leaves us (assuming there are no alternative definitions) with Epistemic Naturalism: science can deal only with falsifiable theories (note that a statement that is confirmable is also falsifiable).

This definition has the added advantage that there seem to be statements — even quite prosaic statements — that are semantically propositional (i.e. truth-apt, could be true or false), that talk about existence, and that are not falsifiable: statements about "the matrix", the assertion that not the real Mona Lisa but a perfect replica hangs in The Louvre, etc. The existence of unfalsifiable ontological propositions injects "philosophical life" into the discussion: we have real, constructable, apparently comprehensible sentences about the world that at least seem to have truth value, and which science seems to exclude a priori from consideration.

Wednesday, June 24, 2009

The Falsifiability of the Labor Theory of Value

I'm reading Studies in Mutualist Political Economy by Kevin A. Carson, recently presented by db0. I'm about 10% of the way through (p 75 of 822). The author is kind of long-winded and drops a lot of names without reproducing or even summarizing arguments, but there's some good stuff there.

The first part of the book is a defense of the labor theory of (exchange) value: First, that labor creates economic value, and second, that the exchange value of a commodity is the amount of socially necessary labor time to create that commodity. This theory stands in opposition to the subjective theory of value, that the exchange value of a commodity is determined by subjective use-value, and the scarcity theory of value, that the exchange value of a commodity is determined by the scarcity of its supply relative to the demand. There's also the marginal theory of value, but that's a topic for another post.

The subjective theory of value seems hard to make falsifiable: how do you quantify the subjective perception of value independently of the actual exchange price? The scarcity theory applies even in theory only when supply is inelastic, but it's fairly obvious that the production of most commodities — food, oil, automobiles, DVD players, computers, buildings, etc. — is elastic; the most significant inelastic item of value is real estate.

However, Carson makes an enormous philosophical error:
As Mises wrote, the variables of the market are so many that no laws can be induced from mere observation, without the aid of valid starting assumptions established on an a priori basis. ...

If an adequate theory of value requires a high degree of predictive value concerning concrete prices, then both the labor theory and subjective theory fall apart equally. On the other hand, if value theory in the sense of an empirical rule for predicting concrete prices is impossible because the variables are too many, then both theories are likewise on equally untenable ground. But like Mises' subjective theory of value, our version of the labor theory is a set of a priori axioms and the deductions from them, which can be used to more usefully interpret market data after the fact. [emphasis original] [pp. 75-76]
It's just bullshit to be concerned with the after-the-fact interpretive value of a theory.

It is a fallacy of naive empiricism that one must "induce" laws from mere observation. Scientific laws are not a priori assumptions but hypotheses from which we derive empirical predictions; if the predictions match the observations, the theory is supported; if not, it must be somehow revised. Empirical observation is the context of justification, not the context of discovery.

The Labor Theory of Value makes several empirical predictions about concrete prices. Most importantly, there should be a substantial correlation between measurable socially necessary labor time and actual prices in any economy. Furthermore, the correlation should become stronger when we control for independently determinable variables representing externalities, such as physical and socially constructed inelasticity. (We could also independently determine differences in external variables when two unrelated commodities had similar labor times but very different prices, or similar prices but different labor times.)
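Here is a sketch of the kind of test described above, run on made-up illustrative data; the variable names, functional form, and numbers are mine, not Carson's or the post's. It computes the raw correlation between labor time and price, then a regression that controls for a stand-in externality.

    import numpy as np

    rng = np.random.default_rng(1)
    labor_time   = rng.uniform(1, 100, 500)    # socially necessary labor hours (made up)
    inelasticity = rng.uniform(0, 1, 500)      # stand-in for an externality
    price = 3.0 * labor_time + 40.0 * inelasticity + rng.normal(0, 10, 500)

    # Prediction 1: a substantial raw correlation between labor time and price.
    print(np.corrcoef(labor_time, price)[0, 1])

    # Prediction 2: the relationship sharpens when we control for the externality.
    X = np.column_stack([np.ones_like(labor_time), labor_time, inelasticity])
    coefs, *_ = np.linalg.lstsq(X, price, rcond=None)
    print(coefs)   # intercept, labor-time coefficient, inelasticity coefficient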

The Labor Theory of Value makes dynamic predictions as well. Assuming we were to find a correlation between labor time and price, we can then determine a general average correlation over many unrelated commodities. The Labor Theory of Value predicts that the supply of commodities with a price/time ratio lower than the general average would fall over time, and their price/time ratio would increase; likewise, the supply of commodities with a price/time ratio higher than the general average would rise over time, and their price/time ratio would decrease.

Scientists and statisticians — especially biologists and ecologists — have figured out a lot of ways to test hypotheses under conditions with a lot of interacting variables. There's simply no reason not to apply these tools and techniques to the Labor Theory of Value.

Carson does note the Positivist fallacy of the Austrian school of economics:
The Austrians have made a closely related argument: that equilibrium price is an imaginary construct that can never be observed in the real marketplace. [p. 76]
We don't need to directly observe anything for it to have scientific validity: it just has to be an ineluctable component of a theory that overall matches what we can directly observe. Furthermore, we can actually observe some sort of equilibrium: similar DVD players, tomatoes, automobiles, etc. all cost about the same from store to store and from day to day, even though there is (usually) no collusion or intentionality to keep prices stable.

Saturday, October 18, 2008

Intuition and scientific thought

[Inspired by the comments from Thoughts from a Sandwich.]

There's no question that the universe is "organized" in the sense that it's very highly likely (although not certain) that the universe is not just a collection of unconnected, random events. Nobody (besides a few radically skeptical philosophers) disputes this conclusion.

The real question is: what is the best explanation for this organization? Intuition is fine in its place, but coming up with the best explanation requires more, much more, than just intuition.

Our intuition is part of our natural cognitive apparatus, our brains viewed at an abstract level as our minds. It is the result of our past, biologically and socially evolved ways of thinking about the world, which worked under the circumstances our ancestors lived in to solve the problems that affected their survival and reproduction.

We can have some confidence in the reliability of an intuition, but only under the circumstances under which the intuition evolved and for which the intuition was subjected to selection pressure. Whenever we extend our intuition past those circumstances, we can have no confidence whatsoever that our intuition actually applies reliably to the expanded, non-historical circumstances.

We appear to have obtained the ability to consciously think scientifically by accident. Indeed the ability to consciously think at arbitrarily high levels of abstraction appears to be accidental. Conscious scientific thought is a mere five hundred years old, far too little time for biological selection pressures to have made any impact on our physical brains, and enough time for social selection to have made only a very small impact on our traditional, learned ideas.

We find something very interesting when we double-check our intuition using scientific thought: Under those circumstances where we see very strong selection pressure for accurate intuitions, the result of our intuition matches the result of scientific inquiry. For example, our intuitions about the macroscopic properties of ordinary objects (mass and size; rocks and trees) match very closely our scientific understanding of those properties. There is no metaphysical a priori reason for this correspondence; there's nothing in the definition of scientific thought that logically entails it must match any particular intuition. The correspondence is a posteriori, "after the fact". The a posteriori correspondence goes both ways; science specifically gains credibility precisely because it does match our reliably predictive day-to-day intuitions about macroscopic things.

When science fails to correspond to our intuitions, it is always the case that our intuitive predictions of what we should see fail to match what we actually do see. This "failure mode" is a priori: it follows from the definition of science, which takes what we do in fact see as an authoritative epistemic foundation.

When taken out of its original context, our intuition becomes radically unreliable. We don't need to compare our intuition against scientific thought; we need only observe: our intuition leads us to expect to see one thing, and we in fact see something radically different. Counter-intuitive findings of science are all over the place; Lewis Wolpert has written a whole book on such examples, The Unnatural Nature of Science. (Wolpert uses "unnatural" in the sense of "counter-intuitive", not "supernatural".)

And this is what I think Sagan means when he says, "But I try not to think with my gut. If I'm serious about understanding the world, thinking with anything besides my brain, as tempting as that might be, is likely to get me into trouble." [thanks, Dagood] When taken out of its original context, it is misleading to rely solely on an intuition; we must employ the more rigorous methods of conscious science. We can employ our intuition for conjectures, but we cannot rely on our intuition to just hand us the whole truth, or even in many cases anything close to the truth.

Sunday, April 13, 2008

Falsification and likelihood

A couple of years ago, Mike the Mad Biologist compared likelihood statistics to falsification (he mentions the article today).

Let me first say that Mike is absolutely correct that likelihood statistics is a more productive approach for many, many scientific problems. But Mike forgets, I think, that philosophy and working science have different goals and evaluate methodologies in different ways.

Regardless of what Karl Popper might have actually intended, he was not a professional scientist. Any scientist would be foolish, I think, to take Popper's writing as a guide to how to conduct an actual practical scientific inquiry on a specific question. Philosophers are usually — when they are doing anything productive at all — interested in creating the weakest possible definition for a term, the minimal definition.

I'm not a professional statistician, and I don't know that it's always possible to prove falsifiability from likelihood statistics (but I suspect that's the case). However, I can say that by Popper's definition every scientific inquiry using likelihood statistics is in principle also falsifiable.

Popper's falsifiability criterion is not necessarily the best way to do science. It is best viewed as the simplest possible way of distinguishing science from non-science. The falsifiability criterion has the advantage of not itself depending on any statistical assumptions. (Statistical assumptions are required, of course, when the hypothesis itself is statistical.) All that is necessary is that you have a way, some way, any way, of determining if the hypothesis were false.
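As a toy illustration of the compatibility (my example, not Mike's or Popper's): a likelihood-based inquiry into whether a coin is fair still satisfies the minimal falsifiability criterion, because the rejection rule is stated in advance, before looking at the data.

    import math

    def log_likelihood(p, heads, n):
        return heads * math.log(p) + (n - heads) * math.log(1 - p)

    n, heads = 1000, 560
    fair, best_fit = 0.5, heads / n    # H0: fair coin vs. the best-fitting alternative
    log_lr = log_likelihood(best_fit, heads, n) - log_likelihood(fair, heads, n)

    # Pre-stated falsification rule (Wilks): under H0, 2 * log-likelihood-ratio is
    # approximately chi-square with 1 degree of freedom; 6.63 is the 1% cutoff.
    print(2 * log_lr, "reject H0" if 2 * log_lr > 6.63 else "retain H0")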

Monday, March 17, 2008

Zombie Feynman

Zombie Feynman says, "'Ideas are tested by experiment.' That is the core of science. Everything else is bookkeeping."

Friday, February 22, 2008

Generalizations and universals

Evidentiary arguments rely on the elevation of generalizations (a lot of X are A) to universals (all X are A). This technique, though, is analytically fallible, and we employ the technique out of desperation.

If there are any universals at all, then the elevation of a generalization to universal can sometimes be true even if it is not logically valid (and thus always true). Since all universals are generalizations, if there are some universals, then some generalizations are universals.

We can't be certain that any particular generalization really is a universal, but we can be certain that some generalizations are not universals by discovering a counterexample; a generalization that really is a universal will have no counterexamples. Hence Popper's criterion of falsifiability. If it is logically impossible to observe a counterexample for some generalization, then the generalization is not a generalization; it's an analytical statement.

We can observe counterexamples by the same means that we observe examples that do fit the generalization: Just making the generalization in the first place means that we can directly determine that some thing is X, and we can directly and independently determine that an X is in fact A.

For example, if a "crow" is defined to be "a black bird", then by definition we cannot find a crow that is not black; by virtue of being non-black, the being, whatever it is, is by definition not a crow. Since we cannot determine whether a crow is black independently of determining that some thing is a crow, it's not a generalization to state that "all crows are black". If it's not a generalization in the first place, we can't reason from a generalization to a universal.
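A tiny sketch of both points (my own illustration, not part of the original post): a single counterexample defeats the elevation of a generalization to a universal, and the check is only meaningful because whether a thing is a crow is determined independently of whether it is black.

    observations = [
        {"species": "crow",  "color": "black"},
        {"species": "raven", "color": "black"},
        {"species": "crow",  "color": "white"},   # one albino crow defeats the universal
    ]

    def counterexample(is_x, is_a, observations):
        """Return the first observation that is X but not A, if any."""
        return next((o for o in observations if is_x(o) and not is_a(o)), None)

    print(counterexample(lambda o: o["species"] == "crow",
                         lambda o: o["color"] == "black",
                         observations))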

A subtler way of defeating a generalization is to hold it true "come what may" and always adjust other statements around any observations. This technique is not entirely illegitimate, but it's more honest and clear to explicitly phrase such a statement as a definition. All the statements that one has adjusted around the statement held true "come what may" contribute to the analytical definitions of the terms used in the statement.

To come around to our original point, the evidentiary argument for Intelligent Design relies on the generalization that:
    IDg: A lot of complex* things with an independently determinable origin were intelligently designed (by human beings)
and therefore we conclude the universal
    IDu: All complex things — even those with an origin that we cannot independently determine — were intelligently designed
*With "complex" standing in for a lot of specific features that human artifacts uncontroversially share with terrestrial life forms.

(Intelligent Design looks moderately acceptable, but only so far; it won't be until the next post, when I explore more features of reasoning from generalizations to universals, that ID starts to fail catastrophically.)

We do want to avoid specious analyticity. If we define "intelligent design" as "any process that produces complexity" then we aren't saying anything other than "complex things are the result of some process." This trivial generalization is not at all a matter of controversy. Worse yet, by defining "intelligent design" in such a manner, we practically beg the reader to import connotations (intention, memory, desire, will) that have been explicitly excluded from the definition. (Only a lawyer, theologian or philosopher could love the fine line between actually lying and intentionally leading the reader to a false conclusion.)

We also want to avoid holding the generalization true "come what may"; we want to avoid defining all our other terms around holding as true the statement "All complex things are the result of intelligent design," or, worse yet, accepting new statements willy-nilly for the only reason that they support the truth of the statement.

Wednesday, February 20, 2008

Evidentiary arguments

In my previous post, I mentioned the (apparently) evidentiary argument for the intelligent design of terrestrial life. I promised to look at the fundamental flaws of this argument, but I have to digress into why we use evidentiary arguments, the inherent logical flaw in evidentiary arguments themselves, and why it's pragmatically useful to work around that logical flaw rather than discard the evidentiary mode of argumentation altogether. We can then look at whether it's even possible to work around the flaw, and, if possible, the techniques that we can employ to do so.

Then we can look at how the (apparently) evidentiary argument for intelligent design fails to work around that flaw.

The fundamental flaw of all evidentiary arguments is that they elevate, at some level, a generality to a universal: From the basis that a lot of X are A we conclude that all X are A. Because some generalities are not universals, this feature of evidentiary arguments gives philosophers — even atheist philosophers such as Hume — conniption fits. Because, yes, it's not a universally valid logical operation. We can't be logically certain that the conclusion of an evidentiary argument is true — even given accurate evidence — in the same way we can be logically certain of the conclusions of deductive arguments given true premises. Since it is analytically false that some X are A entails that all X are A, we can confidently conclude that evidentiary arguments are analytically fallible: We don't even need to look at specific counterexamples to make this determination.

Since evidentiary arguments embed a principle which is not universally logically valid, no one would choose to use them except out of desperation. But we are indeed desperate. While we can be absolutely certain that logically valid operations always draw true conclusions from true premises, we have no way at all of having any idea whatsoever that our premises are indeed true. Premises are by definition not themselves logically deduced. The only reason to employ evidentiary arguments is that they are (or seem) pragmatically effective at making predictions about reality (or our subjective perceptions of reality) in a way that purely deductive arguments completely and totally fail.

If some philosopher or logician can find a way to give us interesting and pragmatically useful ways of predicting reality with deductive certainty, I'll be the first to nominate her for philosophy's equivalent of the Nobel Prize. Until then, we have to do the best we can with what we have, and try to find a way — in the absence of certainty — to make evidentiary arguments as confident as we can manage.

Tuesday, January 15, 2008

Metaphysical objectivism

In Bruce's response to my essay, he appears to argue that metaphysical objectivism (a.k.a. metaphysical realism) is an intrinsic part of science.
When Popper talks about scientific discovery, he implies that truth is out there as yet independent of observation. ...

I’ve (albeit casually) quizzed scientists on their belief in an objective reality independent of the observer and its importance to their work as scientists. Generally their views are similar to what I’ve attributed to Popper. ...

Heck, popular skeptics... aren’t exactly in denial of metaphysical objectivism either. Michael Shermer of the Skeptics Society (not speaking on their behalf) has opined his view of an objective metaphysical reality (as oddly un-Kantian as this may seem for a skeptic). ...
Even taking Bruce at his word about Shermer's views (he doesn't cite any source, and his latest essay shows his... relaxed... standards about careful reading) should we conclude that scientists and skeptics are indeed committed to metaphysical objectivism? I argue that yes, scientists do believe in objective reality, but no, this belief isn't metaphysical in any nontrivial sense. The notions of "reality" and "objective reality" are falsifiable, and thus scientific, not metaphysical.

(Be forewarned: I unapologetically take some cheap shots at philosophy and philosophers in general along the way.)

It's not even clear that all scientists believe that their work has much to do at all with reality. In The Universe in a Nutshell Stephen Hawking opines:
[A] scientific theory is a mathematical model that describes and codifies the observations we make. A good theory will describe a large range of phenomena on the basis of a few simple postulates and will make definite predictions that can be tested.
Of course, Hawking speaks only for himself, but he's famous enough that we can conclude his views are shared by a nontrivial number of other scientists.

Conspicuous by its absence in the above quotation is any mention of reality. To the extent we can draw conclusions about Hawking's philosophical view on reality, we must conclude that scientific work has nothing at all to do with this metaphysical objective reality: science describes and codifies observations, not "objective reality".

(I suspect, however, that Hawking is being politely disingenuous here. I myself am not quite so polite: I'll say right out loud that I read this statement as a big "fuck you" to theological philosophical bullshit: "Yes, doofus, we scientists are talking about truth and reality, but three thousand years of philosophical mental masturbation have sown too much confusion about these terms; I want to avoid these idiots and do some science.")

I suspect that — Hawking notwithstanding — most scientists believe at some level in an objective reality, with existence and properties independent of the human mind, and that this objective reality can be accurately described by scientific means.

We can assume arguendo that scientists are typically unreflective about their belief in objective reality. (To the extent that they are reflective, at least publicly, one can examine their arguments directly and not appeal to mere facts of belief.) Scientists are not philosophers. The existence of objective reality seems (outside philosophical circles) uncontroversial; no scientist is going to get grant money to publish a peer-reviewed paper establishing that yes, reality does in fact exist.

Unfortunately, the word "metaphysical" and its derivatives have been deeply abused by the philosophical canon, bludgeoned — if one is not selective about its interpretation — into meaninglessness. In one sense, "metaphysical" is a synonym for "ontological" (pertaining to existence); in this sense, of course, any belief about existence and reality is metaphysical by definition. But that doesn't seem a particularly interesting definition in the current context. Other definitions are all over the place. If we're committed to taking into account everything every philosopher has ever said or implied was "metaphysical" in print, we're doomed to a hopeless morass of confusion and ambiguity.

(Such a morass seems to suit (at least some) professional philosophers just fine. They can bloviate endlessly about a fundamentally ambiguous topic, free of the danger that they will eventually reach an actual conclusion and thus be forced to attempt original work. Philosophy is all about the questions, dontcha know; not about the answers.)

The key word here is metaphysical. Are scientists' unreflective beliefs in reality specifically metaphysical? Does the fact that a belief is held without reflection establish by itself that the belief is therefore metaphysical?

Bruce's argument that scientists typically believe in objective reality without offering any specific arguments would seem — on a non-trivial, substantive definition of "metaphysical" — to be an argument from non-reflection. (Bruce might claim a trivial, vacuous, or at least obviously unproblematic definition of "metaphysical"; good for him, but even in the best case of an obviously unproblematic definition, his argument would then be a restatement of the obvious.)

If we want from philosophy something more than admiration of the philosopher's bullshitting skill, we want a meatier definition of "metaphysical" than simply "unreflective": We'd like the word to actually do some real work.

One of Popper's key points — contra the Logical Positivists — is that falsifiability is explicitly a demarcation criterion, not a criterion of meaning. Good for him: otherwise the statement, "Falsifiability distinguishes between science and metaphysics," would be unfalsifiable and therefore meaningless. If some statement is falsifiable, it is scientific; if it is not falsifiable, it is metaphysical.

(Complicating the issue somewhat is that not all unfalsifiable statements are metaphysical; some — such as "asdf ghjkl qwertyuiop" — are indeed totally meaningless. Further complicating the issue is that falsifiability is ambiguous — fatally so — regarding individual statements: an individual statement cannot, even in the most abstract principle, be decisively isolated from its theoretical and linguistic context. Some individual statements, however, can justly be called falsifiable on their own merits relative to some context; other statements seem to resist being seen as falsifiable without changing the context so radically as to be unintelligible.)

If, of course, we are interested only in what actual human scientists actually think, we need not go beyond asking them what they think and cataloging their responses; we might even draw some conclusions from those responses. But those conclusions will be conclusions about what scientists think, not about the world, at least not about the world outside scientists' minds. (And even this simplistic approach has its problems. Who, for example, counts as a "scientist"? Should we admit astrologers, chiropractors, intelligent design advocates? How about psychologists, anthropologists, and sociologists? Should Camille Paglia* be admitted on the same basis as Bob Altemeyer**?) Again, though, if one expects more from philosophy than mere lexicography or psychology, it seems possible to investigate the issue a little more deeply.

*At best a polemicist; at worst an idiot
**A terrific scientist


If we follow Popper's definition of metaphysics as "meaningful but unfalsifiable statements", then the question becomes simply: can we falsify our belief in an objective reality independent of our beliefs? If so, it's a scientific statement; if not, it's metaphysical. Much depends, however, on how we define "independent", "objective" and "reality"; in other words, the linguistic context necessary to assign meaning to the statement is not fixed, even by arbitrary linguistic convention.

If we take independence in its strongest sense as "absolutely uncorrelated" and belief in its broadest sense of "stuff that happens in our minds", then of course the idea of an independent reality is entirely metaphysical. There's nothing that can happen in our minds that can possibly falsify a proposition defined to be absolutely uncorrelated with what happens in our minds.

But of course it is precisely this sense of "independence" that scientists are, by definition, entirely unconcerned with, at least in a professional sense. (We are no more concerned with their private opinions than we are with their tastes in food; scientists might typically consume large quantities of caffeine, but it seems unproductive to make such consumption essential to the definition of science.) The sense of realism entailed by these definitions is irrelevant to science; quite contrary to the claim that science depends upon metaphysical objectivism or realism. As Hawking notes, science holds observations as foundational, and observations are, in the broadest sense, beliefs (at least when we state our perceptual experiences in words).

Clearly, some of our definitions have to be adjusted to make realism relevant to science. (Or we have to bite the bullet and say that science has nothing to do with reality... but then what does? Philosophy? Ha!)

Indeed our high-level, abstract (and perhaps naive) intuitions about reality deny this strong sense of independence: We can see and touch real things like rocks and trees. Moreover, we can actually manipulate real things: we can pick them up, put them down, break things, build things, etc. These are high-level intuitions, and therefore not a priori veridical (i.e. their content may be false), but they are authoritative: any good complete scientific theory must explain why we believe as we do. Whatever our intuitions about ordinary prosaic reality are, they do not include the strongest sense of "independence" and the broadest sense of "belief".

Can we productively take different — but not so different as to render our language unintelligible — definitions? It seems uncontroversial that we can talk about "independence" in a weaker sense, as being not absolutely but rather partially uncorrelated, and we can talk about "beliefs" in a more restricted sense, as certain kinds of things that happen in our minds.

(We have to take a little detour here, and note that "reality" and "objective reality" are not synonyms. In the sense of "objective" as "outside, or independent (even weakly independent) of, the mind", to hold that "reality" and "objective reality" are synonyms is to deny that the mind is real, which is a... er... somewhat counterintuitive notion at the least. We have a trivial semantic problem with "objective" in the sense of "truth-apt": statements (in isolation or en suite) are truth-apt; it's at best vacuous and at worst meaningless to say that reality corresponds to reality. In the sense of "objective" as "determinably true", the "determinably" would prima facie deny a metaphysical meaning.)

But... if we relax, even by a little, any of our definitions, our notions of reality and objective reality become falsifiable. If our notion of reality bears any sort of relationship to (i.e. has a dependency on) our observations, our perceptions, our experiences — i.e. our beliefs — then that notion is at least falsifiable (and perhaps even verifiable) by experience.

The most minimal construction compatible with science, "reality is that which 'causes' our perception," would seem metaphysical; it's an arbitrary definition, after all. But even then, this construction would be metaphysical if and only if we asserted, without any justification in experience or belief, that it is definitely true that something actually does cause our perception. It could be the case that nothing can be found to have any sort of causal relationship (with "causal" defined non-vacuously) with our perception; it could be the case that simply listing out all our perceptual experiences is the most compact way of "thinking" about them. In that case it seems unlikely that any mind — much less a mind capable of even the illusion of linguistic communication — could possibly exist. In which case, the very presence of our minds confirms that something does indeed cause perception. (Technically, it is the null hypothesis — that there is nothing which causes perception — that is falsifiable, has been falsified, and therefore justifies belief in its inverse.)

Likewise, we can falsify notions relating to objective reality in two ways. One is by noting that some of our beliefs can be determinably false: I can be mistaken, and discover that I'm mistaken, about the Earth being flat (without substantively changing the meaning of the assertion); there are some truths that appear to be absolutely uncorrelated with some of my beliefs. The other is by observing that, in certain well-defined circumstances, other people can "read our minds" by uttering the same words we use to describe our internal experience: if my friend and I are standing next to each other looking at a tree, I can think, "I'm looking at a tree," and he says out loud, "You're looking at a tree." There are alternative explanations, but if we admit Occam's Razor — and why shouldn't we? One can hardly exclude Occam's Razor from science on either philosophical or sociological grounds — then these sorts of observations confirm and justify specifically objective reality (in all three senses listed above).

In short, that scientists — and everyone else — accept notions of reality and objective reality without (much) reflection can be explained not because they are "metaphysical" or in any sense a priori, but because they are falsifiable notions so well-established by ordinary experience that they can be reasonably accepted as true. Science does require some metaphysical assumptions (notably falsifiability as a demarcation criterion), but realism of any sort is not among them.

Sunday, January 13, 2008

Objective sensory input

At The Thinkers' Podium, Bruce claims that it is not possible to identify specifically subjective interpretations of sensory experience so that one can "subtract" them out and get at some notion of objective sensory input:
[Popper] essentially makes an argument from faith that science will objectively discover the means by which the brain interprets sensory input and thus objectify the sensory input. This is problematic.

In order for science to explain how the brain is subjectively interpreting an alleged objective, ontological reality, it has to identify a case of “distortion” by subjective interpretation (i.e. it needs an actual case to explain otherwise its explanatory power is the same as that as the ontological argument of Aquinas). A precondition of identifying an instance of subjective interpretation is to have both subjective and objective knowledge of some ontological fact.

It’s a chicken and egg scenario. Before we know how the brain subjectively treats X through observation, we need to compare subjective X to ontological X in its uninterpreted (i.e. unobserved) state. This precondition is impossible.
Besides using Randianism as an exemplar (unless one is rebutting Rand directly and exclusively, employing Randianism can always be presumed to be a straw man argument), Bruce's objection assumes premises specifically denied by scientific reasoning.

Bruce is correct: it is impossible under scientific reasoning to have positivistic, direct knowledge of "objective perception". Observations assumed a priori to have veridical content are denied — implicitly or explicitly — by the scientific method. But having this sort of observation is not, as Bruce declares, the only way to justify the belief that some aspect of perception is indeed subjectively established.

One does not need to go on at length about one's perceptual beliefs about one's toes. A more compelling and obvious example can be found in the grey square optical illusion. In this illusion, square B appears substantially lighter than square A. However, under different circumstances (such as covering up the rest of the picture except the two squares), the squares appear to be the same shade of grey. Furthermore, we can examine the output of abstract measuring devices, such as a light meter or the digital RGB values of the underlying image, which yield their own observations (our subjective impression of reading the same numbers for both squares).

Under the scientific method, as Bruce notes, the veracity of none of these observations can be assumed a priori: none can be assumed to be the "true" objective observation. Rather what we have are a collection of observations in apparent conflict. Note that all of these different modalities of observation do not yield the same sort of conflict when applied to square A and the square immediately adjacent to it in the top left (A'): all observational modalities yield "different shade". Likewise the square diagonal to A (A'') appears the same in all modalities.

To resolve this apparent conflict, we construct the simplest theoretical structure we can imagine to account for all the observations. I will leave the details to the scientists, but it seems clear even to the lay observer that the theory that our minds supply the difference in color because of the adjacent visual context is simpler than the notion that the squares change their "objective" color when the context is manipulated or when we use abstract measuring devices. Indeed, we define "objective" properties epistemically as those properties that do not change across observational modalities; we define "subjective interpretation" as those properties which differ between modalities, appearing only under direct human observation. (Furthermore, this definition entails the falsifiable hypothesis that there is some definitely identifiable computational structure in the brain, which we can identify by looking just at the neurons, that accounts for the subjective interpretation.)
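To make the "alternative modality" concrete, here is a minimal sketch, in Python using the Pillow library, of the pixel-sampling modality described above. The file name and the pixel coordinates of squares A and B are assumptions for illustration; they depend on which rendering of the illusion you happen to have.

```python
from PIL import Image  # Pillow; pip install Pillow

# Hypothetical file and coordinates -- adjust for your copy of the illusion.
IMAGE_PATH = "checker_shadow.png"
SQUARE_A = (110, 200)   # a pixel well inside square A (the "dark" square)
SQUARE_B = (180, 280)   # a pixel well inside square B (the "light" square)

def sample(img, xy, radius=5):
    """Average the RGB values in a small patch around xy."""
    pixels = [img.getpixel((xy[0] + dx, xy[1] + dy))
              for dx in range(-radius, radius + 1)
              for dy in range(-radius, radius + 1)]
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

img = Image.open(IMAGE_PATH).convert("RGB")
a, b = sample(img, SQUARE_A), sample(img, SQUARE_B)
print("square A:", a)
print("square B:", b)
# Direct visual inspection says "B is lighter than A"; this modality
# typically reports (nearly) identical values for both squares.
```

Nothing in the argument hangs on these details; the point is only that this modality reports "same shade" for A and B even while direct inspection insists otherwise.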

The above evaluation highlights some of the objections and metaphysical "bullets" we have to bite to commit to confidence in the scientific method.

First of all, the notion of theory-laden observations (a light meter is a fairly complicated piece of equipment, and an examination of the digital pixel values of the binary version of the image rests on how computer monitors work) is not itself problematic. Even if such devices were the only way to get a different observational modality, what requires explanation is that A and B appear the same shade under these alternative modalities, whereas A and A' appear different under these modalities (and A and A'' appear the same on both direct observation and alternative modalities). Since the modalities are the same in all cases, they can be subtracted out without knowing their precise details.

More importantly, we cannot know whether some new fact will change our theory, and we cannot know if some new radically different theory will be simpler than our original theory. Not only that, we cannot measure or predict even the likelihood or probability that some new fact or theory will completely change our minds. (The only uncertainty we can actually measure is the probability that some correlation occurs by chance.) The best we can say is that any radically new theory will still have to account for the same facts, and will have to account for why our present theory appears to be the simplest.

This uncertainty is a metaphysical bullet the scientifically minded person must simply bite. The scientific method itself cannot be justified deductively or foundationally. The scientific method is justified only pragmatically: it does the job here and now we require of it, and on that basis we employ it until we have something better.

The alternative to biting the scientific bullet, though, is breaking one's teeth on a much harder bullet: Under the deductivist view, which demands either certainty or measurable uncertainty, we have not yet figured out a way of having any knowledge at all! For this reason, I find the deductivist critique of the scientific method uninteresting: No, the scientific method does not provide certainty or (Popper notwithstanding) even certain kinds of measurable uncertainty; this behavior is by design. Pointing out this lack of certainty is akin to criticizing democracy because it doesn't privilege an absolute leader, or criticizing meta-ethical subjective relativism because it doesn't provide an objectively determinable ethical system.

What is necessary to give any sort of power to the deductivist critique is an alternative account of knowledge. The best deductivism has shown us is mathematics, which simply defines itself to be correct in its axioms. And even if we admit mathematics as some sort of "knowledge", it fails to give us any knowledge about the details of (depending on how you want to phrase it) our actual world or our actual experiential life.

Update: Bruce responds to my essay

Thursday, January 03, 2008

Hypothesis testing

Via Sandwalk, we learn that psychology professor Irene Pepperberg has somehow changed her mind about the "classic scientific method" of hypothesis testing. It's not clear whether she herself chose the title, "The Fallacy of Hypothesis Testing"; the article discusses at best the limitations of hypothesis testing, not its fallacy. It's also possible she's discussing the fallacy that hypothesis testing is the sine qua non of the scientific method. But Dr. Pepperberg is never clear about what, exactly, she has changed her mind on.

She cites three criticisms of hypothesis testing: The scientific method doesn't tell us how to generate hypotheses, interesting questions can't be reduced to a single hypothesis, and many scientists don't appear to actually test their hypotheses. The last criticism is transparently specious: it's not a criticism at all of a method itself to observe that some people don't employ that method. (And, without actual evidence, it's impossible to determine whether her observation is even accurate. Any scientific study which employs the ubiquitous t-Test is at least comparing its stated hypothesis against the null hypothesis.)
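For what it's worth, the null-hypothesis comparison is built into the standard statistical machinery. Here is a minimal sketch in Python using scipy, with fabricated data purely for illustration: the test's p-value is the probability of seeing a difference at least this large if the null hypothesis (no real difference) were true.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Made-up measurements for illustration: a treated group and a control group.
treated = rng.normal(loc=5.3, scale=1.0, size=30)
control = rng.normal(loc=5.0, scale=1.0, size=30)

# Two-sample t-Test: the stated hypothesis ("the groups differ") is compared
# against the null hypothesis ("any observed difference is due to chance").
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A small p-value is evidence against the null hypothesis, not proof of the
# stated hypothesis -- which is the only uncertainty the test actually measures.
```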

Her criticism that interesting scientific questions don't reduce to a single hypothesis also seems specious, although less obviously so. Her example,
"Can a parrot label objects?" may be a testable hypothesis, but actually isn't very interesting…what is interesting, for example, is how that labeling compares to the behavior of a young child, exactly what type of training might enable such learning and what type of training is useless, how far can such labeling transfer across exemplars, and….Well, you get the picture…the exciting part is a series of interrelated questions that arise and expand almost indefinitely.
decomposes her "uninteresting" question into a number of interrelated hypotheses, each of which can be tested. She justly criticizes an absurd degree of "eliminative reductionism", but the criticism misses hypothesis testing in general. Scientific theories are not themselves single hypotheses; they are composed of hypotheses.

If Dr. Pepperberg is just now coming to the realization that the scientific method does not specify how to generate hypotheses, she has missed an important theme in philosophy beginning (at the latest) in 1974 with Pirsig's Zen and the Art of Motorcycle Maintenance. But again her criticism misses the mark. It's important to note that, while the generation of hypotheses is an interesting philosophical question, it has nothing to do with the scientific method. You may generate hypotheses any way you please: by intuition, bibliomancy, dreams, picking words out of a hat; you may even choose "simply to sit and observe and learn about one's subject before even attempting to devise a testable hypothesis." The scientific method just specifies that you have to express what you're testing in a particular form, a testable hypothesis, and then you have to test it. One can form a lot of potentially useful ideas from unbiased (or at least open-minded) observation, but to make it science — to turn these ideas into knowledge — you need to express your ideas as hypotheses and test them.

It's not even possible (or particularly useful) to "observe [a] system without any preconceived notions," and it is not possible by such observation to actually gain knowledge. The first thing a good scientist should do after this sort of observation would be to figure out how the preconceived notions she wasn't previously aware of might have biased the observations. She has to assume it's possible that everything she thinks she then "knows" might be incorrect. She must then formulate testable hypotheses and test them.

To her credit, Dr. Pepperberg doesn't abandon the notion of hypothesis testing itself. She clearly describes reducing a scientific question to a collection of testable hypotheses, and clearly does not advocate simply stopping at unbiased observation as good enough in itself to generate real knowledge. At best, she has changed her mind only about an overly restrictive formulation of the scientific method employed far more often by philosophers seeking a straw man to demolish than actual working scientists.

Wednesday, January 02, 2008

Justifying the scientific method: Introduction

Central to the claim that science is the only useful epistemic method is the connection between private, individual knowledge and public knowledge. The scientific method is simple: Find the simplest explanation that accounts for the facts. There are four components to the scientific method: Facts, explanations, accounts and simplicity.

An explanation is any instance of manipulating symbols using some sort of deterministic method; ordinary logic is an instance of a deterministic symbol manipulation method. If you start with some set of axioms, and apply inference rules in a particular order, you will always reach the same conclusion.

An account (or an explanation that accounts for) is an explanation that differentiates between valid and invalid conclusions. Again, ordinary logic is an instance of an account: we interpret a conclusion reached by applying inference rules to axioms as valid, and the inverse of that conclusion as therefore invalid. Since we can derive "2+2=4", that conclusion is valid, and "it is not the case that 2+2=4", its inverse, is invalid.

Just having an explanatory method which generates accounts is not sufficient. We must relate the validity of the conclusions to some facts in order to discuss the notions of truth and falsity. Without such a relation, the sentence "2+2=4" is no more meaningful than "@=@+$". Arithmetic is equally "valid" even if we manipulate the axioms without any reference at all to our understanding of counting actual things in the real world; arithmetic is true precisely to the extent that it provides an explanatory account of facts about how we actually count (some) things in the real world.

Simplicity just consists of counting the number of irreducible premises, axioms or stipulations, plus the number of inference rules required to determine the account.

I leave "explanation" and "account" minimally defined precisely so as not to presuppose all of propositional calculus and the other forms of mathematical and logical symbol manipulation. We have to find that propositional calculus is deterministic and does distinguish between valid and invalid. We can compare alternative methods on their simplicity. We also find that we can generate agreement with the facts just by choosing axioms; we do not need to alter the fundamental method. For this reason, we usually take propositional calculus for granted except under esoteric circumstances.
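As an illustration only (none of the above depends on these details, and the example is my own), here is a toy "explanation" in this minimal sense, sketched in Python: a deterministic forward-chaining rule engine. Run with the same axioms and rules it always derives the same conclusions; a derived statement counts as valid and its inverse as invalid; and the simplicity score just counts premises plus inference rules, as defined above.

```python
# Toy deterministic "explanation": forward chaining over string facts.
# The axioms, rules, and the validity convention here are illustrative assumptions.

AXIOMS = {"socrates is a man"}
RULES = [
    # (premise, conclusion): if the premise is derived, the conclusion is derived.
    ("socrates is a man", "socrates is mortal"),
]

def derive(axioms, rules):
    """Deterministically apply the rules until nothing new can be derived."""
    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def valid(statement, derived):
    """The 'account': derived statements are valid; their inverses are invalid."""
    return statement in derived

def simplicity(axioms, rules):
    """Count irreducible premises plus inference rules."""
    return len(axioms) + len(rules)

conclusions = derive(AXIOMS, RULES)
print(valid("socrates is mortal", conclusions))  # True, every time we run it
print(simplicity(AXIOMS, RULES))                 # 2
```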

The scientific method is fundamentally different from the deductivist method. The deductivist method says: Find those statements which are derivable from true premises.

I compare the two methods in more detail in my first article on the scientific method, The Failure of Deductivism. Briefly, we seem to have an insuperable problem finding true axioms from which we can derive anything useful. Premises simple enough to use deductively seem impossible to generate foundationally (without deduction), and statements we can generate foundationally (especially statements about perception) are too complex to deduce anything interesting from.

Probably the most important difference between the deductive and scientific methods is that the deductive method is truth-finding and the scientific method is falsity-finding. We can never be sure that some explanation accounts for the facts not yet in evidence, and we can never be sure that the explanation really is the simplest. We can, however, be sure when some explanation fails to account for some facts that are in evidence. We must simply bite this bullet to employ the scientific method. (Besides the foundational problem, the deductivist must bite his own bullet: he cannot be sure that the inverses of his theorems are themselves non-theorems. Even with a perfect axiomatic foundation the deductivist can never be sure of falsity.)

The facts are statements uncontroversially accepted as true. This is a general definition; the meanings of "uncontroversial" and "accepted as true" depend on the level at which we are employing the scientific method.

Philosophers often blithely assert that the scientific method requires the importation of a considerable amount of metaphysical baggage, notably metaphysical realism, the presumption of consistency, the presumption of the reliability of the senses, and the a priori intelligibility of language. Science, or so these philosophers assert, is a complex socially constructed language game, and has little or nothing to do with the fundamental philosophical issues of epistemology and ontology. I believe such philosophers are mistaken, and they are mistaken because an enormous amount of fundamental scientific work has already been performed by evolution: We human beings come to the task of philosophy with cognitive tools that seem a priori but are actually posterior to at least five hundred million years of evolution. (The first nervous systems appear at least in the Cambrian, if not the late Proterozoic era; if we admit some degree of symbolic representation of the outside world to bacteria, we can go back four billion years.)

The justification for the scientific method as a fundamental epistemic method rests on four pillars: It is metaphysically sparse; we can, in theory, build up to our present-day panoply of knowledge by nothing more than conscious employment of the scientific method; we can show that the social construct of institutional science depends essentially on our sparse formulation of the scientific method; and we can explain our actual history by relating the scientific method directly to the process of evolution.

I will address each of these pillars in my next series of posts.

Thursday, December 20, 2007

Truth and falsity

What is truth?

Pontius Pilate asks this question (John 18:38), as have philosophers and intellectuals since the beginning of recorded history. And no one has been able to supply a satisfactory answer. To be honest, I don't know what's true; I don't even know what truth is. And neither do you. Nobody knows.

But I do know what falsity is. If you tell me that the sun will rise tomorrow over here, and it rises tomorrow over there, you're mistaken. I'm absolutely certain you're mistaken. I don't necessarily know where you're mistaken, but I'm certain that you're mistaken somewhere.

I can't just say that the opposite of mistaken is correct, because the law of the excluded middle doesn't always work in real life: there's mistaken, correct, and unsure. Even if you're not mistaken, you could be correct by accident or coincidence, or you could be fooling me, or you could be correct in a limited sense, only by virtue of making some approximation, but mistaken in the larger sense (as was Newton regarding gravitation).

This observation about how I have to construct my ideas about the world stands in stark contrast to canonical logic. Within a formal system such as arithmetic, we can prove that 2+2=4, and we can prove that we can prove it (and prove that we can prove that we can prove it... etc.); we can even derive "it is not the case that 2+2=5". But, as Gödel showed us, the system can never prove its own consistency: it can never rule out, from the inside, that "2+2=5" is also one of its theorems. In that sense, we can't ever prove that some mistake really is a mistake.
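A minimal sketch in Lean 4 (my illustration, not anything from the original post) of the arithmetic point: inside the formal system, both the truth and the rejection of the "mistake" are provable; what the system cannot do, per Gödel's second incompleteness theorem, is prove its own consistency.

```lean
-- Inside the formal system, both of these are theorems:
example : 2 + 2 = 4 := rfl
example : 2 + 2 ≠ 5 := by decide
-- What Gödel rules out is the system proving, from the inside, that it is
-- consistent -- i.e., that it will never *also* prove 2 + 2 = 5.
```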

For this reason, I'm not just suspicious of, I'm positively dismissive of any attempt to prove anything about the real world using just logic. Logic is a terrific — indeed indispensable — tool, but it is just a tool. It doesn't get to the bottom of things at all, and we know it cannot. It works extremely well for speaking precisely, but it doesn't ensure we are speaking truthfully, indeed it cannot tell us at all if we are mistaken.

We didn't make very much progress in knowing things about the world until we stopped being hung up on knowing and accepting the truth and started being very careful about knowing and rejecting falsity. Genius is not about how quickly or accurately you can perform complicated intellectual tasks. Genius consists of looking at something that everybody knows, realizing that it's mistaken, and explaining to everyone else precisely why it's mistaken.

And the genius of those who developed The Scientific Method, from Galileo to Popper, is realizing that what everybody knew — that the search for knowledge was the search for truth — was mistaken! The search for knowledge is the search for falsity. Sherlock Holmes hit close to the mark when he said, "When you have eliminated the impossible, whatever remains, however improbable, must be the truth." Whatever remains might not be the "truth" — whatever that is — but that's what you run with. If tomorrow you find it's mistaken, you throw it out and you keep working.

Philosophers are, understandably, uncomfortable with this mode of reasoning. Too bad for philosophers. The scientific method is at least "self-referentially coherent": If the scientific method were mistaken, we would know it; that we have not yet found it mistaken, even after very thorough, hostile examination, is sufficient warrant to run with it. If we find out tomorrow it's mistaken, we'll look for something else; we'll cross that bridge when and if we come to it. For now, we've never proven the scientific method mistaken: Every mistake ever made by every scientist has been attributable to either not having enough evidence, not examining enough possible explanations, or using some method other than the scientific method.

And that's what bugs me about theists. I've never had any theist explain — and commit to — a method by which I might in principle discover that she is mistaken. Not one. Ever. I've seen a few theists float trial balloons, but careful examination has shown me that when I apply the method they propose, it shows them to be mistaken. And when I point this out, they withdraw the method. It doesn't have to be the scientific method; the scientific method is not assumed a priori. We don't have to use perceptual evidence that everyone (or most everyone) affirms. I personally am stumped for an alternative, but hey, I don't know that I'm correct; I know only that I'm not yet mistaken.

This is the problem with faith, theism, woo-woo and conspiracy theories in general. They may well be "true", whatever that means. But until I have some way of determining in principle that some idea might be mistaken, I just don't know how to think about it. I don't know how to find a way to believe it.

I have no method to determine in principle whether or not Christian theism is actually mistaken. The scientific method won't work: Christianity is compatible with everything I might observe, whether I actually observe it or not. By the same token, I have no method to determine whether Islam, or Hinduism, or Buddhism, or Norse theism (Odin, etc.) or any other woo-woo bullshit might be mistaken. These might all be better or worse literary metaphors, but they have nothing at all to do with truth and — more importantly — falsity.

This situation is very different for, say, evolution, or relativity, or quantum mechanics, or thermodynamics, or any other scientific theory: I do have a method, the application of that method would detect a mistake, and people have applied the method and found that these theories are not yet mistaken.

Skeptics, careful skeptics, look for falsifiability. We — like everyone else — have no clue what "truth" is, so we concentrate on detecting falsity. If we can detect falsity, and we don't actually detect it, well then, we have some basis for confidence. But if we cannot in principle detect falsity, we have no basis for confidence. Skeptics also have to bend over backwards to be honest: to have confidence in some idea, we have to assure ourselves that we have looked as hard as we can to find falsity, that we have considered every possibility and looked at all the evidence.

Furthermore, because we know we cannot actually evaluate every possibility, because we cannot examine all the evidence, we can never be absolutely certain that our ideas are not actually mistaken. All we can say is that our ideas are not yet mistaken, and our confidence in our ideas is warranted precisely to the extent that we have undertaken a thorough and honest search for mistake. And to have confidence by this warrant requires that it means something to discover a mistake.