Saturday, November 22, 2008

Reliabilism part 1

Stephen Law talks about reliabilism and externalist theories of knowledge:
a knows that P iff:
(i) a believes that P
(ii) P is true
(iii) a's belief that P is produced by the state of affairs P via a reliable mechanism

This is a simple RELIABILIST theory of knowledge.

Suppose your senses are reliable mechanisms for producing true beliefs. If there's an orange on the table in front of you, your eyes etc. will cause you to believe there's an orange there. Remove the orange, and that will cause you to stop believing there's an orange there. Because your senses are fairly reliable belief-producing mechanisms, your beliefs "track the truth" in a fairly reliable way.

If that is the case, then you can be a knower. You can know there's an orange on the table in front of you. You can know this, despite not inferring that the orange is there from evidence. You simply, directly, know. Non-inferentially. And, indeed, without justification (unless you want to redefine justification so that being in this situation counts as "being justified").
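To make "tracking the truth" concrete, here is a toy sketch in Python (my own illustration with invented numbers, not anything from Law's account): the world either contains an orange or it doesn't, and a sensor reports the state correctly with some fixed reliability. A believer who takes the sensor at face value ends up with mostly true beliefs, and the more reliable the mechanism, the better the tracking.

    import random

    def simulate_tracking(reliability, trials=10000, seed=0):
        """Toy model: a belief is formed by a sensor that reports the
        world state correctly with probability `reliability`."""
        rng = random.Random(seed)
        correct = 0
        for _ in range(trials):
            orange_present = rng.random() < 0.5          # the world state
            sensor_honest = rng.random() < reliability   # mechanism works?
            belief = orange_present if sensor_honest else not orange_present
            correct += belief == orange_present
        return correct / trials

    print(simulate_tracking(0.95))  # a reliable mechanism: ~95% true beliefs
    print(simulate_tracking(0.55))  # an unreliable one: barely better than chance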

Moreover, add some philosophers, it is pretty reasonable for you to believe there is an orange there if that is how it directly seems to you.

So you can have a belief, unsupported by any inference, unjustified, yet nevertheless qualifying as both reasonable and (if the reliable mechanism is doing its stuff) knowledge.

First, the idea that "it is pretty reasonable for you to believe there is an orange there if that is how it directly seems to you" employs a very imprecise sense of "reasonable", which literally means "supported by reasoning": a conscious, explicit, formal, and precise mechanism. I think it would be more precise for those philosophers to say it might be permissible for you to believe there is an orange there if that is how it directly seems to you. To make the belief reasonable, you should supply actual reasons.

Of course, "It seems there is an orange there," might itself be a reason to believe, "There really is an orange there." But to make the former an actual reason requires an enthymeme: "If it seems that X, then it really is that X." "Seeming" is taken a priori to be a reliable mechanism for determining truth.

One can actually go pretty far with such a simplistic theory. If one doesn't look too deeply at the world — if, when the theory produces weird results, one says, "Hm, that's weird," and avoids similar situations — one can live one's whole life without ever seriously challenging this sort of naive realism.

But of course scientists and philosophers do want to look more deeply at the world.

This "seeming entails being" sort of naive realism might really be true. It might really be the case that when you put a pencil halfway in a bowl of water, the water actually does bends the pencil. It might really be the case that putting on rose-colored glasses really does change all the physical properties of the world making them appear rose-colored. We can preserve the statement, "If it seems that X, it really is that X," come what may. We would end up with a lot of universal statements that appear highly weird to us, but we can build a structure around naive realism that is both logically consistent and empirically correct.

There are two problems, though, with accepting "seeming entails being".

The first is that, on the one hand, we accept "seeming entails being" because it's intuitively appealing. Most of the time, we believe that things exist for no better reason than that they seem to exist. On the other hand, accepting this intuitive concept at face value leads to counter-intuitive results: it is not intuitively appealing to believe that water really does bend pencils. If we accept "seeming entails being" because it is intuitively appealing, then on what grounds do we keep accepting it when it leads to intuitively unappealing conclusions?

The other problem is just that we end up with a lot of universals; our "laws of physics" become extremely complicated. So complicated, in fact, that it becomes impossible in practice to learn even a small fraction of them. For example, when I put on some specific pair of glasses, the world changes in such a way that I see what we modern humans would call a "distortion" (because of flaws in the optical properties of the glass). If I accept "seeming entails being", then I have to create a whole set of "laws of physics" that apply only when I am wearing those specific glasses. Also, we again have an intuitively unappealing result (a very complicated set of physical laws) justified foundationally by an appeal to intuition.

Perhaps probabilism and reliabilism can help us get out. Perhaps it's not the case that seeming entails being, but that seeming usually (or probably) implies being. Instead of, "If it seems that X, then it really is that X," we say, "If it seems that X, then it usually/probably really is that X, but sometimes not." But this sort of "bare" reliabilism (where we don't have a rigorous or precise definition of "usually" or "probably") has its own problems.

Looked at in binary true/false logic, bare reliabilism becomes vacuous; it reduces to "If it seems that X, then it really is that X or it's not really that X." We are no longer saying anything about the world; we're just expressing one of the valid (and more boring) derivation rules of formal logic. "If it seems that the moon is made of green cheese, then the moon really is made of green cheese or the moon isn't really made of green cheese." Valid, yes, but vacuous and boring.
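You can verify that vacuity mechanically. A brute-force check (a quick sketch, nothing more) confirms that "if it seems that X, then X or not-X" comes out true under every assignment of truth values, so it rules nothing out:

    from itertools import product

    # "If it seems that X, then (really X or not really X)" as a material
    # conditional. A tautology: true under every truth assignment.
    def bare_reliabilism(seems_x, really_x):
        return (not seems_x) or (really_x or not really_x)

    assert all(bare_reliabilism(s, r) for s, r in product([True, False], repeat=2))
    print("true under all four assignments: vacuous")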

Stating the rule in probabilistic terms doesn't really help us. We can say, "If it seems that there's an orange, then the probability that there really is an orange is 51%; if the probability that some statement is true is >= 51%, one is justified in believing that statement." But what does this get us? If we embed a constant probability into our statement, then we are saying, "If it seems that X, then probably X, therefore we believe X." But we're saying nothing different from "seeming entails being" except adding the inconsequential proviso, "I may be wrong, but I'm sure." If we embed a conditional probability (if it seems that X, then the probability it really is X is x; if it seems that Y, the probability it really is Y is y) but we don't know the actual probabilities, we're just back to the vacuous "If it seems that X, it might really be that X, or it might not really be that X."
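For contrast, here's the shape of a non-vacuous, quantified claim, sketched with invented numbers (the base rate and the hit and false-alarm rates below are all assumptions a reliabilist would have to supply). Given those three numbers, Bayes' theorem tells you how much "it seems that X" actually raises the probability of X; without them, the probabilistic formula tells you nothing.

    # P(really X | seems X) via Bayes' theorem. All three inputs below are
    # invented for illustration; the reliabilist owes us the real values.
    p_x = 0.30                  # base rate: how often there really is an orange
    p_seems_given_x = 0.90      # hit rate of the "seeming" mechanism
    p_seems_given_not_x = 0.10  # false-alarm rate

    p_seems = p_seems_given_x * p_x + p_seems_given_not_x * (1 - p_x)
    p_x_given_seems = p_seems_given_x * p_x / p_seems
    print("P(really an orange | seems an orange) = %.2f" % p_x_given_seems)  # ~0.79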

Invoking a "reliable mechanism" without telling us precisely what "reliable mechanism" means in a deeper sense than "usually produces true beliefs" just moves the mystery around. We must give "reliable mechanism" more semantic content to make it do any philosophical work. I'll talk about this additional semantic content in part 2.

2 comments:

  1. It seems to me that knowledge has to be at least to some extent socially constructed. That is, I trust my senses because the vast majority of the time, other people behave as if they are perceiving the same things in the same way that I am. Isn't this why when people are deprived of all social contact--such as extended time in solitary confinement--they begin to hallucinate and eventually go completely insane, losing their ability to trust their own perceptions?

  2. There's a very deep difference between "socially constructed" and "requiring a society to construct". Confusing these concepts has caused no end of trouble in philosophy.

    A socially constructed idea is an idea that refers only to things in people's heads, an idea that has no objective referent.

However, just because it is efficient to use a society to construct scientific ideas does not mean that those scientific ideas have no objective referent.

