Technically, a skeptic never actually *believes* anything other than the evidence of her senses. If it is possible to believe or disbelieve some proposition, then she holds an epistemic belief about the probability or plausibility of that proposition; she never has a bare epistemic belief about the proposition itself. She does not believe that *p*; she believes P(*p*|E) = *n*. (If she's a statistician then she has a belief about the probability of the probability, but a simple probability is good enough for practical purposes.)

So it is misleading to ask whether we should *believe* some proposition on the basis of evidence. The better question is: How should we *act* given that we know the probability that some proposition is true? In some cases, we can simply distribute our actions according to the probabilities (e.g. I can diversify my investment portfolio according to the probabilities that some companies will fail or succeed). But in many cases, we cannot distribute our actions: we must act *as if* some proposition were definitely true or definitely false. We don't have to "believe" some proposition to act *as if* it were true; we can be conscious and explicit about separating the choice of action from the evaluation of the probability.
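The "distribute our actions" case can be sketched in a few lines of code. The company names, success probabilities, and budget below are made-up illustrations, not recommendations:

```python
# Hypothetical example: allocating a budget in proportion to probabilities,
# rather than betting everything on the single most likely outcome.
success_prob = {"A": 0.9, "B": 0.6, "C": 0.3}  # assumed probabilities of success
budget = 10_000.0

total = sum(success_prob.values())
allocation = {name: budget * p / total for name, p in success_prob.items()}
# Each dollar amount expresses a degree of belief; no company is treated
# as definitely succeeding or definitely failing.
```

Here the action itself is graded, so no all-or-nothing "belief" is ever required.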

I *know*, for example, that there's a non-zero, finite probability that I will die in a fiery car crash on my way to work. But I have only two choices: act *as if* I will definitely die (and stay home) or act *as if* I will definitely not die (and drive to work). I can't 99.99% drive to work and 0.01% stay home. But I'm not deluding myself that I'm certain I won't die in a car crash, even though I'm acting as if I definitely would not.
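A small expected-utility calculation makes the commute example concrete. The crash probability and utility values here are illustrative assumptions, not estimates of real risk:

```python
# Hypothetical numbers for the drive-or-stay decision.
p_crash = 0.0001        # assumed probability of a fatal crash on the commute
u_work = 100.0          # assumed utility of a normal day at work
u_stay_home = 0.0       # assumed utility of staying home
u_crash = -100_000.0    # assumed (large negative) utility of a fatal crash

# Expected utility of each all-or-nothing action:
eu_drive = (1 - p_crash) * u_work + p_crash * u_crash
eu_stay = u_stay_home

choice = "drive" if eu_drive > eu_stay else "stay home"
```

The probability never becomes a belief that "I won't crash"; it is just one input to choosing between the two indivisible actions.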

Quite a lot of "weird" epistemic cases can be resolved simply by understanding that we don't actually need to believe definite, true/false propositions, but rather that we need to have an accurate understanding of the probability or plausibility of some proposition. The confusion comes from taking a metaphor for literal truth: if one believes the probability that some proposition is true is sufficiently high, one's actions are indistinguishable in practice from those of someone absolutely certain the proposition is true, and so we talk about "believing" the proposition. But there's simply no specific, constant threshold where the belief about the probability of *p* becomes indistinguishable from the belief that *p*. The threshold varies according to the consequences of acting *as if* *p* were true or false.
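The point that the threshold depends on the consequences can be made precise with a toy expected-loss rule. The loss values below are illustrative assumptions:

```python
def act_as_if_true(p, loss_false_positive, loss_false_negative):
    """Act as if the proposition is true iff that action has lower expected loss.

    Acting as if true costs loss_false_positive when p is actually false;
    acting as if false costs loss_false_negative when p is actually true.
    """
    expected_loss_act_true = (1 - p) * loss_false_positive
    expected_loss_act_false = p * loss_false_negative
    return expected_loss_act_true < expected_loss_act_false

# The same 60% probability, two different stakes:
low_stakes = act_as_if_true(0.6, loss_false_positive=1, loss_false_negative=1)
high_stakes = act_as_if_true(0.6, loss_false_positive=100, loss_false_negative=1)
```

With symmetric losses, 60% confidence is enough to act as if *p*; when wrongly acting as if *p* is a hundred times costlier, it is not. The implied threshold is loss_false_positive / (loss_false_positive + loss_false_negative), so it moves with the stakes rather than sitting at any fixed value.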

An interesting side note related to what you posted: in machine learning, when faced with a stochastic process, if you happen to have accurate knowledge of the probability distributions, then the optimal learned solution ends up being the same one that would be found in an entirely non-stochastic situation where all distributions are replaced with their expected values. Since you cannot control the stochastic part of the problem, you simply equate all random aspects to their expected values. While people are not always optimally rational beings, we do a fairly good job. Of course, when you do not have completely accurate knowledge of the probability distributions, things get more complicated and you have to deal with estimation and all that other nasty stuff, but I thought it was interesting that machine learning deals with many of the same problems that epistemology does (though often from vastly different perspectives).
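This certainty-equivalence idea can be illustrated with a toy case: a quadratic loss with a random target. The distribution and grid search below are assumptions chosen for illustration:

```python
import random

random.seed(0)
# Random target w drawn from a distribution with known mean 3.0.
samples = [random.gauss(3.0, 1.0) for _ in range(20_000)]

def expected_loss(x, ws):
    # Monte Carlo estimate of E[(x - w)^2] for action x.
    return sum((x - w) ** 2 for w in ws) / len(ws)

# Deterministic version: replace w with its expected value and minimize
# (x - E[w])^2, which is solved exactly by x = E[w] = 3.0.
x_certainty_equivalent = 3.0

# Stochastic version: grid-search the action minimizing the estimated
# expected loss; it lands (approximately) on the same answer.
candidates = [i / 10 for i in range(0, 61)]
x_stochastic = min(candidates, key=lambda x: expected_loss(x, samples))
```

For quadratic loss the stochastic optimum is exactly the mean of the target, so swapping the distribution for its expected value changes nothing; for other losses the two solutions can diverge, which is one reason the "accurate knowledge" caveat matters.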
