Friday, January 04, 2008

The oracle

A mysterious stranger hands you an "oracle", a black box with a keyboard and a screen. He tells you that the box has knowledge, and that it will answer any questions put to it in English according to that knowledge. Unfortunately, you can't open the box or otherwise examine it inside. He also tells you that the box isn't omniscient — it doesn't know everything — and it definitely isn't smart enough to do philosophy. If you ask it "How do you know that thus-and-such?" it will merely reply, "I just know," or, "I don't know how I know," or, "I don't understand your question." The mysterious stranger then scarpers off to parts unknown.

If you are committed to the standard that knowledge is by definition "justified true belief", you are absolutely incapable of determining whether the box does or does not have any knowledge, because you cannot tell how it answers your questions: you cannot determine whether its answers are (properly) justified. (It does not seem that one can consider the claims of a mysterious stranger to be any kind of acceptable meta-justification.)

It seems intuitively obvious that one shouldn't adopt an attitude of unconquerable agnosticism about our supposed oracle, even though we can't ever know anything about the details of its justification.

We might discover that knowledge requires a certain kind of justification, but should we consider justification definitional?

14 comments:

  1. If the box does not have justifications, it doesn't have knowledge. If it's not smart enough to do even basic philosophy like this, it's not a thinking thing, and so it doesn't have knowledge. It may be a perfectly reliable source of information, though, so you might - after a suitable period of testing, mysterious strangers being what they are - be able to gain knowledge yourself by asking the box questions.

    Think about it this way: Replace the talking box with a huge database of true propositions. Not all true propositions are included in the database, but all included propositions are true. The database doesn't know those propositions, does it?

  2. Sigh. The oracle is not presented as a thinking being, and this thought experiment is not designed to explore our notions of consciousness.

    The oracle is explicitly presented as allegedly having knowledge, in the same sense that a book might have knowledge; subsequent usage discussing what the oracle "knows" should clearly be read idiomatically.

    While the rhetorical technique of throwing sand in the bull's eyes to avoid the horns of a dilemma has its uses, its exclusive employment becomes tiresome.

    It may be a perfectly reliable source of information, though, so you might - after a suitable period of testing, mysterious strangers being what they are - be able to gain knowledge yourself by asking the box questions.

    But this is precisely the issue: no matter how well you test the box, the best you can do is determine its "reliability" — you can never examine the justification for any of its information. Such a view argues that reliability (in some unexplained sense), not justification, definitionally establishes knowledge. (As noted in the OP, we might discover that some particular kind of justification is required to establish reliability, but justification then is not strictly definitional.)

    Replace the talking box with a huge database of true propositions.

    Establishing truth by fiat does not seem helpful. We have a database of propositions. Full stop. How do we know those propositions are true?

  3. The oracle is explicitly presented as allegedly having knowledge, in the same sense that a book might have knowledge; subsequent usage discussing what the oracle "knows" should clearly be read idiomatically.

    A book might be said to transmit knowledge, I suppose, but to "have" knowledge a subject must be a being capable of having intentional states. Yes, it's nitpicking. But it's important nitpicking, because sloppiness here would render the discussion completely unproductive.

    But this is precisely the issue: no matter how well you test the box, the best you can do is determine its "reliability" — you can never examine the justification for any of its information.

    Well, that's the thing. You can gain knowledge from a reliable source - "testimony from a reliable source" counts as justification - without knowing that source's justifications. So if there's a competent scientist who tells us that table salt is a cubic crystal composed of sodium and chlorine atoms, we can come to know this without needing to spend the time trying to crack salt into its component parts.

    Such a view argues that reliability (in some unexplained sense), not justification, definitionally establishes knowledge.

    More accurately, as I said above, it argues that testimony from a reliable source is one sort of justification.

    Establishing truth by fiat does not seem helpful. We have a database of propositions. Full stop. How do we know those propositions are true?

    Um. The same way the oracle's claims are true? If the box contains knowledge, then the box contains true statements, because nobody can ever know anything that is false.

  4. Yes, it's nitpicking. But it's important nitpicking, because sloppiness here would render the discussion completely unproductive.

    It has not been established, at least in my mind, that our discussion is yet productive.

    I'm trying to see if we can talk about knowledge independently of justification. Consciousness, intentional states, etc. are simply irrelevant.

    Well, that's the thing. You can gain knowledge from a reliable source - "testimony from a reliable source" counts as justification

    It's nice to know that you can simply pronounce what counts as justification by fiat. But you still haven't spoken about what counts as "reliable". If the definition of reliability is "according to the right sort of justification", then your argument becomes viciously circular.

    Um. The same way the oracle's claims are true?

    I myself did not set up the thought experiment with the presumption that the oracle's claims are true.

  5. Erik, our conversations are simply not productive. You duck and weave, look for nits to pick, avoid the central issues, pontificate on philosophical dogma and simply refuse to deal directly with the issues that I raise.

    One of the reasons that I don't bother to get a philosophy degree and publish in philosophy journals is that I find your sort of dishonest bullshit mental masturbation masquerading as "argumentation" to be wearying and entirely unproductive.

    Please go bother someone who's actually interested in what you have to say.

  6. I'm trying to see if we can talk about knowledge independently of justification. Consciousness, intentional states, etc. are simply irrelevant.

    The distinction between information (or data) and knowledge is that knowledge is what is [claimed to be] known by an agent. The modality of communication is usually unimportant: if X asserts that she knows P, it is irrelevant to me whether she communicates this in speech, writing, or smoke signals. It is the agent who performs the communicative action that we take to "have knowledge"; not the sound waves, or printed text, or smoke clouds.

    This should not be taken to mean that there are clean categories of knowledge-possessing and information-transmitting entities. That kind of thinking gives rise to unfortunate notions like "original intentionality" and "derived intentionality", which are completely bogus. Justification is necessarily heuristic and contextual. Nor is a limited anthropomorphism inappropriate; I have no trouble with saying that a chess program "knows" that it is in check.

    You say, "The oracle is explicitly presented as allegedly having knowledge, in the same sense that a book might have knowledge". I think that most people do distinguish between a book containing the knowledge of another (the author), and an agent having knowledge.

  7. You're over-thinking this, Geoff.

    The point is that we cannot tell how the oracle is justifying its statements. For all we know, there might be an omniscient, conscious elf in the box, it might connect to the internet and read books, it might examine the position of the stars, or calculate the eight-skitty-zillionth digit of pi and convert to ASCII or whatever.

    I'd just use a book for the example, except that we typically know (or intuitively believe we know) the provenance of a book: its author is typically a human being, and we can (in some sense) examine the justification for the author's statements.

    I'm asking whether we're committed to specifically (the right sort of) justification as essential and definitional.

    If the justification is definitional, there's simply (as Erik asserts) no way we can use "knowledge" in any sense with the output of the oracle.

    We can, of course, examine the oracle to see if its statements are reliable (in some sense), and justify our own beliefs in terms of the oracle, but in that case we're using (some sense of) reliability to establish justification; justification is no longer definitional.

    (Of course, one could redefine "justification" to vacuity or near-vacuity, but that doesn't help sharpen the definition.)

  8. I think that the problem we're dealing with is, simply, whose knowledge are we talking about? A proposition is not knowledge simpliciter; it is (definitionally!) a state of an agent that is capable of justifying its states - i.e. explaining them in terms of other states.

    You say "The point is that we cannot tell how the oracle is justifying its statements." But before we even get to that point, we must first recognize that we may not be able to tell whether the oracle is justifying its statements or not. Consider a Customer Information desk at an airline. You phone them, and the agent tells you the arrival time of a flight. Is the agent making it up, reading it out of the timetable, interrogating a flight information system? Does the agent believe the information or not? Without further information, you can't tell. Is it knowledge? Maybe, maybe not.

    I think that when it comes to reasoning about other minds (including what kinds of beliefs, justified or otherwise, those minds contain) we are necessarily constrained to a functionalist - even behaviorist - stance. We infer the internal status of a proposition P by the various ways that the agent communicates about P ("I know P", "I believe P", "I tend to think P") and by utterances and actions which are consistent with different kinds of status. (If P is "it is raining", do you pick up your umbrella before going out, or look out of the window first?)

    This kind of ambiguity about the word "knowledge" crops up all over the place. Inevitably, I'm reminded of Searle's "Chinese Room". How can the room be said to "know" Chinese if the script-following man in the box does not?

  9. Ambiguity does not creep into discussions of knowledge; it has pitched a tent and set up camp.

    But you're still over-thinking it.

    The question is: Are there circumstances where you think you can talk about "knowledge" in some sense, even if you take away only the ability to examine the justification for that supposed knowledge?

    Make the oracle a space alien, a book with an unknown author, a mutation that supplies unjustified answers in our own brain, or even God actually speaking to us: Can we talk about knowledge in any sense independently of its justification?

    I'm trying to get to the point Erik touched, and then veered off: some notion of reliability seems important, more important perhaps than justification.

    However...

    Justification does seem important in some sense: Even if the oracle answered a billion questions exactly correctly, I would still be skeptical if it opined sans proof that Goldbach's conjecture were definitely true (or a theorem) or false; just as skeptical as if that were its first answer.

    (One reason I ask these sorts of questions is to skeptically examine and clarify my own thinking too.)

    I don't necessarily want to see a specific justification; I do want to see some sort of deterministic, reproducible justification.

  10. I also think that Searle is bullshitting us in a deep sense with the Chinese room. (Fun fact: my (recently ex-) boss studied under Searle at Berkeley.) He's drawn a big blinking arrow at where we think the knowledge ought to reside (but ex hypothesi does not), which distracts us from considering alternatives. Take the man out of the room and substitute a machine that does the man's rather dull job, and we're not so distracted.

  11. "Can we talk about knowledge in any sense independently of its justification?"

    If you put it that simply, no. We cannot talk about knowledge independently of the knower - or of the knower's justification. (Two different knowers may justify the same proposition in quite different ways; the justification adheres to the [knower, knowledge] pair, not to [knowledge].)

  12. Why can't we talk about knowledge without talking about justification?

    I'm not sure how to interpret your parenthetical comment. If two knowers can justify the same proposition in different ways, doesn't that imply that we can indeed talk about knowledge without at least a specific kind of justification?

    Keep in mind that even when accepting justification as definitional, it seems clear that it's at least implied that one is talking about the right kind of justification. "... because it says so in the Bible" is a kind of justification, but you and I at least might dissent that it is the right kind of justification.

  13. "I'm not sure how to interpret your parenthetical comment. If two knowers can justify the same proposition in different ways, doesn't that imply that we can indeed talk about knowledge without at least a specific kind of justification?"

    Consider this example. Bob and Alice are playing a card game, and towards the end of the game Bob is holding a single card which Alice has not seen.

    Bob knows that his card is the ace of hearts; he can justify this fairly easily. All agree that Bob knows what his card is.

    Alice has been counting the fall of cards during the game. Based on her counting, Alice believes that Bob holds the ace of hearts.

    What Alice doesn't know is that Bob tampered with the deck before they started to play. As a result, Alice's counting was flawed. If she had been aware of the true state of the deck, she would realize that Bob could have any one of several different cards (including the ace of hearts).

    So does Alice know that Bob has the ace of hearts? She declares her belief, and states her justification. To a naive observer, who believes that the deck was true, the answer is yes. He can peek in Bob's hand, see that the card is indeed the ace of hearts, and thus Alice's statement fits our definition of knowledge.

    To Bob, the answer is no: Alice does not KNOW what card he holds, because her belief is unjustified. It was based on bad data. The fact that her assertion is true is irrelevant.

  14. Geoff, I think your example still undermines justification as specifically definitional. If the deck is fair, Alice does know on her justification. If Bob tampered with the deck, Alice doesn't know on the same justification. If Charlie (unbeknownst to Bob) restored a fair deck, Alice does know again, again on the very same justification.

    As long as we're using our intuition as a guide, I'd have to say that the flip-flopping conclusions on a supposedly definitional characteristic do not match my intuition of a definitional characteristic.

    (I'll say again that it seems uncontroversial that we can draw conclusions about the relationship of justification to knowledge.)

    I want to address the issue of the definition of knowledge head on, instead of sneaking up to it as I have in this post. The ambiguity and fuzziness of the definition is even more pronounced than I originally anticipated.

