A mysterious stranger hands you an "oracle", a black box with a keyboard and a screen. He tells you that the box has knowledge, and that it will answer any questions put to it in English according to that knowledge. Unfortunately, you can't open the box or otherwise examine it inside. He also tells you that the box isn't omniscient — it doesn't know everything — and it definitely isn't smart enough to do philosophy. If you ask it "How do you know that thus-and-such?" it will merely reply, "I just know," or, "I don't know how I know," or, "I don't understand your question." The mysterious stranger then scarpers off to parts unknown.
If you are committed to the standard that knowledge is by definition "justified true belief", you have no way of determining whether the box has any knowledge, because you cannot tell how it arrives at its answers: you cannot determine whether those answers are (properly) justified. (The say-so of a mysterious stranger hardly seems to count as acceptable meta-justification.)
Yet it seems intuitively obvious that we shouldn't adopt an attitude of unconquerable agnosticism about our supposed oracle, even though we can never learn anything about the details of its justification. After all, we can still put questions to the box whose answers we can check independently; if it proves reliably correct, it seems natural to say that it knows.
We might discover that knowledge requires a certain kind of justification, but should we make justification part of the definition?