The scientific method is a procedure for explaining some set of facts: statements directly verifiable as true. These facts form the foundation of the scientific method. In white-coats-and-expensive-equipment science, these facts are repeatable, verifiable statements about common perceptions, but the scientific method can apply to any sort of verifiable facts, such as the construction of an individual's view of reality from her private subjective experience.
In the scientific method, we make up simple premises, deduce from those premises theorems which correspond to specific facts, and match the validity of those theorems against assent to the facts. If they match, good. If they don't, we change something and try again.
The scientific method can even operate without any scientists, so long as there's some physical process that eliminates theories which are incompatible with the facts. Evolution, for instance, is a scientific process: variations (mutations) of theoretic constructs (genomes, which entail organisms) either do or do not match the facts (either reproduce or fail to reproduce) and are thus retained or eliminated.
Everything else that scientists do consists of particular techniques for operating the scientific method efficiently. When a single experiment can cost $1,000,000 (or even $1,000), it pays to be very careful and efficient, to use controls, etc. But when "experiments" are performed a thousand times a day by billions of ordinary people (such as the experiences which underlie our rough ideas of gravity and causation), we can afford to be less precise and rigorous.
Now I'd like to address some of the objections raised by my readers.
Timmo writes:

Quine has argued that no individual sentence has a distinctive verification or falsification condition, except relative to a mass of background theory against which observation takes place. All experimental observations are theory-laden. If we were studying an astronomical phenomenon, we would be working with observations made with telescopes (of one sort or another). However, in using such telescopes (and interpreting what we see through them) we assume an understanding, if only an intuitive one, of optics.
Timmo gets one detail wrong: when astronomers use telescopes, they assume not an intuitive but a very rigorous and scientific understanding of optics. But this scientific understanding is still theory-laden.
Theory-loading is a decisive objection to Logical Positivism's notion of Empiricism (which I will refer to from now on as "Axiomatic Foundationalism"). If we are going to deduce theoretical knowledge from statements of experience, those statements--like mathematical axioms--have to be very simple and unequivocal. However, the meanings of our statements of experience are complicated and equivocal; just posing a question requires all sorts of assumptions and premises which are not justified by experience. Quine shows that Axiomatic Foundationalist Empiricism just can't get off the ground.
Quine's objection is important to Popper's notion of falsificationism (which I'll refer to from now on as "Evidentiary Foundationalism"). It destroys Popper's First Mistake: the idea that it is necessary for individual hypotheses to be falsifiable. It would really be terrific if individual statements were falsifiable: we could then go through our theories line by line, and not only decide on the meaning of each statement individually, but also test (attempt to falsify) each statement individually. If we could conduct science so efficiently, we'd have all the secrets of the universe nailed down by teatime Thursday. Sadly, Quine demolishes any notion of such marvelous efficiency.
But Quine's objection is not nearly so decisive or destructive for Evidentiary Foundationalism. The premises of Evidentiary Foundationalism are explicitly made up; they have no special epistemic status. Quine casts doubt on premises that are already assumed to be doubtful. What matters to Evidentiary Foundationalism is not that the theorems are free of theory-loading, but that the theory-loading is precise and rigorous. And the theory-loading can be rigorous: because we choose our premises arbitrarily, we can choose premises that have the simple character of mathematical axioms, from which we can construct rigorous deductions.
Similarly, Evidentiary Foundationalism does not depend on a precise understanding of the meaning of our natural language statements of perception. All it needs to do is predict assent or dissent to those statements. "Yes" and "no" are not theory-laden; even Quine allows us this much.
Timmo also notes:
Falsification can only tell you what not to believe, but not what is true. Even a well-tested theory that has not yet been falsified may turn out to be wrong. It leaves the question: what makes it justified for you to believe it?
We can never absolutely confirm a theory. But we can measure with good precision how well a theory fits the evidence that has been evaluated. And we can even, to some extent, measure how well a theory has actually been tested.
This first measurement is not so complicated. We look at those theorems in a theory which correspond to statements of experience. We compare the size of the range of values in the theorem with the range of logically possible values which we might experience. For instance, if a theory entails "yes" where the logically possible experiences are "yes" and "no", then the theory is 50% permissive. If the theory predicts "the voltmeter will read 16 V +/- 1 V," and the voltmeter can logically read anywhere from 0 to 100 volts, then the theory predicts 3 possible readings (15, 16, or 17) out of 100, and is therefore 3% permissive. Again, we are not worried about the details of how the theorems of the theory match up to statements of experience; we are simply concerned with being able to correlate the responses to the predictions.
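To make the arithmetic concrete, here is a minimal sketch in Python of the permissivity calculation, run on the two examples above. The function name, and the treatment of the voltmeter as reading whole volts from a pool of 100 possible readings, are my own illustrative assumptions, not part of any standard formalism.

    # Permissivity: the fraction of the logically possible observations
    # that a theory's prediction permits. Illustrative sketch only.
    def permissivity(predicted_outcomes, possible_outcomes):
        return len(predicted_outcomes) / len(possible_outcomes)

    # The yes/no example: the theory permits "yes" out of {"yes", "no"}.
    print(permissivity({"yes"}, {"yes", "no"}))          # 0.5

    # The voltmeter example: 16 V +/- 1 V permits the readings 15, 16,
    # and 17 out of 100 possible whole-volt readings, as in the text.
    print(permissivity(range(15, 18), range(0, 100)))    # 0.03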
We then perform a number of experiments. Every time our theory agrees with an experiment, we multiply its permissivity into a running product. Ten straight "yes" answers yields 0.5^10, or about 0.001; ten straight 15-17 V readings yields 0.03^10, or about 0.000000000000001. Subtract this number from 1.0 and you get how well the theory fits those ten facts.
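On the same illustrative assumptions, the fit calculation is one line; the last example below anticipates the unfalsifiable case discussed next.

    # Fit: 1.0 minus the product of the permissivities of all the
    # experiments the theory agrees with. Illustrative sketch only.
    def fit(permissivity, n_agreeing_experiments):
        return 1.0 - permissivity ** n_agreeing_experiments

    print(fit(0.5, 10))     # 0.9990234375 (ten straight "yes" answers)
    print(fit(0.03, 10))    # 0.9999999999999994 (ten 15-17 V readings)
    print(fit(1.0, 10))     # 0.0 (an unfalsifiable theory never fits)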
Note the key part that falsificationism plays in figuring out how well a theory positively fits the facts: a theory that predicts "yes or no", or one that predicts "50 V +/- 50 V" (i.e. an unfalsifiable theory), has a permissivity of 100%. No matter how many experiments we perform, 1 - 1^n = 0: the theory fails to fit the facts in just the same sense that, while I might don William Howard Taft's overcoat, it could not justly be said that it "fit" me--or that I fit it.
As to what such a measure of "fitness" says about the truth of a theory, we'll have to venture into the murky swamps of metaphysics, which will be the subject of a future essay--an essay which will also address Timmo's more metaphysical concerns, as well as his query about "parsimony".
 Taken literally, "Axiomatic Foundationalism" is redundant: both terms mean "accepted a priori as true." I use the redundancy to differentiate the direction of deduction. Axiomatic Foundationalism means treating the foundation (the a priori true statements) like mathematical axioms, such as the axioms (premises) of Peano's Arithmetic in part 2. Evidentiary Foundationalism, on the other hand, means treating the foundation like derived theorems, and hypothesizing the axioms from which we could deduce those theorems.
 I apologize in advance if I'm unfairly maligning Popper. I'm a lousy, lazy, and utterly slack scholar. Still, I've seen enough people, far better scholars than me, consider individual falsification crucial to Popper's epistemology that I feel justified in attributing the mistake to Popper himself.
 The second measurement, power analysis, measures how "thoroughly" a theory has been tested.
 I'm vastly oversimplifying here, skipping over volumes of statistical theory. I offer what I think is (given a suitably charitable interpretation) the essential "flavor" without any gross inaccuracy.