The chief question of any moral philosophy is its epistemic basis. How do we know moral statements? This question resolves to asking, what is the foundation of moral philosophy, and what kind of foundation is it?
Briefly, there are two kinds of foundations: axiomatic and hypothetical. An axiomatic foundation is one with premises (or a rule for constructing premises) accepted as axioms, i.e. true by definition, and all theorems, i.e. statements that deductively follow from those axioms according to inference rules, are true. Most (but not all) mathematics uses axiomatic foundations. The axioms of set theory, for example, are true by definition, and all theorems, however surprising or counterintuitive, are true. The "game" of mathematics is to try to derive a lot of "interesting" theorems, especially theorems that explain our experience, with as few axioms as possible. Set theory, on that basis, is wildly successful.
A hypothetical foundation, on the other hand, takes a collection of theorems (and nontheorems), i.e. complex statements, as true (or false) by definition. The project under a hypothetical foundation is to construct the smallest set of hypotheses and inference rules from which we can derive the truth of the foundational theorems (and the falsity of the foundational nontheorems). Natural science is the paradigm of hypothetical foundationalism: statements about experience are taken as true by definition, and our scientific theories are collections of hypotheses from which we can derive those statements of experience.
In an axiomatic foundation, the truth of the theorems is the same "kind" of truth as the axioms: we cannot consistently call a theorem false without rejecting one or more axioms (or inference rules). In a hypothetical foundation, however, hypotheses are never true in the same sense as the foundational theorems: we can reject a hypothesis, regardless of its explanatory power, without rejecting the foundational theorems, especially if we discover a new foundational theorem or nontheorem that the hypothesis contradicts. In an axiomatic system, it is, in a sense, impossible to be mistaken about the truth of a theorem so long as we apply the inference rules accurately. In contrast, even if we apply the rules accurately, we can be mistaken about the truth of a hypothesis.
We have the same choice in moral reasoning: what do we take as our foundation? If we want to use an axiomatic foundation, then as we see in mathematics, we have a lot of choice in our axioms. In mathematics, this choice is unproblematic: we can just say that each choice of axioms is a different branch of mathematics. We really don't care, for example, whether the parallel postulate is "really" true or "really" false; we just call the axiom set where it's true by definition "Euclidean (plane) geometry," and call the axiom sets where it's not true by definition different kinds of "non-Euclidean geometry." In moral philosophy, however, this choice is more problematic: a moral philosophy is, by definition, normative; it should tell us what to actually do, and we can do only one thing at a time. So if I can derive from one set of moral axioms that I should pay my taxes, and I can derive from another set that I should not pay them, I am no better off deciding whether I should or should not pay my taxes than I was before I studied moral philosophy; and I cannot both pay and not pay my taxes. It's pretty clear (at least to me) that trying to base moral philosophy on an axiomatic foundation is a dead end.
Most philosophers (including Nozick) implicitly or explicitly use moral intuition as a hypothetical foundation. They try to set up moral hypotheses that "capture" our moral intuitions; especially, they try to show that some alternative moral theory contradicts our moral intuition, and is therefore false. The problem with this foundation is that moral philosophy is still non-normative: our intuitions themselves are normative; our moral philosophy merely describes, in a more-or-less compact way, our foundational moral intuitions. Moral philosophy, on an intuitive foundation, is descriptive. Indeed it is a rather blatant contradiction to say that moral philosophy X explains and describes some subset of moral intuitions, and therefore should be accepted, and then say that X contradicts certain other moral intuitions, and we should therefore reject those other intuitions. (This fundamental contradiction is so pervasive, and perhaps unavoidable, in academic moral philosophy that I simply abandoned the idea of pursuing philosophy as a career.)
(There is another mostly unrelated problem in using moral intuition as a hypothetical foundation: how robust should our moral theories be to counterfactuals? Should we reject a moral theory because it contradicts our intuition about states of affairs that appear physically impossible? Such is one rebuttal to the Omelas problem: It seems not only physically impossible in this world that the torture and deprivation of a single child was necessary to produce a society where everyone else was happy, but also that the laws of physics could be such that it was necessary while everything else remained the same as this world. The inapplicability of my intuitions to such a wild counterfactual seems completely irrelevant; Omelas is problematic only to the inhabitants of that world; it says nothing at all about this actual world. A similar rebuttal applies to many of the hypotheticals so beloved by moral philosophers, such as the Trolley Problem (parodied masterfully in Can Bad Men Make Good Brains do Bad Things?), which assume perfect knowledge or absolute certainty that is physically impossible.)
The only answer, it seems, is to bite the bullet: moral philosophy is not normative. A person either chooses the moral axioms she likes and acts according to the theorems derivable from those axioms, or examines her moral intuition and tries to construct a philosophy that explains those intuitions. In the second case, moral philosophy can at least provide guidance when one's immediate moral intuition is unclear, or one feels some sort of contradiction or incompatibility between one's moral intuitions or between intuition and desire.
Furthermore, moral intuition is labile in a way that (perceptual) experience, the foundation of natural science, is stable, or at least fundamentally and qualitatively less labile. It is very easy to change one's moral intuitions in a way that it is very difficult to change one's perceptions. Perceptions do change, for a number of reasons, but it's still hard to see a tree in my front yard and not see the trees in my back yard. In contrast, people can go from treating Jewish prisoners in concentration camps as human beings to treating them as worse than animals in as little as a few days, and usually within a month or two (see Becoming Evil by James Waller).
Moral philosophy is a fundamentally dialectical process, a process that can be divorced neither from this real physical world nor from the society that the moral philosopher thinks and lives in. Moral philosophy emerges from the contradictions between the individual and society and reality.
Engaging in that dialectic is my project as a moral philosopher. I don't claim any transcendent or universal truth for utilitarianism. Because of who I am, my biological evolution, the society in which I was raised, how I was brought up as a child, and my experiences as an adult, I have certain intuitions and feelings about good and bad, preferences about how I want to live my own life, and preferences about the society and culture I both happen to live in now and would like to live in tomorrow. Utilitarianism captures a lot of those intuitions, helps me examine those intuitions critically, and gives me guidance about what I want to do when my intuitions seem ambiguous or equivocal. I can ask no more from philosophy.