II. Inconsistencies of Application
[Some months ago I wrote, and began to neglect, a series of posts entitled "The Pathology Problem," about ethics, neurological development, moral realism, and the implications for theism. This post continues that extended essay.]
Thanks to the good folks at Harvard University’s Cognitive Evolution Laboratory, you can take the same Moral Sense Tests that I referenced in discussing a recent study that found a neurological basis for ethical decision making. The first test is ostensibly a measure of one’s feelings of compassion and punishment – how much should someone who behaves negligently or carelessly be personally fined in the event of an accident? I found it a far more interesting test of one’s feelings about personal responsibility. The average financial award for damages is $72,500. My average was $750 (indeed, I awarded damages in only two of the eight scenarios – those where I felt the description indicated carelessness or negligence, as opposed to simple stupidity or understandable ignorance), based on what I assumed the person’s medical care would cost (in retrospect, I would factor in lost wages as well, and I significantly lowballed one scenario). Though the authors take care to state that they are casting no moral judgments by giving you the averages, one cannot help but feel, in my case, like a low-empathy person out of step with the average test-taker.
The second test is far more fascinating. It measures the acceptability of utilitarian judgments, but with twists: the test varies the degree of direct personal responsibility for a death versus a "damned if you do, damned if you don’t" choice; distinguishes scenarios involving one’s own children, fetuses, or strangers; and weighs one’s obligations to complete agreed-upon contracts. In the study referenced in Part I, the experimental subjects – all of whom had damage to a part of the brain that governs empathy – consistently made far more utilitarian judgments. You are asked to judge the response of each scenario’s subject on a seven-point scale from Forbidden to Obligatory, with Permissible as the midpoint. I found that I was far more willing to make strictly utilitarian judgments when the situation involved strangers and no obligation I had taken upon myself. When it came to driving my boat to save five people from a shark while letting one person drown, I came down very close to judging it obligatory to save the five. Given a similar scenario in which I pull a lever that kills one person but saves five, I decided the action was permissible but less obligatory.
However, when the scenario involved one’s own or another’s child, or helpless individuals, I became far less utilitarian. It was, I decided, absolutely forbidden to smother one’s own crying child to save the people hiding with you from enemy soldiers. It was forbidden to abort a two-month-old (or five-month-old) fetus you had agreed to carry to term for a couple unable to conceive. Similarly, in the questions where you agreed to act as someone’s kidney – how, I don’t know; it’s hypothetical – and then decided to renege before the time necessary to save that person’s life was up, the reneging was forbidden. But – and this threw me for a loop – it was more permissible, though still on the "forbidden" side of the scale, for a woman to drive past another person with a severe but not life-threatening injury. The genius of the test is that it doesn’t let you stop and think about the permissibility of an action; once you click your choice, it cycles to the next randomly selected question, thus evaluating your basic moral instincts.
As one might expect from Part I, my instinctual reactions – the impermissibility of harming one’s own children, or allowing them to be harmed, versus letting a stranger die – indicated an empathic link to moral decision making. As with almost all things in human psychology, this instinctual behavior could theoretically be overridden by the application of a learned decision-making schema, such as that imposed by religious education. Such is the genius of the human mind; alone among the animals, we have the power to override our instinctual responses. (Interestingly enough, the ability to apply learned behavior to context-specific situations is apparently no longer thought to be the sole province of the primate brain: recent research shows that canines make context-specific decisions and apply learned responses as well.)
As neuropsychology develops, I predict that we will see the ethical ramifications of our instinctual and learned cognitive schemas begin not just to infringe upon, but actively to supplant, the primacy of philosophy and theology in moral theory. In the next installment, I will explore how currently known mental illnesses with neurological underpinnings impact our classic understanding of moral realism.