Joshua Greene

Resources

  • The New Science of Morality
  • Beyond Point-and-Shoot Morality


Overview: From neural ‘is’ to moral ‘ought’

The extensive quotes below provide an overview of Joshua Greene’s paper ‘From neural ‘is’ to moral ‘ought’: what are the moral implications of neuroscientific moral psychology?’:

  • Like many philosophers, Greene maintains the conventional distinction between ‘is’ and ‘ought’.
  • But unlike many philosophers, he thinks that neuroscientific knowledge does have a bearing on morality, and a profound one at that.

Many moral philosophers regard scientific research as irrelevant to their work because science deals with what is the case, whereas ethics deals with what ought to be. Some ethicists question this is/ought distinction, arguing that science and normative ethics are continuous and that ethics might someday be regarded as a natural social science. I agree with traditional ethicists that there is a sharp and crucial distinction between the ‘is’ of science and the ‘ought’ of ethics, but maintain nonetheless that science, and neuroscience in particular, can have profound ethical implications by providing us with information that will prompt us to re-evaluate our moral values and our conceptions of morality.

Many moral philosophers boast a well cultivated indifference to research in moral psychology. This is regrettable, but not entirely groundless. Philosophers have long recognized that facts concerning how people actually think or act do not imply facts about how people ought to think or act, at least not in any straightforward way. This principle is summarized by the Humean dictum that one can’t derive an ‘ought’ from an ‘is’. In a similar vein, moral philosophers since Moore have taken pains to avoid the ‘naturalistic fallacy’.

I am sceptical of naturalized ethics for the usual Humean and Moorean reasons.

…others view science and normative ethics as continuous and are therefore interested in normative moral theories that resemble or are ‘consilient’ with theories of moral psychology. Their aim is to find theories of right and wrong that in some sense match natural human practice. By contrast, I view science as offering a ‘behind the scenes’ look at human morality. Just as a well-researched biography can, depending on what it reveals, boost or deflate one’s esteem for its subject, the scientific investigation of human morality can help us to understand human moral nature, and in so doing change our opinion of it.

Greene is interested in our moral intuitions:

There is a growing consensus that moral judgements are based largely on intuition — ‘gut feelings’ about what is right or wrong … Sometimes these intuitions conflict, both within and between individuals.

He contrasts two moral dilemmas (both due to Peter Unger):

  1. You are driving along a country road when you hear a plea for help coming from some roadside bushes. You pull over and encounter a man whose legs are covered with blood. The man explains that he has had an accident while hiking and asks you to take him to a nearby hospital. Your initial inclination is to help this man, who will probably lose his leg if he does not get to the hospital soon. However, if you give this man a lift, his blood will ruin the leather upholstery of your car. Is it appropriate for you to leave this man by the side of the road in order to preserve your leather upholstery?

  2. You are at home one day when the mail arrives. You receive a letter from a reputable international aid organization. The letter asks you to make a donation of two hundred dollars to their organization. The letter explains that a two-hundred-dollar donation will allow this organization to provide needed medical attention to some poor people in another part of the world. Is it appropriate for you to not make a donation to this organization in order to save money?

Most people think there is a difference between these scenarios: the driver must give the injured hiker a lift, but it would not be wrong to ignore the request for a donation.

But Greene, like Peter Singer before him, disagrees. There is no ‘good reason’ why ‘up close and personal’ needs should make a stronger moral claim on us than impersonal ones:

Why is there this difference? About thirty years ago, the utilitarian philosopher Singer argued that there is no real moral difference between cases such as these two, and that we in the affluent world ought to be giving far more than we do to help the world’s most unfortunate people. (Singer currently gives about 20% of his annual income to charity.) Many people, when confronted with this issue, assume or insist that there must be ‘some good reason’…

…But maybe this pair of moral intuitions has nothing to do with ‘some good reason’ and everything to do with the way our brains happen to be built. To explore this and related issues, my colleagues and I conducted a brain imaging study in which participants responded to the above moral dilemmas as well as many others. The dilemma with the bleeding hiker is a ‘personal’ moral dilemma… The donation dilemma is an ‘impersonal’ moral dilemma, …

To make a long story short, we found that judgements in response to ‘personal’ moral dilemmas, compared with ‘impersonal’ ones, involved greater activity in brain areas that are associated with emotion and social cognition. Why should this be? An evolutionary perspective is useful here. … our altruistic instincts will reflect the environment in which they evolved rather than our present environment

  • …our ancestors did not evolve in an environment in which total strangers on opposite sides of the world could save each other’s lives

  • …our ancestors did evolve in an environment in which individuals standing face-to-face could save each other’s lives

What does this mean for ethics? Again, we are tempted to assume that there must be ‘some good reason’ … but the evolutionary account … suggests otherwise: we ignore the plight of the world’s poorest people not because we implicitly appreciate the nuanced structure of moral obligation, but because, the way our brains are wired up, needy people who are ‘up close and personal’ push our emotional buttons…

This is just a hypothesis. I do not wish to pretend that … science has all the moral answers. Nor do I believe that normative ethics is on its way to becoming a branch of the natural sciences … Instead, I think that we can respect the distinction between how things are and how things ought to be while acknowledging, as the preceding discussion illustrates, that scientific facts have the potential to influence our moral thinking in a deep way.

Greene leans towards moral anti-realism:

According to ‘moral realism’ there are genuine moral facts, whereas moral anti-realists or moral subjectivists maintain that there are no such facts. Although this debate is unlikely to be resolved any time soon, I believe that neuroscience and related disciplines have the potential to shed light on these matters by helping us to understand our common-sense conceptions of morality. I begin with the assumption (lamentably, not well tested) that many people, probably most people, are moral realists.

He draws an analogy between moral judgement and aesthetic judgement:

Baboons, on the other hand, probably find each other very sexy and take very little interest in the likes of Tom Cruise and Nicole Kidman. Who is right, us or the baboons?

…recent evidence from neuroscience and neighbouring disciplines indicates that moral judgement is often an intuitive, emotional matter.

We have here the beginnings of a debunking explanation of moral realism: we believe in moral realism because moral experience has a perceptual phenomenology, and moral experience has a perceptual phenomenology because natural selection has outfitted us with mechanisms for making intuitive, emotion-based moral judgements, much as it has outfitted us with mechanisms for making intuitive, emotion-based judgements about who among us are the most suitable mates.

Therefore, we can understand our inclination towards moral realism not as an insight into the nature of moral truth, but as a by-product of the efficient cognitive processes we use to make moral decisions.

Others might wonder how one can speak on behalf of moral anti-realism after sketching an argument in favour of increasing aid to the poor. (Brief reply: giving up on moral realism does not mean giving up on moral values. It is one thing to care about the plight of the poor, and another to think that one’s caring is objectively correct.)

But Greene’s aim is not to make a conclusive case against moral realism; rather, he is arguing for a maturation of morality:

However, the point of this brief sketch is not to make a conclusive scientific case against moral realism, but simply to explain how neuroscientific evidence, and scientific evidence more broadly, have the potential to influence the way we understand morality. … Understanding where our moral instincts come from and how they work can… lead us to doubt that our moral convictions stem from perceptions of moral truth… Some might worry that this conclusion, if true, would be very unfortunate:

  • First, it is important to bear in mind that a conclusion’s being unfortunate does not make it false.

  • Second, this conclusion might not be unfortunate at all. A world full of people who regard their moral convictions as reflections of personal values rather than reflections of ‘the objective moral truth’ might be a happier and more peaceful place than the world we currently inhabit.

The maturation of human morality will, in many ways, resemble the maturation of an individual person. As we come to understand ourselves better — who we are, and why we are the way we are — we will inevitably change ourselves in the process.
