Some Good Reason

 

This is the 21st part of the ‘Neural Is to Moral Ought’ series of posts. The series’s title comes from Joshua Greene’s opinion-piece paper

‘From Neural Is To Moral Ought: What are the moral implications of neuroscientific moral psychology?’

Here, I pick through Greene’s paper, responding to extensive quotes from it; these responses refer back to a considerable number of previous parts of the series. His paper divides into three sections, which I will examine in turn:

  1. The ‘is’/‘ought’ distinction
  2. Moral intuition
  3. Moral realism vs relativism

 

The ‘Is’/‘Ought’ Distinction

The paper’s abstract is:

Many moral philosophers regard scientific research as irrelevant to their work because science deals with what is the case, whereas ethics deals with what ought to be. Some ethicists question this is/ought distinction, arguing that science and normative ethics are continuous and that ethics might someday be regarded as a natural social science. I agree with traditional ethicists that there is a sharp and crucial distinction between the ‘is’ of science and the ‘ought’ of ethics, but maintain nonetheless that science, and neuroscience in particular, can have profound ethical implications by providing us with information that will prompt us to re-evaluate our moral values and our conceptions of morality.

and the body of the paper then starts:

Many moral philosophers boast a well cultivated indifference to research in moral psychology. This is regrettable, but not entirely groundless. Philosophers have long recognized that facts concerning how people actually think or act do not imply facts about how people ought to think or act, at least not in any straightforward way. This principle is summarized by the Humean dictum that one can’t derive an ‘ought’ from an ‘is’. In a similar vein, moral philosophers since Moore have taken pains to avoid the ‘naturalistic fallacy’, the mistake of identifying that which is natural with that which is right or good (or, more broadly, the mistake of identifying moral properties with natural properties).

This naturalistic-fallacy mistake was committed by the now-discredited ‘Social Darwinists’, who aimed to ground moral philosophy in evolutionary principles. But:

.. the idea that principles of natural science might provide a foundation for normative ethics has won renewed favour in recent years. Some friends of ‘naturalized ethics’ argue, contra Hume and Moore, that the doctrine of the naturalistic fallacy is itself a fallacy, and that facts about right and wrong are, in principle at least, as amenable to scientific discovery as any others.

Only to a certain extent, I would say. It is true that the ‘ought’ is not logically bound to the ‘is’. We are free to claim that anything ought to be done. But ‘ought’ is substantially restricted by ‘is’. Moral theories cannot require us to do things which are outside of our physical control. ‘This is how we ought to think’ is constrained by ‘This is how we think’. For Greene,

… I am sceptical of naturalized ethics for the usual Humean and Moorean reasons.

Continuing, with reference to William Casebeer’s opinion piece in the same journal issue:

in my opinion their theories do not adequately meet them. Casebeer, for example, examines recent work in neuroscientific moral psychology and finds that actual moral decision-making looks more like what Aristotle recommends and less like what Kant and Mill recommend. From this he concludes that the available neuroscientific evidence counts against the moral theories of Kant and Mill, and in favour of Aristotle’s. This strikes me as a non sequitur. How do we go from ‘This is how we think’ to ‘This is how we ought to think’? Kant argued that our actions should exhibit a kind of universalizability that is grounded in respect for other people as autonomous rational agents. Mill argued that we should act so as to produce the greatest sum of happiness. So long as people are capable of taking Kant’s or Mill’s advice, how does it follow from neuroscientific data — indeed, how could it follow from such data — that people ought to ignore Kant’s and Mill’s recommendations in favour of Aristotle’s? In other words, how does it follow from the proposition that Aristotelian moral thought is more natural than Kant’s or Mill’s that Aristotle’s is better?

The ‘Neural Is to Moral Ought’ series started with an examination of (Mill’s) Utilitarianism, (Kant’s) Deontological ethics and (Aristotelian) Virtue Ethics in turn. All three approaches have their merits and deficiencies. Of the three, I am disinclined towards the dogmatism of Deontological ethics and particularly inclined towards Virtue Ethics because it accounts for moral growth. The latter is more ‘natural’ because it is in keeping with how our brains physically learn, rather than treating us as idealized reasoners or rule-followers.

Whereas I am sceptical of attempts to derive moral principles from scientific facts, I agree with the proponents of naturalized ethics that scientific facts can have profound moral implications, and that moral philosophers have paid too little attention to relevant work in the natural sciences. My understanding of the relationship between science and normative ethics is, however, different from that of naturalized ethicists. Casebeer and others view science and normative ethics as continuous and are therefore interested in normative moral theories that resemble or are ‘consilient’ with theories of moral psychology. Their aim is to find theories of right and wrong that in some sense match natural human practice. By contrast, I view science as offering a ‘behind the scenes’ look at human morality. Just as a well-researched biography can, depending on what it reveals, boost or deflate one’s esteem for its subject, the scientific investigation of human morality can help us to understand human moral nature, and in so doing change our opinion of it.

But this is too vague. It says virtually nothing. Greene suggests that the implications might be profound but gives no idea of how things might actually look ‘behind the scenes’.

Let’s take a step back to ask: what is the purpose of morality? Ethics is about determining how we ought to behave, but to answer that requires us to decide upon the purpose of human existence. Such metaphysical meaning has proved elusive except for religious communities. Without any divine purpose, we are left to decide meaning for ourselves, and the issue then is that our neighbour may find a different meaning, which will then determine different behaviour. The conclusion is that the purpose of morality is the balancing of the wants of others against those of ourselves. But this requires us to consider:

  1. What do we want?
  2. How can we understand the wants of others?
  3. How can we cognitively decide?

All three considerations are ultimately grounded in the physics of our brains:

  1. We are free to want whatever we want, but we are all physically very similar so it should come as no surprise that we will have similar wants (food, water, shelter, companionship…).
  2. We need a ‘theory of mind’ (second-order intentionality) in order to understand that others may have wants of their own. We need an understanding of ‘reputation’ (third-order intentionality) to want to moderate our behaviour.
  3. We need a cognitive ability to deliberate in order to make moral choices (in short, to be able to make rational decisions).

(Even the religious opt-out eventually leads us back to the physical brain – how people learn, know and believe is rooted in its physical workings.)

In principle there is no connection between ‘is’ and ‘ought’, and a philosopher can propose any moral theory. But when they do, others provide counter-examples which lead the theory to prescribe absurd responses. All too often, the difficulty lies not in what should be done in practice but in trying to codify the moral theory, and its proponents end up modifying their theory rather than their action!

What if we try to combine the best elements of the three main moral theories (Utilitarianism, Deontological ethics and Virtue Ethics) in order to provide practical moral guidance? Such a synthesis was presented earlier in the series. Ignoring the details here, an extremely brief summary is:

  • We imagine the consequences of potential actions in terms of their effect on the collective well-being of all.
  • In the early stages of growth, we respond with the application of (learnt) simple rules.
  • The less clearly those rules fit the particular situation, the less confidence we have in them, and the more conscious effort we apply to assessing consequences.
  • This provides us with an ability to respond both to the ‘simple’ moral problems quickly and efficiently and to complex problems with considerable attention.
  • We gradually develop more subtle sub-rules that sit upon the basic rules and we learn to identify moral situations and then apply the rules and sub-rules with greater accuracy and speed. This is moral growth.

The resulting ‘mechanistic’ account of moral reasoning is remarkably similar to the ‘hierarchy of predictors’ (‘predictive brain’, ‘variational free energy’) theory of what the brain is doing generally. So, what the brain is doing when there is moral deliberation is basically the same as when there is non-moral deliberation. There is nothing particularly special about moral thinking.
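To make this mechanistic account a little more concrete, here is a minimal toy sketch in Python of the confidence-gated escalation just described. It is my own illustration – the function names (rule_based_response, deliberate, moral_decision), the matching scores and the confidence threshold are all assumptions for the example, not anything taken from Greene or from earlier parts of the series.

```python
# Toy sketch of confidence-gated moral deliberation (illustrative only).
# Learnt low-level rules answer familiar situations quickly; when no rule
# fits with enough confidence, control passes upwards to slower, conscious
# assessment of consequences.

def rule_based_response(situation, rules):
    """Return (response, confidence) from the best-matching learnt rule."""
    best = max(rules, key=lambda rule: rule["match"](situation))
    return best["response"], best["match"](situation)

def deliberate(situation):
    """Stand-in for slow, effortful weighing of consequences for collective well-being."""
    return f"consciously deliberated response to {situation!r}"

def moral_decision(situation, rules, confidence_threshold=0.8):
    response, confidence = rule_based_response(situation, rules)
    if confidence >= confidence_threshold:
        return response              # fast, intuitive, 'lower-level' answer
    return deliberate(situation)     # slow, attentive, 'higher-level' answer

# Hypothetical learnt rules: a match score (0..1) and a habitual response.
rules = [
    {"match": lambda s: 1.0 if "injured person in front of you" in s else 0.0,
     "response": "help immediately"},
    {"match": lambda s: 0.3 if "written appeal for a donation" in s else 0.0,
     "response": "set it aside"},
]

print(moral_decision("injured person in front of you", rules))  # rule fires confidently
print(moral_decision("written appeal for a donation", rules))   # low confidence: deliberate
```

Moral growth, on this picture, corresponds to accumulating more subtle sub-rules and better-tuned match functions, so that ever more situations can be handled confidently and quickly at the lower levels.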

 

Moral Intuition

Greene acknowledges that judgements are determined by means other than just ‘Pure Reason’:

There is a growing consensus that moral judgements are based largely on intuition — ‘gut feelings’ about what is right or wrong in particular cases. Sometimes these intuitions conflict, both within and between individuals. Are all moral intuitions equally worthy of our allegiance, or are some more reliable than others? Our answers to this question will probably be affected by an improved understanding of where our intuitions come from, both in terms of their proximate psychological/neural bases and their evolutionary histories.

He contrasts two moral dilemmas (both due to Peter Unger). First, Case 1:

You are driving along a country road when you hear a plea for help coming from some roadside bushes. You pull over and encounter a man whose legs are covered with blood. The man explains that he has had an accident while hiking and asks you to take him to a nearby hospital. Your initial inclination is to help this man, who will probably lose his leg if he does not get to the hospital soon. However, if you give this man a lift, his blood will ruin the leather upholstery of your car. Is it appropriate for you to leave this man by the side of the road in order to preserve your leather upholstery? Most people say that it would be seriously wrong to abandon this man out of concern for one’s car seats.

And then Case 2:

You are at home one day when the mail arrives. You receive a letter from a reputable international aid organization. The letter asks you to make a donation of two hundred dollars to their organization. The letter explains that a two-hundred-dollar donation will allow this organization to provide needed medical attention to some poor people in another part of the world. Is it appropriate for you to not make a donation to this organization in order to save money? Most people say that it would not be wrong to refrain from making a donation in this case.

Now, most people think there is a difference between these scenarios:

  • the driver must give the injured hiker a lift, but
  • it would not be wrong to ignore the request for a donation.

In fact, we can imagine doing a Utilitarian calculation, trading off the benefits between the two situations, and concluding that it is more Utilitarian to donate the money it would cost to repair the leather upholstery to charity instead of helping the hiker. But we are then more likely to actually help the hiker anyway and to refine the Utilitarian calculus somehow. We override our codified system because it feels like there is ‘some good reason’ why the decision is right. But Greene, like Peter Singer before him, thinks that, whatever that reason is, it is not a moral reason.
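To illustrate the kind of naive calculation being imagined, here is a small worked comparison. All of the numbers – the dollar amounts, the charity’s effectiveness and the ‘units of well-being’ – are my own hypothetical assumptions, not figures from Greene, Unger or Singer.

```python
# Naive utilitarian comparison of the two cases (all numbers are made up).

upholstery_repair_cost = 200          # dollars lost if we drive the bleeding hiker
donation = 200                        # the same outlay sent to the aid organization
assert upholstery_repair_cost == donation  # the financial cost is the same in both cases

well_being_from_saving_leg = 1.0      # arbitrary units of well-being
people_treated_by_donation = 4        # hypothetical effectiveness of the charity
well_being_per_person_treated = 0.5

utility_help_hiker = well_being_from_saving_leg
utility_donate = people_treated_by_donation * well_being_per_person_treated

# With equal dollar costs, a pure utilitarian calculus simply picks the larger
# sum of well-being...
better_option = "donate" if utility_donate > utility_help_hiker else "help the hiker"
print(better_option)   # -> 'donate' with these made-up numbers
# ...and yet most of us would help the hiker and ignore the letter anyway.
```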

And yet this case and the previous one are similar. In both cases, one has the option to give someone much needed medical attention at a relatively modest financial cost. And yet, the person who fails to help in the first case is a moral monster, whereas the person who fails to help in the second case is morally unexceptional. Why is there this difference? About thirty years ago, the utilitarian philosopher Singer argued that there is no real moral difference between cases such as these two, and that we in the affluent world ought to be giving far more than we do to help the world’s most unfortunate people. (Singer currently gives about 20% of his annual income to charity.) Many people, when confronted with this issue, assume or insist that there must be ‘some good reason’ for why it is alright to ignore the severe needs of unfortunate people in far off countries, but deeply wrong to ignore the needs of someone like the unfortunate hiker in the first story. (Indeed, you might be coming up with reasons of your own right now.) Maybe there is ‘some good reason’ for why it is okay to spend money on sushi and power windows while millions who could be saved die of hunger and treatable illnesses. But maybe this pair of moral intuitions has nothing to do with ‘some good reason’ and everything to do with the way our brains happen to be built.

Greene identifies the difference as being between ‘personal’ and ‘impersonal’ situations:

The dilemma with the bleeding hiker is a ‘personal’ moral dilemma, in which the  moral violation in question occurs in an ‘up-close-and-personal’ manner. The donation dilemma is an ‘impersonal’ moral dilemma, in which the moral violation in question does not have this feature. To make a long story short, we found that judgements in response to ‘personal’ moral dilemmas, compared with ‘impersonal’ ones, involved greater activity in brain areas that are associated with emotion and social cognition. Why should this be? An evolutionary perspective is useful here. Over the last four decades, it has become clear that natural selection can favour altruistic instincts under the right conditions, and many believe that this is how human altruism came to be. If that is right, then our altruistic instincts will reflect the environment in which they evolved rather than our present environment. With this in mind, consider that our ancestors did not evolve in an environment in which total strangers on opposite sides of the world could save each others’ lives by making relatively modest material sacrifices. Consider also that our ancestors did evolve in an environment in which individuals standing face-to-face could save each others’ lives, sometimes only through considerable personal sacrifice. Given all of this, it makes sense that we would have evolved altruistic instincts that direct us to help others in dire need, but mostly when the ones in need are presented in an ‘up-close-and-personal’ way. What does this mean for ethics? Again, we are tempted to assume that there must be ‘some good reason’ why it is monstrous to ignore the needs of someone like the bleeding hiker, but perfectly acceptable to spend our money on unnecessary luxuries while millions starve and die of preventable diseases. Maybe there is ‘some good reason’ for this pair of attitudes, but the evolutionary account given above suggests otherwise: we ignore the plight of the world’s poorest people not because we implicitly appreciate the nuanced structure of moral obligation, but because, the way our brains are wired up, needy people who are ‘up close and personal’ push our emotional buttons, whereas those who are out of sight languish out of mind.

This is just a hypothesis. I do not wish to pretend that this case is closed or, more generally, that science has all the moral answers. Nor do I believe that normative ethics is on its way to becoming a branch of the natural sciences, with the ‘is’ of science and the ‘ought’ of morality gradually melding together. Instead, I think that we can respect the distinction between how things are and how things ought to be while acknowledging, as the preceding discussion illustrates, that scientific facts have the potential to influence our moral thinking in a deep way.

But again, this is all rather vague.

Relating this to what I have previously discussed…

  • The ‘hierarchy of predictors’ model describes the way in which many levels compete with one another to influence behaviour (spreading from reflex to rational, via sensorimotor, emotional, subconscious and conscious levels). Lower levels will dominate action in familiar moral situations. But in unfamiliar circumstances, or when the problem consists of two familiar reactions with contradictory actions, the lower levels will be less confident about their response and control will effectively be passed upwards for (slower) rational judgement. In a decision between helping the bleeding hiker and donating to charity, rational deliberation gets shut out by the lower-level emotional and intuitive response.
  • Patricia Churchland shows that our caring originates in the brain. For example, a greater density of oxytocin receptors in the nucleus accumbens and a greater density of vasopressin receptors in the ventral pallidum (both nuclei are part of the basal ganglia at the base of the forebrain) make the significant difference in behaviour between the (monogamous) Prairie Vole and the otherwise-similar Montane Vole. The ‘up-close-and-personal’ proximity effect of alloparenting expands this caring beyond the family to the ‘In-Group’. But oxytocin is not a magic bullet: it improves empathy towards the In-Group but actually works against Out-Group members.

The physical construction of the brain seems to provide one such ‘good reason’ why immediate, ‘up-close-and-personal’ situations elicit a moral response in the way that slowly-rationalized situations do not. (This is why worldwide charities frequently appeal to us not by presenting facts about the suffering of many, many thousands, but by presenting an image of a single suffering individual, furnished with a name and a story of misfortune – making the problem ‘up-close-and-personal’.)

If we truly do want to have a morality that does not prioritize those ‘up close’, then we need to provide some compensation mechanisms to our decision making – consciously equalizing out our emotions. But our emotions can play an important positive role. Empathy is a very significant factor in creating habits that underpin the balancing of the wants of others against the wants of oneself. Yes, we must learn the virtue of balancing others against ourselves, but we must also learn the virtue of balancing reason against our emotions.

 

Moral Realism

Greene then shifts attention to Moral Realism:

According to ‘moral realism’ there are genuine moral facts, whereas moral anti-realists or moral subjectivists maintain that there are no such facts. Although this debate is unlikely to be resolved any time soon, I believe that neuroscience and related disciplines have the potential to shed light on these matters by helping us to understand our common-sense conceptions of morality. I begin with the assumption (lamentably, not well tested) that many people, probably most people, are moral realists. That is, they believe that some things really are right or wrong, independent of what any particular person or group thinks about it. For example, if you were to turn the corner and find a group of wayward youths torturing a stray cat, you might say to yourself something like, “That’s wrong!”, and in saying this you would mean not merely that you are opposed to such behaviour, or that some group to which you belong is opposed to it, but rather that such behaviour is wrong in and of itself, regardless of what anyone happens to think about it. In other words, you take it that there is a wrongness inherent in such acts that you can perceive, but that exists independently of your moral beliefs and values or those of any particular culture.

I think torturing cats is not just wrong but universally wrong. Universally wrong means that it is wrong in all societies. Across societies, we understand sufficiently the same things by ‘wrongness’ and ‘morality’ that, when presented with a clear (black-and-white) moral case, we can all agree on whether that case is right or wrong. It is not that there is some absolute truth of the matter, just that similar agents’ understanding of common concepts leads to common knowledge. Universally wrong is not the same as absolutely (‘real-ly’) wrong.

Surveying cultures around the world, across all civilisations, we find that they have surprisingly similar moralities. It is not that one society accepts stealing but not murder and another accepts murder but not stealing! The differences are predominantly down to how liberal or conservative a society is. Liberal societies have a shorter list of vices than conservative ones. For example, the way an individual dresses is seen as a matter of aesthetics or custom in liberal (e.g. the U.S.) societies but as a matter of morality in conservative (e.g. Muslim) societies.

There are clear cases of what is right and wrong that apply across most if not all human civilizations. It is in the less clear-cut cases that they differ and hence moral problems arise.

This realist conception of morality contrasts with familiar anti-realist conceptions of beauty and other experiential qualities. When gazing upon a dazzling sunset, we might feel as if we are experiencing a beauty that is inherent in the evening sky, but many people acknowledge that such beauty, rather than being in the sky, is ultimately ‘in the eye of the beholder’. Likewise for matters of sexual attraction. You find your favourite movie star sexy, but take no such interest in baboons. Baboons, on the other hand, probably find each other very sexy and take very little interest in the likes of Tom Cruise and Nicole Kidman. Who is right, us or the baboons? Many of us would plausibly insist that there is simply no fact of the matter. Although sexiness might seem to be a mind-independent property of certain individuals, it is ultimately in the eye (that is, the mind) of the beholder.

I have previously looked at how aesthetic and moral knowledge are just particular forms of knowledge. Moral knowledge is neither unique nor totally separate from the physical world of what ‘is’. Aesthetics is the same; it depends on things like our (neural) ability to perceive and on our emotions (such as disgust).

The big meta-ethical question, then, might be posed as follows: are the moral truths to which we subscribe really full-blown truths, mind-independent facts about the nature of moral reality, or are they, like sexiness, in the mind of the beholder?

Elsewhere, I have examined how truth is ‘in the mind of the beholder’ – that knowledge (crudely ‘facts’) grows within our brains, building upon earlier ‘facts’ such that it both corresponds with our personal experience and coheres with what else we know. The apparent universality of ‘facts’ (including moral knowledge) arises because we grow up:

  • in the same (or very similar) environment as others, and
  • in a shared culture, meaning that we (more explicitly) learn the same as others.

For our ‘rational’ upper levels, our lower levels (including our emotional urges) are just part of the environment in which we grow up (a very immediate part, mind you).

One way to try to answer this question is to examine what is in the minds of the relevant beholders. Understanding how we make moral judgements might help us to determine whether our judgements are perceptions of external truths or projections of internal attitudes. More specifically, we might ask whether the appearance of moral truth can be explained in a way that does not require the reality of moral truth. As noted above, recent evidence from neuroscience and neighbouring disciplines indicates that moral judgement is often an intuitive, emotional matter. Although many moral judgements are difficult, much moral judgement is accomplished in an intuitive, effortless way.

In my worldview, the appearance of moral truth does not require the reality of moral truth!

With the ‘hierarchy of predictors’ model of the brain, it should be expected that moral judgements, like judgements of other forms of knowledge, are typically accomplished in an intuitive, effortless way – by the lower levels of the hierarchy. It is what we do with the exceptional, difficult decisions that is interesting – those decisions that are propagated up to the higher levels that have our conscious attention.

We are limited by the specifics of the physiology and neurology of the instruments that are our senses (although we can now build external instruments to extend them). We cannot like or dislike what we cannot sense.

An interesting feature of many intuitive, effortless cognitive processes is that they are accompanied by a perceptual phenomenology. For example, humans can effortlessly determine whether a given face is male or female without any knowledge of how such judgements are made. When you look at someone, you have no experience of working out whether that person is male or female. You just see that person’s maleness or femaleness. By contrast, you do not look at a star in the sky and see that it is receding. One can imagine creatures that automatically process spectroscopic redshifts, but as humans we do not.

All of this makes sense from an evolutionary point of view. We have evolved mechanisms for making quick, emotion-based social judgements, for ‘seeing’ rightness and wrongness, because our intensely social lives favour such capacities, but there was little selective pressure on our ancestors to know about the movements of distant stars. We have here the beginnings of a debunking explanation of moral realism: we believe in moral realism because moral experience has a perceptual phenomenology, and moral experience has a perceptual phenomenology because natural selection has outfitted us with mechanisms for making intuitive, emotion-based moral judgements, much as it has outfitted us with mechanisms for making intuitive, emotion-based judgements about who among us are the most suitable mates.

Or much as natural selection has outfitted us with mechanisms for making intuitive, emotion-based judgements about anything.

Therefore, we can understand our inclination towards moral realism not as an insight into the nature of moral truth, but as a by-product of the efficient cognitive processes we use to make moral decisions. According to this view, moral realism is akin to naive realism about sexiness, like making the understandable mistake of thinking that Tom Cruise is objectively sexier than his baboon counterparts.

Both intuition and emotion play an important part in moral deliberation, just as they do in other forms of deliberation.

So far, Greene has been making rather vague comments. But then he makes one that is acute:

Others might wonder how one can speak on behalf of moral anti-realism after sketching an argument in favour of increasing aid to the poor

to which his reply is

giving up on moral realism does not mean giving up on moral values. It is one thing to care about the plight of the poor, and another to think that one’s caring is objectively correct.

I have emphasized the importance of caring in creating a moral society and looked at its biological foundations. It is largely true that we act morally because we care.

… Understanding where our moral instincts come from and how they work can, I argue, lead us to doubt that our moral convictions stem from perceptions of moral truth rather than projections of moral attitudes.

A case has been presented of how our neurology promotes caring that extends, via oxytocin, alloparenting, group behaviour and institutional trust, to very large societies in which we care for complete strangers. This is how our moral convictions arise. Our morals are contingent on culture and environment, not on absolute moral truths. The moral instincts that make us help the injured hiker (emotionally, quickly) and ignore the appeal through the letterbox (deliberatively, slowly, consciously) are built upon the ‘up-close-and-personal’ origins of our caring. It could not be otherwise. Our logical/rational/deliberative higher levels of cognition are built (evolved) upon lower, quicker, instinctive levels.

Some might worry that this conclusion, if true, would be very unfortunate.

First, it is important to bear in mind that a conclusion’s being unfortunate does not make it false.

This is true for moral determinism as well as for moral instincts (our instincts tell us that we are free, but the scientific evidence points towards determinism). The unfortunate conclusion all too often drawn from determinism is that we lack free will and therefore cannot punish transgressors for actions they could not have avoided – and hence that the moral order dissolves.

Second, this conclusion might not be unfortunate at all.

I have argued elsewhere that we might not have ‘free will’ as conventionally understood, but that we still have freedom and can still be held responsible. The moral order can be maintained. Furthermore, recognizing that some individuals do not have the control they are traditionally purported to have, we will be less retributive and more prepared to intervene in order to design a society that further improves well-being (yes, in a scientific way).

 
