From Neural ‘Is’ to Moral ‘Ought’

This talk takes its inspiration from Joshua Greene’s ‘From neural ‘is’ to moral ‘ought’: what are the moral implications of neuroscientific moral psychology?’

He says:

“Many moral philosophers regard scientific research as irrelevant to their work because science deals with what is the case, whereas ethics deals with what ought to be.”

but Greene (director of Harvard’s ‘Moral Cognition Lab’) continues:

“I maintain that neuroscience can have profound ethical implications by providing us with information that will prompt us to re-evaluate our moral values and our conceptions of morality.”

So: what are those profound implications?

In this talk I explore various ideas to try to present a neuroscientific perspective on morality.

From ‘Is’ to ‘Ought’

We’ll start with some brief background to ethics (the ‘moral ought’ of the title) and then turn to the ‘is to ought’ part. ‘Normative ethics’ concerns the right (and wrong) ways in which people should act, in contrast to ‘descriptive ethics’ which, not surprisingly, simply describes the moral beliefs and practices people actually hold.

There are 3 major moral theories within normative ethics:

  • Deontology which emphasizes duties and the adherence to rules and is frequently associated with Immanuel Kant,
  • Consequentialism which emphasizes the consequences of an action in determining what should be done and is frequently associated with Jeremy Bentham’s and John Stuart Mill’s Utilitarianism that aims for “the greatest happiness of the greatest number”,
  • and the less familiar Virtue Ethics which emphasizes the goodness (good character) of the agent performing the action rather than the act itself. Virtue ethics is frequently associated with Aristotle but various other philosophers have produced lists of virtues that define a good person. For example, Plato defined the ‘4 cardinal virtues’ (Prudence, Justice, Courage and Temperance) and Aquinas defined the ‘3 theological virtues’ (Faith, Hope and Charity). Lawrence Kohlberg (whom we will hear of later on) criticised Virtue Ethics on the grounds that everyone can have their own ‘bag of virtues’ but there is no guidance on how to choose those virtues.

Whilst it is true that:

 “… science deals with what is the case, whereas ethics deals with what ought to be.”

… it is technically possible to get from an ‘is’ to an ‘ought’. We might assert the fact that ‘murder decreases happiness’ (an ‘is’), perhaps established through some neuroscientific way of measuring happiness. But it is not logically valid to derive the imperative ‘do not murder’ (an ‘ought’) from this fact alone. However, if the inference is predicated on the goal of ‘maximization of happiness’, it does hold:

if goal then { if fact then imperative }

‘if our goal is to achieve the maximum happiness and murder decreases happiness then do not murder’
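This conditional pattern can be sketched in code. The sketch below is a toy illustration of the logic only; the goal, fact and imperative strings are my own, not Greene’s:

```python
# Toy sketch: an 'is' alone yields no 'ought', but once a goal is assumed,
# the fact licenses the imperative. All names here are illustrative.
def derive_imperative(goal, fact):
    """Return an imperative only when a goal is in place to license it."""
    if goal is None:
        return None  # a bare fact implies nothing about what we ought to do
    if goal == "maximize happiness" and fact == "murder decreases happiness":
        return "do not murder"
    return None

# With the goal assumed, the 'ought' follows:
print(derive_imperative("maximize happiness", "murder decreases happiness"))
# Without the goal, nothing follows:
print(derive_imperative(None, "murder decreases happiness"))
```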

But this just shifts the problem one step back from specifics to wider philosophical questions. The issue is then:

  • What should our goal be?
  • What is the purpose of morality?
  • What is the purpose of life, mankind and the universe?

And there is the issue:

  • Who gets to decide?

The Cognitive Essence of Morality

For me, if I get to decide the purpose of morality, I think it comes down to this – everyone can decide what their own goals are, and the essence of morality is then:

The (deliberative) balancing of the wants (goals) of oneself with those of (sentient) others.

It is about self-regulation.

Immediately, this casts the problem into cognitive terms:

  1. In order to balance goals, we need a faculty of reason.
  2. In order to understand the concepts of ‘self’ and ‘others’ we need a ‘theory of mind’.
  3. We feel that we can choose our wants but they are ultimately physiological i.e. neurological.
  4. (The issue of identifying sentience i.e. consciousness is not considered here.)

To be moral requires intelligence, a ‘theory of mind’ and maybe other things.

Iterated Knowings

What is ‘theory of mind’?

It is an ability to understand that others can know things differently from oneself. We must understand this if we are to balance their wants against ours.

The Sally Anne test

The classic test for a theory of mind is the ‘Sally Anne Test’ which presents a story:

  • Sally has a marble which she puts into a basket. She then goes out for a walk. While she is away, Anne takes the marble from the basket and puts it into a box. Sally then comes back.

The question is then:

Where will Sally look for her marble?

If we think Sally will look for her marble in the box then we have no theory of mind: we cannot separate what Sally knows (she last saw the marble in the basket) from what we know.

This ability fits neatly into a scale of ‘Iterated Knowings’ set out originally by James Cargile in 1970 but prominently discussed by Daniel Dennett and Robin Dunbar.

The scale starts at the zeroth level: some information (‘x’). Information relates something to something else: if ‘some input’, then ‘some output’. Information can be encapsulated by rules.

At the first level, we have beliefs (‘I know x’) which we recognise can be different from reality (‘x’).

At the second level, we understand theory of mind: ‘I know you know x’. Knowing it is possible for others to not know things, it is possible to deceive them: ‘I know that Sally will not know the marble is in the box’.

At the third level, there is ‘communicative intent’: ‘I know you know I know x’. I can communicate information to you and know that you have received it. I am able to understand that you can understand that you have been deceived by me – I can understand reputation.

At the fourth level, it is possible to understand roles and narrative: ‘I know you know I know you know x’ where ‘you’ are an author, for example. In the 1996 film production of ‘Hamlet’, Kenneth Branagh’s Hamlet kills Richard Briers’s Polonius. A failure to understand roles would mean that we would think that Branagh has killed Briers.

At the fifth level, there is an awareness of roles and narratives that are distinct from the role or narrative. There is an awareness that others have their own narratives that are different from one’s own, even though the experiences are similar – there can be other cultures, myths, religions and worldviews. Many adults do not attain this level.

At each level, there is an awareness of the phenomenon at the lower level that is distinct from the phenomenon itself. It is possible to understand sentences at seemingly higher levels, for example:

“I know that Shakespeare wants us to believe that Iago wants Othello to believe that Desdemona loves Cassio”

but this is still really only a fourth-level phenomenon – that of understanding roles.

These levels of iterated knowings are also referred to as orders of intentionality.
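These iterated sentences can be generated mechanically. The sketch below is my own illustration, alternating ‘I’ and ‘you’ as in the examples above, with the outermost knower always ‘I’:

```python
def iterated_knowing(order, fact="x"):
    """Build the sentence for a given order of intentionality.
    Order 0 is the bare fact 'x'; each higher order nests another
    'I know' / 'you know', with 'I' always outermost."""
    sentence = fact
    for i in range(order):
        # parity of the remaining depth decides who the knower is
        knower = "I know" if (order - i) % 2 == 1 else "you know"
        sentence = f"{knower} {sentence}"
    return sentence

print(iterated_knowing(2))  # second order: theory of mind
print(iterated_knowing(3))  # third order: communicative intent
```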

Cognitive Theories of Moral Development

In order to:

balance the wants of oneself with those of others

we need rational intelligence and a theory of mind as already stated. But we also need an ability to work out what the ‘other’ wants. Judging this from appearances requires ‘social cognition’ – an ability to read faces and body language, to understand what the other is feeling.

But there is another ingredient required for us to actually act morally – for us to care about the other.

By my definition, a moral agent tries to understand what the other wants – tries to apply the ‘Platinum Rule’:

‘Do unto others as they would want to be done by’

as opposed to the more common baseline of moral behaviour, the ‘Golden Rule’:

‘Do unto others as you would want to be done by.’

Having said that care is required, it is possible to manage without it by upping the order of intentionality.

A third-order agent understands reputation. It may not care about the other but it (sociopathically) balances its own wants against the other’s in order to maintain a reputation that helps itself in the long term.

It is also possible to manage without social cognition through communication. A third-order agent may not be able to understand what you want but it may be able to ask you.

And finally, it is also possible to manage without either social cognition or a caring nature – by relying on communication and reputation.

We have here the basis of a theory of moral development in which there is increasing:

  • intelligence,
  • level of intentionality,
  • social cognition,
  • and care,

and in which we are better with more of each characteristic. We could say that these are the cognitive moral virtues: intelligence, intentionality, social cognition and care!

Note that fifth-order intentionality is a level which many adults do not attain. All too often, moral conflict arises not because the other’s opinion differs from one’s own but because of an inability to understand that the other has a different worldview into which they fit their knowledge. As Jacques Rancière has said:

“Disagreement is not the conflict between one who says white and another who says black. Rather, it is the conflict between one who says white and another who also says white but does not understand the same thing by it.”

A rather more famous theory of moral development based upon a theory of cognitive development is Lawrence Kohlberg’s, which builds upon Jean Piaget’s. It too has a 6-point scale, with the sixth level being one which many do not attain:

  1. Infantile obedience: ‘How can I avoid punishment?’
  2. Childish self-interest: ‘What’s in it for me?’
  3. Adolescent group conformity (norms)
  4. Adult conformity to law and order
  5. Social contract / human rights
  6. Universal ethical principles / conscience

I will say no more about this other than to point out some similarity between my ‘Iterated Knowings’ theory and Kohlberg’s: the former’s characteristics of rules, deception, reputation and roles map approximately onto Kohlberg’s first 4 levels.

Up Close and Personal

Returning to Joshua Greene’s ‘From neural ‘is’ to moral ‘ought’’ paper, a significant part is devoted to two scenarios considered by Peter Unger:


You receive a letter asking for a donation of $200 from an international aid charity in order to save a number of lives. Should you make this donation?

Joshua Greene: ‘From neural ‘is’ to moral ‘ought’: what are the moral implications of neuroscientific moral psychology?’ – Nature reviews Neuroscience 4(10) pp.846-9 (2003)

The aid agency letter


You are driving in your car when you see a hitchhiker by the roadside bleeding badly. Should you take him to hospital even though this means his blood will ruin the leather upholstery of your car which will cost $200 to repair?


Should you take the injured hitchhiker to hospital?

The vast majority of us would not look badly upon anyone who did not donate the $200 but would consider the person who left the hitchhiker behind to die to be a moral monster.

But given $200 and a choice between the two scenarios, a Utilitarian should answer the aid agency’s letter rather than help the hitch-hiker, since the donation saves more lives.

Greene says that we think there is

 ‘some good reason’

why our moral intuitions favour action when the choice is

‘up close and personal’

rather than far removed. He points out that the moral philosopher Peter Singer would maintain that there is simply no good reason why we should.

I have proposed social cognition and caring for others as some of the essential characteristics of morality. These suggest our preference for the ‘up close and personal’. We care because we see.

I speculate that our caring stems from our need to distinguish between what is ourselves and what is not. In the rubber hand illusion, our eyes deceive us into thinking a rubber hand is actually our own; momentarily we feel pain when the hand is hit, before we work out that our sense of touch is not agreeing with our eyes. We also unconsciously mimic others – when seeing someone with crossed arms, we may cross our own to reduce the discrepancy between our sense of proprioception and what we see. (This is a weak compulsion; yawn contagion is much stronger – we cannot help ourselves.)

This makes a connection between seeing others in pain and having a deep sense of where it would hurt on ourselves. We wince at the sight of others being hurt, but this soon disappears as the recognition that ‘it is not me’ takes over. At least there is this initial feeling of pain at the sight of others in pain – the origins of empathy. (Some people claim they literally feel the pain of others – that this sense does not quickly dissipate. This condition is called ‘mirror-touch synaesthesia’.)

Oxytocin and Vasopressin

Pair-bonded prairie voles

So I have provided a tentative psychological story of the origins of care. But what does neuroscience tell us about this? In her 2011 book ‘Braintrust’ (sub-titled ‘What neuroscience tells us about morality’), Patricia Smith Churchland highlights research in behavioural neurobiology into the very different behaviour of two very similar creatures: prairie voles pair-bond for life whereas montane voles are solitary. (The most prominent researchers on this topic are Thomas Insel (1992-), Sue Carter (1993-), Zuoxin Wang (1996-) and Larry Young (1999-).)

One physical difference is in two closely-located parts of the brain, the ventral pallidum and the nucleus accumbens.

Compared with montane voles, prairie voles have much higher densities of neuromodulator receptors for Oxytocin and Vasopressin in these areas.

Larry Young

The Prairie vole brain. NAcc: Nucleus Accumbens, VP: Ventral Pallidum, PFC: Pre-Frontal Cortex, OB: Olfactory Bulb

What does this ‘higher density of neuromodulator receptors’ mean? Well, neuromodulators are molecules that bind onto receptors on a neuron and modulate the firing of that neuron. A larger number of receptors for a particular neuromodulator increases the chance of the neuron firing in the presence of that neuromodulator. A higher concentration of the neuromodulator achieves the same result.

The most effective way of getting extra Oxytocin into the brain is via a nasal spray. Conversely, if an antagonist drug is sprayed instead, its molecules will lock onto the receptors but they are the ‘wrong keys’: they do not release the proteins within the neuron that modulate its firing. This effectively reduces the number of available receptors. Put very simply, by increasing or decreasing the effects of these neuromodulators, researchers have found they can make prairie voles behave more like montane voles and vice versa.

This is an extremely simplistic view; the qualifying details do not matter here. The point is that we can experimentally control behaviour associated with these neurotransmitters – which is?…

Oxytocin and Vasopressin are primarily associated with reproduction in mammals including arousal, contractions and lactation. The ‘cousins’ of Oxytocin and Vasopressin have performed equivalent functions in other creatures for hundreds of millions of years.

From this reproductive starting point, these neurotransmitters have evolved to control maternal care for offspring, pair-bonding and allo-parenting. Allo-parenting is care for young given by individuals other than the parents, typically the ‘aunties’ of orphans. There is no (magical) genetic mechanism for allo-parenting. It is just a result of seeing young close by that need care – from them being ‘up close and personal’.

And from human tests, it has been shown that they improve social cognition (at the expense of other learning) – the memory of faces, the recognition of fear and the establishment of empathy and trust.

This improved social cognition has led to interest from the autism community. Autism is sometimes thought of as lacking a ‘theory of mind’ but this is extreme. It is better characterized as having impaired social cognition. Tests with Oxytocin on autistic people show an improvement in eye gaze and the interpretation of emotions and a reduction in repetitive behaviour.

Oxytocin has also been connected with generosity. In the ‘Ultimatum game’ psychological test, the subject proposes how to split a sum of money between themselves and another person. The other person decides whether to accept the deal or to punish an unfair offer, in which case neither party gets anything; deals are generally accepted when the subject offers more than 30% of the stake. Oxytocin nasal sprays increase the proportion offered.
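The payoff structure of the game can be sketched as follows. This is a hypothetical illustration; the 30% threshold is just the rough figure quoted above, not a precise result:

```python
def ultimatum_round(stake, offer_fraction, threshold=0.30):
    """One round of the Ultimatum game: the responder accepts offers
    above the threshold, otherwise punishes the proposer by leaving
    both parties with nothing. Returns (proposer, responder) payoffs."""
    if offer_fraction > threshold:
        offer = stake * offer_fraction
        return stake - offer, offer
    return 0.0, 0.0  # unfair offer rejected: neither party gets anything

print(ultimatum_round(100, 0.40))  # fair offer accepted
print(ultimatum_round(100, 0.10))  # unfair offer punished
```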

This all sounds fantastic. We just need to spray some Oxytocin up our nostrils every morning and we will all become more caring and considerate of others.

Oxytocin molecular structure

Paul Zak, an early researcher into the trust-related effects of Oxytocin, has zealously promoted the idea of the ‘Moral Molecule’ (as his book is called). But it has also been criticized as the ‘Hype Molecule’, particularly as more research was done which revealed some negative aspects of the neurotransmitter and its cousin.

Vasopressin has a conciliatory ‘tend-and-befriend’ effect on females, but in males it reduces ‘fight or flight’ anxiety and makes them more aggressive in defence of their mate and young.

This may be the origin of behaviour that has been described as ethnocentric (even as ‘xenophobic’). For example, an early experiment based around Dutch, German and Muslim names found that German and Muslim names were less positively received when the Dutch subjects had been given Oxytocin.

Since we are considering morality as a balancing act, Oxytocin could be characterized as tilting the balance from ‘me’ more towards ‘you’ but also from ‘them’ towards ‘us’.

This and many practical matters means that we won’t be having our daily nasal sprays just yet.


Piff et al: 'Higher social class predicts increased unethical behavior'

Another BMW driver fails to stop for a pedestrian.

So far, I have characterized morality as balancing the wants of oneself with those of others and looked at how Oxytocin tips the balance towards others and can increase generosity.

Paul Piff (Berkeley) has devised various experiments to judge the generosity of the affluent. One test considered car type as an indicator of wealth and monitored which cars stopped at pedestrian crossings. High status cars were less likely to stop than other makes.

Another indicator of generosity is charitable giving. Various studies show that the most generous regions of a country are not the most affluent. In the USA, Utah and the Bible Belt stand out for higher generosity. Research indicates that it is not religious beliefs that are important here but regular attendance at services. These services involve moral sermons, donations and meeting familiar people.

Charitable giving in USA

Other factors that improve charitable giving include

  • being with a partner (‘pair-bonded’),
  • living in a rural community and
  • being less affluent (as suggested by Piff’s research).

There is a common theme here: being ‘up close and personal’ in meaningful relationships with others:

  • There is anonymity in an urban environment.
  • We are insulated from others in a car.

I have characterized morality as balancing the wants of oneself with those of others. Through psychology, we can understand why our preference for the ‘up close and personal’ has evolved. But that tells us nothing about how we should behave, and in itself it has nothing to do with neuroscience. The neuroscience of Oxytocin and Vasopressin, however, is one avenue towards a physical understanding of care: how it constrains us, and how we might be able to control it in the future.

Reason vs Emotional Intuition

So, we emotionally feel a preference for the ‘up close and personal’ but our rational inclination is that this should not be so. Just as there is a balance between self and others, there is a balance between emotion and reason – the two halves of psychology’s ‘dual process theory’. As described by Daniel Kahneman in ‘Thinking, Fast and Slow’, ‘System 1’ is the fast, unconscious, emotional lower level and ‘System 2’ is the slower, conscious, reasoning higher level.

This split between rational and emotional decision-making fits well with Joshua Greene’s experiments, in which his subjects answered trolleyology questions whilst in an fMRI scanner. Decisions made quickly were correlated with activity in the Amygdala and the Ventro-Medial Pre-Frontal Cortex (VM-PFC), whereas questions that caused longer deliberation were correlated with activity in the Dorso-Lateral Pre-Frontal Cortex (DL-PFC). Both the Amygdala and the VM-PFC are associated with social decision-making and the regulation of emotion. In contrast, the DL-PFC is associated with ‘executive functions’, planning and abstract reasoning. We can say that the former regions are associated with ‘now’ and the latter region is associated with ‘later’.

The classic (Benthamite) form of Utilitarianism is ‘Act Utilitarianism’ in which an individual is supposed to determine the act which leads to the ‘the greatest happiness of the greatest number’. Such a determination is of course impossible but even practical deliberation to produce a reasonably good guess can often be too slow.

This has led to the ‘Rule Utilitarian’ approach of ‘pre-calculating’ the best responses to typical situations to form rules. It is then just a case of selecting the most applicable rule in a moral situation and applying it. That allows quite fast responses, but they often prove poor in retrospect.

Now, R. M. Hare proposed a ‘Two-Level Utilitarianism’ which is a synthesis of Act- and Rule-Utilitarianism: apply the ‘intuitive’ rules, but in the infrequent cases where confidence in the appropriate rule is reduced (such as when more than one rule seems to apply and those rules conflict), move on to ‘critical’ deliberation of the best action.

This looks a lot like ‘dual process theory’!
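Hare’s two-level scheme can be sketched minimally in code. The rule representation and all the example rules below are my own illustrations, not Hare’s:

```python
# Hedged sketch: try the intuitive rules first; when no single rule
# applies cleanly, fall back to slow 'critical' deliberation.
def decide(situation, rules, critical_deliberation):
    applicable = [action for condition, action in rules if condition(situation)]
    if len(applicable) == 1:
        return applicable[0]                  # fast, intuitive level
    return critical_deliberation(situation)   # slow, critical level

# Illustrative rules and a stand-in for critical deliberation:
rules = [
    (lambda s: "promise" in s, "keep the promise"),
    (lambda s: "harm" in s, "prevent the harm"),
]
deliberate = lambda s: "weigh consequences of: " + s

print(decide("a promise was made", rules, deliberate))    # one rule fires
print(decide("a promise risks harm", rules, deliberate))  # rules conflict
```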

The Predictive Mind

We have a reasonable understanding of what goes on in the brain at the very low level of neurons, and we know what it is like at a very high level in the brain because we experience it from the inside every single day. But how we get from the small scale to the large scale is a rather difficult proposition!

‘Dual process theory’ is a crude but useful model upon which we can build psychological explanations, but we now have a very promising theory of the brain that I have frequently mentioned elsewhere. Its most complete formulation is Karl Friston’s strangely-named ‘Variational Free Energy’ theory, from as recently as 2005, but its pedigree can be traced back through Richard Gregory and William James to Hermann von Helmholtz in 1866, before the foundation of psychology as a discipline.

For the context here, I will not go over the details of this theory, but its most basic claim is that the brain behaves as a ‘hierarchy of predictors’ – my preferred term for the theory that Jacob Hohwy calls ‘the Predictive Mind’, Andy Clark calls ‘predictive processing’ and yet others call ‘the Bayesian Brain’. All levels concurrently try to predict what is happening at the level below, and each passes prediction errors upwards, qualified by its confidence in its predictions. We then view the brain as multi-level (more than 2), with lower levels dealing with the fast ‘small scale’, moving upwards to longer-term ‘larger scale’ levels. Psychology’s conceptual Dual Process theory becomes a subset of neuroscience’s physically-based Predictive Mind theory.
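As a caricature (emphatically not Friston’s mathematics), each level can be modelled as an estimator that corrects itself by the prediction error passed up from the level below:

```python
class Level:
    """One level in a toy 'hierarchy of predictors'."""
    def __init__(self, estimate, learning_rate):
        self.estimate = estimate  # this level's prediction of the level below
        self.lr = learning_rate   # fast (high) at low levels, slow at high ones

    def update(self, observed):
        error = observed - self.estimate   # prediction error from below
        self.estimate += self.lr * error   # correct towards the evidence
        return self.estimate               # passed upwards in turn

def perceive(levels, sensory_input):
    """Propagate one observation up the hierarchy, lowest level first."""
    signal = sensory_input
    for level in levels:
        signal = level.update(signal)
    return [level.estimate for level in levels]

# A fast low level and a slow high level processing one observation:
print(perceive([Level(0.0, 0.5), Level(0.0, 0.1)], 1.0))
```

The fast low level jumps half-way to the evidence while the slow high level barely moves: the ‘small scale’ tracks the moment, the ‘larger scale’ changes only gradually.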

Felleman and Van Essen’s famous ‘wiring diagram’, showing the hierarchical organization from low levels (bottom) up to high levels (top)

This can inspire us to imagine a ‘multi-level Utilitarian’ moral theory which is superior to Hare’s ‘2-level Utilitarianism’. Noting that the ‘hierarchy of predictors’ operates:

  • continuously,
  • concurrently, and
  • dynamically

…we can produce a better moral theory…

Moral theories generally consider how to make a single decision based upon a particular moral situation, without revisiting it later.

We deal with the easy moral issues quickly, going back to the more complex ones that require more deliberation. This better consideration (prediction) of the consequences of possible actions may also be influenced by a change in circumstances since the problem was last considered. And that change may itself be a result of our own (lower-level) earlier actions.

Eventually, the window of possible action upon a moral problem will pass and we can return to the ‘larger-scale’ problems which still linger. (When we have solved the injustices of inequality, poverty and violence in the Middle East, and have no more immediate problems to deliberate over, we can take a holiday.)

It automatically and dynamically determines the appropriate level of consideration for every problem we encounter.
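A toy sketch of this dynamic picture (all names and numbers below are hypothetical): problems sit in a pool, quick ones are settled immediately, and harder ones are revisited on each pass until their window for action closes.

```python
def deliberate(problems, ticks):
    """problems: dicts with 'name', 'effort' (passes of thought needed)
    and 'window' (ticks before action is no longer possible).
    Returns the names of problems decided, in the order decided."""
    decided = []
    for t in range(ticks):
        for p in problems:
            if p["effort"] > 0 and t < p["window"]:
                p["effort"] -= 1              # one more pass of deliberation
                if p["effort"] == 0:
                    decided.append(p["name"])
    return decided

pool = [
    {"name": "hard dilemma", "effort": 3, "window": 10},
    {"name": "easy choice", "effort": 1, "window": 10},
    {"name": "missed chance", "effort": 5, "window": 2},
]
print(deliberate(pool, 4))  # → ['easy choice', 'hard dilemma']
```

The easy problem is resolved on the first pass, the hard one only after repeated visits, and the third’s window of possible action passes before deliberation completes – exactly the behaviour described above.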

I think this is a sensible moral theory. It is an intelligent theory. This is true almost by definition, because this Predictive Mind mechanism is how evolution has produced intelligence – an embodied general intelligence acting in a changing environment.



I somewhat provocatively point out an irony that:

  • A moral philosopher sits in his armchair, proudly proposing a moral theory that is detached from the world of ‘is’.
  • Inside his head is a bunch of neurons wired together in a particular way to produce a particular way of thinking.
  • But his moral theory is an inferior description of the way his brain thinks!

So we end up with a cognitive theory in which moral problem solving isn’t really any different from any other type of problem solving! This is an Ethical Naturalist point of view.

From Dualism to Physicalism

For ordinary people of our grandparents’ generation, the dominant philosophical belief was of the separation of mind and matter. We had free will – the mind was free to make choices, unconstrained by the physical world.

In contrast, our grandchildren’s generation will have grown up in an environment where the idea of the brain determining behaviour within what is essentially a deterministic world is commonplace. The concept of ‘free will’ is unlikely to survive this transition of worldviews intact and unmodified.

Now, there is no single fact of neuroscience that makes any Dualist suddenly switch over to being a Physicalist. People don’t change worldviews just like that. But the accumulation of coherent neuroscientific information over many years does cause a shift. As Greene says

“Neuroscience makes it even harder to be a dualist”

So, though we can always invoke the is/ought distinction to ensure that neuroscience and morality are disconnected, its influence on our metaphysics indirectly affects our concepts of morality.

With a Dualist worldview, we can say that if it is wrong for person A to do something in some precise situation, then it is also wrong for person B to do that same thing in that same precise situation. A and B can be substituted. It is the act that is moral or immoral.

However, with a Physicalist worldview, we have to accept that the physical state of an agent’s brain plays a part.

Psychology Fun!

Trajectory of the tamping iron through Phineas Gage’s head

Consider the two classic case studies of Phineas Gage and Charles Whitman:

  • Whilst he was working on the railroads in 1848, an explosion blew an iron rod straight through Phineas Gage’s head, in under a cheekbone and out through the top of his skull, leaving a gaping hole in his brain. He miraculously survived, but his personality changed from that of a responsible foreman beforehand to that of an irreverent, drunken brawler.
  • Charles Whitman personally fought his “unusual and irrational thoughts” and had sought help from doctors, to no avail. Eventually he could hold them back no more, whereupon he went on a killing spree, killing 16 people. Beforehand, he had written: “After my death I wish that an autopsy would be performed on me to see if there is any physical disorder.” The autopsy revealed a brain tumour.

It is not surprising to us that substantial changes to the physical brain cause it to behave substantially differently.

We can no longer say that it is equally blameworthy for persons A and B to do something in exactly the same situation because their brains are different.

Were I to find myself standing on the observation deck of the University of Texas tower with a rifle in my hand, I would not start shooting people at random as Whitman did. A major reason for this is that I don’t have the brain tumour he had. But if I were to have a brain like Whitman’s, then I would behave as he did! In shifting towards a physicalist position, we must move from thinking of acts being good or bad towards thinking of actors (the brains thereof) being good or bad. We move from Deontology or Consequentialism towards Virtue Ethics.

There is the concept of ‘flourishing’ within Virtue Ethics. We try to ‘grow’ people so that they are habitually good and fulfil their potential. To do this, we must design our environment so that they ‘grow’ well.

And when we talk of ‘bad brains’, we don’t blame Whitman for his behaviour. In fact, we feel sorry for him. We might actively strive to avoid such brains (by providing an environment in which doctors take notice, or take brain scans, when people complain to them about uncontrollable urges, for example). ‘Blame’ and ‘retribution’ no longer make sense. As others have said:

  • ‘with determinism there is no blame and, with no blame, there should be no retribution and punishment’ (Mike Gazzaniga)
  • ‘Blameworthiness should be removed from the legal argot’ (David Eagleman)
  • ‘We foresee, and recommend, a shift away from punishment aimed at retribution in favour of a more progressive, consequentialist approach to the criminal law’ (Joshua Greene and Jonathan Cohen)


I have defined the essence of morality as the balancing of the wants of oneself with those of others:

  • As well as involving reason, this means getting into someone else’s mind (rather than just getting into their shoes). On a scale of ‘iterated knowings’, we need at least a ‘theory of mind’. I have set out a theory of the moral development of a person in which there is progression up the scale of iterated knowings, up to having the desire and ability to understand another’s entire epistemological framework – something relatively few people achieve.
  • Whilst we can act morally based on the selfish maintenance of reputation and a rather mechanical ability to communicate, it is better if we also have ‘social cognition’ (an ability to see how another feels and read what they want, more directly than through verbal communication) and actually care about the other.
  • The origins of both social cognition and care lie in our basic cognitive need to distinguish between self and non-self. In doing this, we unconsciously relate the feelings of others back onto ourselves when we see them, allowing us to empathize with them.

We can make a link from the actions of the neurotransmitters Oxytocin & Vasopressin up through social cognition and empathy to the shifting of the balance towards others in being more considerate and generous to others. A common factor in this behaviour is proximity – an unconscious emotional preference for those we know and see around us. This provides us with ‘some good reason’ why biasing towards the ‘up close and personal’ feels intuitively right even though we logically think there should be no bias.

The moral philosopher R. M. Hare proposes a sensible balancing of intuition and logic. But this ‘dual process’ psychology type of moral theory is just an inferior form of the more general neuroscientific theory of the ‘predictive mind’, advocated by Karl Friston, Jacob Hohwy, Andy Clark and others. The latter inspires an improved moral theory that:

  • Generalizes to advocating more detailed slower deliberation for more complex moral dilemmas, rather than just offering a two-stop shop.
  • Relates moral thinking to the general intelligent thinking of an agent embodied within an environment. This is an ethical naturalist position: moral problem solving is not distinct from other types of problem solving.
  • Improves the theory in being dynamic. Moral decisions are not ‘fire and forget’. We should continue to deliberate on our more complex moral problems after we have made a decision and moved on to subsequent moral situations, particularly as circumstances change or we see the results of our actions.

So ‘is’ might inspire ‘ought’ but it still does not imply it. Not directly, anyway.

Neuroscientific knowledge pushes society further away from dualism, towards physicalism in which the moral actor is embedded within its own environment and hence physically determined in the same way. Our moral framework must then shift towards a Virtue Ethics position of trying to cultivate better moral actors rather than the Deontological or Consequentialist focus on correct moral acts.

This forces us to re-evaluate blame and praise, shifting us away from retribution. We must actively cultivate a society in which people can morally ‘flourish’.

Our new-found knowledge in neuroscience forces us to recognize that our neural construction constrains us, but it also increasingly allows us to overcome those constraints – at our peril.
