This ‘Ethics 101’ introduction is the first part of the ‘From Neural Is to Moral Ought’ series of talks, which lays some of the foundations for subsequent parts:
- The Is-Ought problem,
- Utilitarianism as an example moral system, and some of the problems with this.
- Utilitarianism as an optimization problem.
The title for this talk comes from Joshua Greene’s paper – ‘From Neural Is to Moral Ought’ makes the connection between what is inside our heads and how we should behave. And there are others, most prominently Patricia Churchland and Sam Harris, who are also writing about this relationship.
In asking specifically about how neuroscientific knowledge affects our ethics, we cannot avoid the philosophical problem of how any knowledge has an effect on our ethics. I start by looking at this before presenting an ethical framework based on our understanding of how the brain works.
With increased knowledge of how our brains do what they do, we can wonder about consciousness and concern ourselves about living in a deterministic world devoid of free will. Here I largely ignore these topics (I have started dealing with them elsewhere in ‘Could Androids Dream of Electric Sheep?’ and ‘Free Will/Free Won’t’). But I finish the talk by looking briefly at some practical implications of applying our new-found knowledge on society.
2: Is and Ought
In its most abstract terms, that branch of philosophy called ethics is about the rights and wrongs of the actions of agents within a shared environment. In more everyday terms, it is (normally) about how people should behave.
Conventionally, there are four major sub-disciplines within ethics:
- Descriptive ethics: what do people think is right (and wrong)?
- Normative ethics: what is the right (and wrong) way for people to act?
- Meta-ethics: what does right (and wrong) mean?
- Applied ethics: how do we apply the above to specific problems?
A basic starting point within meta-ethics is the ‘is-ought problem’ – David Hume’s identification of the difficulty in getting from descriptive statements about the world to normative, moral statements of how people should behave. This severing of the connection between ‘is’ and ‘ought’ is called ‘Hume’s Guillotine’ and is commonly summarized as:
“You can’t derive an ‘ought’ from an ‘is’.”
Related to this is G. E. Moore’s ‘naturalistic fallacy’ – it is a fallacy that morality should follow nature. This is often used to counter evolution-infused claims. An extreme example: just because the female praying mantis devours her mate during intercourse, it doesn’t mean it’s OK for you to! (There is also its opposite, the ‘moralistic fallacy’ – it is a fallacy that nature should follow morality, such as trying to provide a moral explanation for sexual cannibalism.)
However, in everyday life, we reason from ‘is’ to ‘ought’ all the time. For example, we might make a statement:
“I have an exam tomorrow therefore I ought to study tonight.”
The ‘is’ statement (having an exam) and the ‘ought’ statement (studying tonight) are explicit. But there are various implied statements in this:
- goal/purpose/value: ‘I want to pass the exam’
- predictions: ‘studying tonight will improve my chances of passing the exam’
- contingent states: ‘I have not already bribed my way to getting the answers in advance’
We can reformulate the move from an ‘is’ to an ‘ought’ so that the implicit goal/purpose/value is made explicit, without any problem. But those goals/purposes/values are conditional. For example:
- If I value passing exams then I ought to study tonight.
This is then a (Kantian) ‘hypothetical imperative’ (as opposed to his more famous ‘categorical imperative’). Similarly, we can reformulate other ‘ought’ statements, such as the following, into hypothetical imperatives:
- A mobile phone ought to have a battery life of at least 16 hours.
- A good terrorist ought to have no problem in being able to detonate improvised explosive devices (IEDs) remotely using a couple of stolen mobile phones.
- The undercover agent ought to have supplied the terrorists with detonators that were defective.
But this is just playing around with the words to clarify things. The remaining essential problem is in the choice of those values/goals/purposes:
- Is it good to pass exams?
- Is it right for you to bribe someone to get the answers to the exam in advance?
- Is it right to steal a mobile phone?
- Should the terrorist be detonating the IED?
- Should the agent supply good equipment that will result in people getting killed, in order to build up trust with the terrorist organization?
This breaks down into two big issues:
- Where do we get our values from?
- How do we resolve the problem of when values conflict?
It is conflict that lies at the heart of moral problems:
- Conflict between agents.
- Within an agent, choosing between values: to go to the beach or to stay at home studying?
- Conflict that involves uncertainty: to do a moderate good or risk doing something that would be really good?
And the setting of values can be:
- Individually chosen: either principled (reasoned) or arbitrary,
- The individual can submit to a higher authority (Note: Arabic ‘Islam’ = ‘submission’), or
- The individual can be involved in making collective decisions with others e.g. democratically.
Ultimately, regardless of the case made and all its appeal to reason, Hume’s Guillotine can always be used to cut the connection from ‘is’ to ‘ought’. We can go to extremes in imagining a scenario that virtually everyone would say was bad, but still disconnect the ‘is’ from the ‘ought’: You may say I ought not trigger the serum release that will cause the agonizing death of millions but:
- Why is that a bad thing?
- Why should I do the good thing?
- After all, you can’t get from an ‘is’ to an ‘ought’!
This is an example of ‘Moral relativism’: we cannot condemn or criticize the behaviour of others because what people consider right and wrong is entirely shaped by the traditions, customs and practices of their culture (Note: the name ‘ethics’ is derived from the Greek word for custom/habit; Marvin Bower defined culture simply as ‘the way we do things around here’).
In contrast, ‘Moral realism’ asserts that absolute, universal moral truths exist, irrespective of circumstances. Realists who are ‘ethical naturalists’ assert that we can go from ‘is’ to ‘ought’ in more than the basic, goal-orientated way – that morality can be reduced to facts in the natural world, i.e. science. Other (‘non-naturalist’) realists maintain the distinction between ‘is’ and ‘ought’, with reason being the source of morality, irrespective of how the world is.
Going back to relativism, compare this making of a distinction between ‘is’ and ‘ought’ with the distinction I made in the previous talk, ‘The Science Delusion’, in which I tried to maintain a distinction:
- between the metaphysical and the methodological,
- i.e., between ontology and epistemology,
- or simply: between ‘is’ and ‘knowing’
Making the distinction may be useful, but it wasn’t as clear cut as that. And similarly, it is not clear-cut between ‘is’ and ‘ought’. Just as a devout scepticism leaves us knowing nothing, in ethics, the is-ought distinction leaves us ‘oughting’ nothing. We can and do go from ‘is’ to ‘ought’ in practice – in more than the indirect goal-oriented mechanisms described above.
By the way, following on from the previous talk, I would try to maintain the ‘is’/’knowing’ distinction in that we don’t go directly from ‘is’ to ‘ought’ but from ‘is’ to ‘knowing’ (from the metaphysical to the epistemological) and then from ‘knowing’ to ‘ought’ (from the epistemological to the ethical) – from the chicken to the egg in the picture below.
3: The Moral Landscape
As previously stated, ‘normative ethics’ is that sub-division of ethics that asks what is the right (and wrong) way for people to act? There are three major moral theories within normative ethics:
- Deontology: this emphasizes the adherence to rules in determining the goodness of an action.
- Consequentialism: this emphasizes the consequences of an action in determining the goodness of the action.
- Virtue Ethics: this emphasizes the goodness (good character) of the agent performing the action.
Consequentialism is often summarized by the saying:
“the ends justifies the means”.
For the dominant Consequentialist theory, Utilitarianism, that ‘ends’ is happiness. Jeremy Bentham, the founder of Utilitarianism famously talked of:
“the greatest happiness of the greatest number that is the measure of right and wrong”.
Bentham created a ‘Felicific Calculus’ as a conceptual way of quantifying this happiness. In somewhat modified form, we might express this as:
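In symbols, a plausible rendering of this modified calculus (a sketch reconstruction on my part, using the intensity I_i(t) together with the probability weighting P_i and time-discount D_t discussed further below) would be:

```latex
U \;=\; \sum_{i=1}^{P_{\mathrm{total}}} \int_{t_{\mathrm{now}}}^{\infty} D_t \, P_i(t) \, I_i(t) \,\mathrm{d}t
```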
U is the sum of the net happiness of all P_total people on the planet (7.24 billion and counting) resulting from a particular course of action, dependent on I_i(t), the intensity of the pleasure/pain occurring for person i (positive for pleasure; negative for pain) from now until eternity (notwithstanding that this ignores the additional people born during this time).
Let us say I am accosted by a beggar asking for some spare change. I could plot a graph of U versus money given. $1 might produce some upward movement of U_beggar while the negative movement of U_me might be negligible, offset by my smug feeling of benevolence. This results in a net increase in total U. By this measure, it would be a ‘good’ thing. $2 might increase this further. At $1000, the beggar has gone off and misspent his gains and told his friends, which has in turn contributed to a culture of dependency, and I am no longer feeling so smug. U has turned negative.
We could bring in another axis – of money I additionally give to a local charity. $5 along this axis might produce a higher total U than $5 on the other axis. We would end up with a 3-D ‘landscape’.
Since there are generally many many more than 2 options available, we actually have a ridiculously high-dimensional graph but for the sake of simplicity and our ability to visualize, I will stick with 2 options and a 3-D graph.
As does Sam Harris in his book ‘The Moral Landscape’ that essentially has the very same framework:
“Throughout this book I make reference to a hypothetical space that I call “the moral landscape” – a space of real and potential outcomes whose peaks correspond to the heights of potential well-being and whose valleys represent the deepest possible suffering. Different ways of thinking and behaving – different cultural practices, ethical codes, modes of government, etc. – will translate into movements across the landscape and therefore, into different degrees of human flourishing.”
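The beggar-and-charity landscape above can be sketched numerically. The shape of the utility surface below is entirely invented for illustration (diminishing returns at first, harm beyond a sweet spot), but it shows the landscape idea: we are searching a surface for high points.

```python
# Toy 2-option utility surface: x dollars to the beggar, y dollars to a charity.
# The functional form is made up purely for illustration.
def U(x, y):
    def contribution(m, sweet_spot, scale):
        # Rises to a peak at `sweet_spot`, then declines and eventually turns negative.
        return scale * (m / sweet_spot) * (2 - m / sweet_spot)
    # Charity dollars are weighted higher, as in the example in the text.
    return contribution(x, 5.0, 1.0) + contribution(y, 5.0, 1.5)

# Survey the landscape over a grid of whole-dollar options.
grid = [(x, y) for x in range(0, 11) for y in range(0, 11)]
best = max(grid, key=lambda p: U(*p))
print(best, U(*best))  # the peak of this toy landscape
```

With these invented shapes, $5 along the charity axis produces more utility than $5 along the beggar axis, and giving far too much sends U negative – matching the narrative above.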
Note that this is all very conceptual. It completely ignores our practical inability to predict consequences to ourselves in the short term, let alone to all others for all time. And it completely ignores what those functions I_i, P_i and D_t are – or rather, who determines them, such as:
- an ‘ideal observer’
- the state, or
- some aggregation of everyone
Utilitarians want the functions to measure happiness. Other consequentialists agree that it is the consequences that matter in morality but they want to optimize something different. For example State Consequentialists want to maximize the well-being of the state, not of individuals within the state.
The point I want to make here is that, regardless of whatever it is we’re wanting to measure, we have an optimization problem. We don’t necessarily need to find the maximum across the entire space, but we do want to find places higher than where we are – better places in the landscape. Note: I talk here of a ‘maximization problem’, that of maximizing utility, but the problem could equally be inverted into a minimization of pain (here, ‘pleasure’ is simply whatever is wanted – it could even be pain, if we were so inclined, since it is up to us to define the utility function).
Finding ‘better’ places in a landscape might seem easy.
This is true if the landscape we are dealing with is simple – analogous to an island with a single hill on it. A simple ‘always go uphill’ strategy will be effective. But in the real world, the moral landscape can be highly complex – more like the fjords of Norway – and we need better strategies.
In practice, finding significantly better places in highly-dimensional spaces is extremely difficult. We cannot see where is better – it is a bit like having the landscape covered in a thick fog; we cannot survey it with our eyes. The problem with following a simple tactic like ‘always go uphill’ is that we get trapped at a ‘local maximum’, with no way of finding significantly better places to be. Sometimes you have to go downhill to reach higher. We might want to go to the beach today, but studying will get us out of our local maximum to a higher peak in good time.
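This trap can be demonstrated on a toy landscape (the terrain and strategy parameters here are my own invention): plain hill-climbing from near a small hill stops at its summit, while restarting from scattered random points can find the basin of a much higher peak.

```python
import random

def utility(x, y):
    # A toy landscape: a small hill near the origin and a much higher,
    # distant peak, on an otherwise flat plain.
    local_hill = 1.0 - ((x - 1) ** 2 + (y - 1) ** 2)
    far_peak = 5.0 - ((x - 8) ** 2 + (y - 8) ** 2)
    return max(local_hill, far_peak, 0.0)

def hill_climb(x, y, step=0.5, iters=200):
    # 'Always go uphill': move to the best neighbouring point; stop when
    # no neighbour is higher (i.e. at a local maximum).
    for _ in range(iters):
        neighbours = [(x + dx, y + dy)
                      for dx in (-step, 0, step) for dy in (-step, 0, step)]
        best = max(neighbours, key=lambda p: utility(*p))
        if utility(*best) <= utility(x, y):
            break
        x, y = best
    return x, y

# Starting near the small hill, plain hill-climbing gets stuck on it...
x1, y1 = hill_climb(0.0, 0.0)
# ...whereas random restarts let some climbs begin in the higher peak's basin.
random.seed(0)
starts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(20)]
best_restart = max((hill_climb(*s) for s in starts), key=lambda p: utility(*p))
print(utility(x1, y1), utility(*best_restart))
```

Random restarts are only one escape strategy; the general point is that any serious search of a complex landscape needs some way of going downhill, or jumping, to reach higher ground.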
4: The Absurdities of Utilitarianism
The formulaic definition of utilitarianism as the maximization of U…
…is, of course, absurdly precise in that we cannot determine good values for the inputs into the formula and, not least, there is our general inability to predict consequent states. But in committing to something so definite, it lays us open to the generation of absurd counter-examples that allow us to dig deeper about what we really think is right.
Here I look at a few counter-examples…
A: Maximizing Total Utility
With maximizing total utility, we can presume that there is some feedback to stabilize the population somewhere between 1 and a huge runaway population. Too high and people are competing for the world’s resources; too low and there are not enough people to sustain vibrant cultures.
But what is the right level? Consider the following…
There is currently a world population of about 7.24 billion, and rising. Sadly, a fair number of these people are suffering from the effects of war, conflict and famine. But imagine that, at some point in the future, mankind suffers some human-induced apocalypse (be it nuclear, environmental or whatever) and the population plunges. By the year 2525, the population has recovered to a stable and prosperous 6 million – about the population and satisfaction levels of Denmark today (Denmark regularly comes out at or near the top of various world happiness surveys).
Which would actually be better – the situation now or that in the year 2525? The total utility now might be higher than the hypothetical scenario of 2525, just due to the sheer weight of numbers of the people alive today. But is that preferable? Maximizing the somewhat grandly-metaphysical ‘total conscious positive experience’ of the universe, continually adding additional lives of only barely positive net utility is what Derek Parfit has called ‘the repugnant conclusion’.
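The arithmetic of the repugnant conclusion is easy to make concrete (the happiness numbers below are purely illustrative):

```python
# Total utility = population x per-capita happiness (toy numbers).
def total_utility(population, per_capita_happiness):
    return population * per_capita_happiness

world_now = total_utility(7_240_000_000, 0.1)   # billions of barely-positive lives
world_2525 = total_utility(6_000_000, 9.0)      # a few million flourishing lives

# Sheer weight of numbers makes the huge, barely-happy world 'better' by
# total utility, despite its average happiness being 90x lower.
print(world_now, world_2525, world_now > world_2525)
```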
Maybe we should be maximizing average happiness rather than total happiness?
B: Maximizing Average Utility
Now consider that, in the year 2525, the leader of the mere 6 million global population announces, with no irony, that it is disgraceful that 50% of the population have below-average happiness! Thus, a program of (painless) euthanasia is introduced to remove the most miserable 1% of the population every year. Over the course of the following years, the average happiness per person is raised.
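The perversity is mechanical: removing the unhappiest members necessarily raises the average. A quick simulation (the happiness distribution is made up):

```python
import random

random.seed(1)
# Toy happiness scores for a population (arbitrary Gaussian distribution).
pop = [random.gauss(5.0, 2.0) for _ in range(10_000)]

def remove_most_miserable(pop, frac=0.01):
    # 'Euthanize' the least happy 1%. Because the removed scores are all
    # below the mean, the average of the survivors must rise.
    return sorted(pop)[int(len(pop) * frac):]

avg_before = sum(pop) / len(pop)
survivors = remove_most_miserable(pop)
avg_after = sum(survivors) / len(survivors)
print(avg_before, avg_after)  # the average rises, year after year
```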
C: Transferring Utility: The Utility Monster
Robert Nozick has criticized Utilitarianism through his concept of the ‘utility monster’. In this, the population stays the same but one individual accumulates happiness at the expense of others, even though the total (and average) utility increases. The utility monster is able to derive greater happiness from the resources available hence, with maximization, it makes sense to transfer utility to him. For example, imagine 10 people each with 1 ‘hedon’ of pleasure (a hedon is a fictional unit of currency for hedonists who subscribe to the felicific calculus). But one of these is the Utility Monster; for every 1 hedon taken from another, the Utility Monster gains 2 hedons. Thus, maximum utility is achieved when the Utility Monster has 19 hedons and the other 9 people have nothing.
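The transfer logic can be spelled out in a few lines (the function name is mine):

```python
def feed_the_monster(state, monster_idx, gain=2):
    # Each hedon taken from another person yields `gain` hedons for the
    # monster, so a total-utility maximizer transfers everything to it.
    state = list(state)
    for i in range(len(state)):
        if i != monster_idx:
            state[monster_idx] += state[i] * gain
            state[i] = 0
    return state

start = [1] * 10                          # ten people, one hedon each
end = feed_the_monster(start, monster_idx=9)
print(sum(start), sum(end), end)          # total rises from 10 to 19
```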
Whilst ‘the greatest happiness of the greatest number’ sounds egalitarian, this counter-example shows that it may not be. It doesn’t seem right – it seems too capitalist! It suggests that who owns the utility is relevant. In the utility equation,
there is an implied ‘+’ with the Σ operator. But it is not necessarily a simple scalar addition that we should be using. Imagine that 10 individuals A, B, C, D, E, F, G, H, J and U all start with one hedon (sum = 10; the ‘utility state’ is [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]). But by the end, U has all the utility (sum = 19; the ‘utility state’ is [0, 0, 0, 0, 0, 0, 0, 0, 0, 19]). Now clearly:
19 > 10
but it is not necessarily true that:
[0, 0, 0, 0, 0, 0, 0, 0, 0, 19] > [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]).
It might be that we could concoct a different sigma operator that was fairer in sharing utility (a minimum of all is one option – the utility of the group is gauged by the utility of the unhappiest member).
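Comparing the two operators on the two utility states shows how the choice of aggregation changes the verdict (‘maximin’ is my label for the minimum-of-all option):

```python
# Two candidate Σ operators over a utility state.
def total(state):
    return sum(state)    # Bentham-style scalar addition

def maximin(state):
    return min(state)    # judge the group by its unhappiest member

equal = [1] * 10
monster = [0] * 9 + [19]

# Scalar addition prefers the monster world; maximin prefers equality.
print(total(monster) > total(equal))      # 19 > 10
print(maximin(monster) > maximin(equal))  # 0 < 1, so False
```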
D: Owning Utility: The Hospital Donor
The classic problematic example of Utilitarianism concerns the unfortunate person who goes to the hospital for tests to see if he can provide an organ for transplanting to his ailing mother. He can! And what’s more, his other organs are good matches for two other patients within the hospital – but he needs those to stay alive. The utilitarian doctors perform the felicific calculus, seize him and sacrifice him for the benefit of the others!
This is similar to the Utility Monster problem in transferring utility but it operates in the opposite direction – trading the high utility of an individual for the greater good.
One problem here is that the moral calculus has only been applied in a narrow sense. The consequences extend far beyond the operating theatre. Once news of this event became public, there would then be a strong disincentive for people to make organ donations, or even to visit hospital. Perhaps utilitarian physician-politicians would then look to prisoners or a specially-bred underclass for their supply of spare body parts.
The problem here again is that there is no regard for ownership of the utility. It is just lumped together. This may be compatible with ‘State Consequentialism’ where the wellbeing of the state is paramount. Individuals are just components of a greater state with no intrinsic worth themselves. But for most moralities, there is the feeling that one person’s happiness (and in extremis, their life) is not anyone else’s to take away.
E: Owning Action: Jim, Pedro and the Indians
Bernard Williams provided the following counter-example as a criticism of Utilitarianism. Jim, a lone botanist exploring a war-torn South American country, finds himself in a junta-controlled town. Pedro, the local army captain, presents him with 20 insurgents and the following proposition:
“I am going to execute all these rebels to serve as an example to the others. But, if you kill one of them, I will grant mercy on the rest.”
Clearly in this case, the local moral calculus is easy. Earlier counter-examples have involved a trade-off between the utility of individuals, with winners and losers. But this is a straight choice between a bad outcome and a worse one.
So why is Jim uneasy about saving 19 insurgents? Because, of course, it is him that has to kill the unlucky rebel rather than Pedro. It makes him complicit. (A minor consideration: it reduces his personal utility in that he would ‘feel bad’ about this – but, in the moral calculus, saving 19 lives surely trumps someone ‘feeling bad’.) There is something beyond ownership of utility here. In morality, ownership of action also seems to be important.
But here is a trivial counter-counter-example: imagine a librarian who never imposed fines for books that were overdue or never returned. Someone has to dish out punishment if we are to climb down from our little local maxima and get up onto the moral uplands.
F: Owning Intention
Recall that Utilitarianism is a consequentialist moral theory, which means that whether an action is good or bad is determined solely by its consequences, i.e. by the future state of the universe (or future location within the moral landscape). Hence it does not depend on the intention of the agent in acting a particular way. It seems strange to:
- Judge the agent’s action as bad if the agent’s intention was good but the consequence was bad, and
- Judge the agent’s action as good if the agent’s intention was bad but the consequence was good.
So finally, here are two more counter-examples, from the ‘Socratic Society’ blog post.
Firstly, that of a good intention leading to a bad consequence:
You are walking home along an alleyway next to the art museum. You see a man in front of you struggling to put a heavy bag into his van. You decide to help him out. Together you easily lift the bag and it slides in. The man thanks you. Later at home, the newsreader on the television says: “A priceless artefact was stolen from the art museum today.”
And secondly of a bad intention leading to a good consequence:
A man decides to kill his wife. He gets a gun and shoots her in the chest. She is rushed to hospital. On a scan it is revealed that the bullet missed any vital organs, but a cancer is discovered in her lung. At this early stage it is easy to heal, and the treatment saves her life.
But note: whether the consequence is good or bad depends on who gets to choose the utility function. For example, the thief would presumably think that the good intention led to a good outcome, unlike his helper.
Regardless, these examples do highlight that it does seem to us that the intention of the agent should play some role in whether their action is good or not.
These counter-examples to Utilitarianism have revealed problems of aggregating individuals’ utility and problems of ownership. Aggregation is not just about pooling the utility of each individual: who owns the utility also matters. And the ownership of action and intention also seem to be important.
Elsewhere, the combination of intention and action has been used to explain our feeling of conscious will. That is the only mention in this introduction to ethics of anything like ‘free will’. (For more, see the post on ‘Intention, Action, Will’, which was part of the ‘Free Will / Free Won’t’ talk.)
To summarize, there’s more to overcome with any consequentialist moral system than:
- Determining what is to be maximized, and
- Determining how to use this maximization in practice.
Ownership, or agency, is important.
To be continued. Coming up: Prospectarianism