Moral Equalization

This is the third part of the ‘From Neural Is to Moral Ought’ series of talks. Here, I relate the engineering principle of ‘equalization’ to morality, for reasons which will hopefully become apparent in later parts.

11: Some Good Reason

The inspiration for this ‘From Neural ‘Is’ to Moral ‘Ought’’ talk is Joshua Greene’s short paper of the same name. In this talk, I want to get past the popular neuromania to discover how our ongoing discoveries about the brain might affect how we should behave.

Joshua Greene’s paper introduces a moral dilemma due to Peter Unger that considers two scenarios. The first:

“You are driving along a country road when you hear a plea for help coming from some roadside bushes. You pull over and encounter a man whose legs are covered with blood. The man explains that he has had an accident while hiking and asks you to take him to a nearby hospital. Your initial inclination is to help this man, who will probably lose his leg if he does not get to the hospital soon. However, if you give this man a lift, his blood will ruin the leather upholstery of your car. Is it appropriate for you to leave this man by the side of the road in order to preserve your leather upholstery?”

And the second:

“You are at home one day when the mail arrives. You receive a letter from a reputable international aid organization. The letter asks you to make a donation of two hundred dollars to their organization. The letter explains that a two-hundred-dollar donation will allow this organization to provide needed medical attention to some poor people in another part of the world. Is it appropriate for you to not make a donation to this organization in order to save money?”

The two scenarios have many similarities but we make very different judgements about them:

“In both cases, one has the option to give someone much needed medical attention at a relatively modest financial cost. And yet, the person who fails to help in the first case is a moral monster, whereas the person who fails to help in the second case is morally unexceptional. Why is there this difference? About thirty years ago, the utilitarian philosopher Peter Singer argued that there is no real moral difference between cases such as these two, and that we in the affluent world ought to be giving far more than we do to help the world’s most unfortunate people. (Singer currently gives about 20% of his annual income to charity.) Many people, when confronted with this issue, assume or insist that there must be ‘some good reason’ for why it is alright to ignore the severe needs of unfortunate people in far off countries, but deeply wrong to ignore the needs of someone like the unfortunate hiker in the first story.”

Thought experiments are there to question our intuitions; to make a point. The point of Unger’s example is the proximity of the situation. We may feel there is ‘some good reason’ why we are morally required to act locally but not globally, but that feeling does not make it right, and we have difficulty rationalizing it into a coherent moral framework. Thought experiments like these are created to try to uncover the (sub-conscious) intuitions that may be guiding our actions, in order to ask: are they an unwitting source of our moral justifications?

Greene then makes his point:

“Maybe there is ‘some good reason’ for this pair of attitudes, but the evolutionary account given above suggests otherwise: we ignore the plight of the world’s poorest people not because we implicitly appreciate the nuanced structure of moral obligation, but because, the way our brains are wired up, needy people who are ‘up close and personal’ push our emotional buttons, whereas those who are out of sight languish out of mind. This is just a hypothesis. I do not wish to pretend that this case is closed or, more generally, that science has all the moral answers. Nor do I believe that normative ethics is on its way to becoming a branch of the natural sciences, with the ‘is’ of science and the ‘ought’ of morality gradually melding together. Instead, I think that we can respect the distinction between how things are and how things ought to be while acknowledging, as the preceding discussion illustrates, that scientific facts have the potential to influence our moral thinking in a deep way.”

12: Trolleyology

One of the most notorious moral thought experiments is the ‘Trolley Problem’. Much is said of this elsewhere but the briefest of summaries is:

  • The first part, originated by Philippa Foot: people must choose whether to divert a runaway ‘trolley car’ (streetcar, tram) down a side track, killing the 1 person on that line rather than the 5 on the main line. People generally say yes.
  • The second part, originated by Judith Jarvis Thomson, is a variation of this. Instead of the one man being on the other track, he is on a bridge over the main line. People must choose whether or not to push the (fat) man off the bridge in order to stop the trolley car and save the other 5. People generally say no.

The first part is a seemingly straightforward Utilitarian maximization of happiness (or minimization of pain). The second part brings in ownership of action. This thought experiment has spawned so many variations that the study of it even has a name – ‘trolleyology’! Support for taking action to change the course of events varies from person to person, but not significantly from one culture to another. And people will provide different responses depending on:

  • who the 5 people on the mainline are,
  • who the 1 person on the side track or on the bridge is, and
  • the mechanism by which the 1 person on the bridge stops the trolley (e.g. pushing the person versus opening a trapdoor).

It is found that willingness to take action increases if:

  • the 5 people on the mainline are young, attractive, of excellent character or prospects (e.g. working on a cure for cancer), or a friend or relative,
  • the 1 person on the side track or the 1 person on the bridge is old, unattractive, of reprehensible character (such as the saboteur of the brakes of the trolley car), of limited prospects (e.g. with a terminal illness), or a stranger, and
  • the trapdoor is used rather than pushing the person directly (it is less ‘up close and personal’).

(One example of a paper quantifying some of these variations is here.)

And there are some interesting combinations. For example, people are more likely to push 1 brother to save 5 brothers than to sacrifice 1 stranger to save 5 strangers. And there are many other factors that play a role here, such as race and political leaning.

So it appears that:

  • we unconsciously value some people more than others, even if we consciously deny it, and
  • we feel more justified in our actions when those that we value highly are involved.

But just because we have intuitions that we should behave in a particular way does not mean that we really should. (After all, ‘you can’t get from an ‘is’ to an ‘ought’’.)

Incidentally, there are online tests where you can try your own intuitions against variations of these scenarios.

The danger is that we will rationalize our intuitions; we will use reason to justify that we should act in a particular way because we do act in a particular way. If we really think that we should treat everyone equally, then our intuitions are a problem.

Additionally, the trolley problem reveals other moral problems. After all, if morality were just about maximizing ‘the greatest happiness for the greatest number’ then, instead of pushing the fat man, why don’t you jump onto the tracks yourself to save the other five? Our intuitions suggest that ownership of action (culpability) should also play a role. And maybe (but not necessarily) it should.

13: Equalization

If we think we ought to treat everyone equally then we have a moral problem here. There is something within us that is preventing us from doing so. Those intuitions are undesirable and what we ought to do is ‘equalize’ them.

Equalization is an engineering solution to a physical problem. If we want to get rid of some non-ideality then we can try to equalize it.

Linn Sondek turntable

A classic example of equalization is a hi-fi system (very 20th Century!), in which there are many non-idealities. Just some of these are:

  • In order to squeeze as long a recording as possible onto the vinyl (to make records ‘long playing’), low-pitched sounds are deliberately reduced in volume when making the master, before pressing the record.
  • The music is distorted by the power amplifier and loudspeakers because the components used in building the amplifier (valves/transistors) do not have linear characteristics.
  • Rather than listening to it in an acoustically designed auditorium, the music may be played in a room with furniture placed in locations based on various practical considerations, without regard to its acoustic effect!

RIAA equalization curve (blue): amplify low frequencies; attenuate high frequencies

To make the turntable play back something much more like the original recording, we need to equalize the signal from the phono cartridge. Low-frequency sounds need to be amplified more than high-frequency ones. The distortion deliberately introduced before pressing, on every record, needs to be compensated for by every turntable. The agreed industry standard, the RIAA standard, defines precisely how this equalization should be done.
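
To give a flavour of how ‘precisely’ that is, here is a minimal Python sketch (mine, not taken from any standards document) of the commonly published RIAA playback curve, assuming the standard time constants of 3180 µs, 318 µs and 75 µs, with the gain normalized to 0 dB at 1 kHz:

    import math

    # RIAA playback (de-emphasis) time constants, in seconds.
    T1, T2, T3 = 3180e-6, 318e-6, 75e-6

    def riaa_playback_gain_db(freq_hz):
        """Playback gain in dB at freq_hz, normalized to 0 dB at 1 kHz."""
        def magnitude(f):
            w = 2 * math.pi * f
            # One zero (T2) and two poles (T1, T3): boost the bass, cut the treble.
            return math.hypot(1, w * T2) / (math.hypot(1, w * T1) * math.hypot(1, w * T3))
        return 20 * math.log10(magnitude(freq_hz) / magnitude(1000.0))

    # Roughly +19 dB of boost at 20 Hz and roughly -20 dB of cut at 20 kHz.
    for f in (20, 100, 1000, 10000, 20000):
        print(f"{f:>6} Hz: {riaa_playback_gain_db(f):+6.1f} dB")

The exact numbers matter less than the shape: whatever the mastering stage deliberately attenuated, the playback stage boosts by the same amount, and vice versa.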

Early hi-fis had ‘bass’ and ‘treble’ controls to adjust the relative volumes of low and high frequencies (with respect to middle frequencies) to help equalize other distortions. Later hi-fis often had more than these 2 controls – the ‘graphic equaliser’ had a whole series of slider controls (the only thing graphic about it was that the slider settings visually marked out the frequency response as on a graph).

A 15-band stereo graphic equalizer (image: Wikipedia)
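
In the same spirit, a graphic equaliser boils down to one gain per frequency band. A small sketch with made-up band centres and slider settings:

    # Hypothetical 5-band example; real sliders are calibrated in dB.
    bands_hz   = [60, 250, 1000, 4000, 16000]
    sliders_db = [+4, +1, 0, -2, -5]          # the sliders trace out the 'graph'

    def apply_band_gains(levels, gains_db):
        """Scale each band's (linear) level by its slider gain."""
        return [lvl * 10 ** (g / 20) for lvl, g in zip(levels, gains_db)]

    print(apply_band_gains([1.0] * len(bands_hz), sliders_db))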

Likewise, there are non-idealities in the members of society that we cannot change but that we think are wrong: the innate, evolved subconscious behaviour of people. We have seen how people intuitively value some people differently from others, for various reasons. If we lined up all the people in the world and quantified how much we valued them, we would end up with a non-linear graph. Illustrating with a rather smaller sample set:

Valuing different people, v(p)

If we really believe that all people should be treated equally, then we should compensate for our subconscious biases:

Equalization function for different people, e(p)

When we apply this equalization, the overall effect is what we think it ought to be:

Equalized overall effect of valuing people, v(p).e(p)
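
As a toy illustration of that arithmetic (the names and numbers below are invented for this sketch, not taken from the figures), the equalization function is just the reciprocal of the bias: if we value person p at v(p), applying e(p) = c / v(p) makes the corrected valuation v(p).e(p) come out at the same constant c for everyone:

    # Toy sketch of moral equalization; the valuations are hypothetical.
    # v[p]: how much we (subconsciously) value person p.
    v = {
        "relative":  1.8,   # over-valued: 'up close and personal'
        "friend":    1.4,
        "stranger":  1.0,
        "foreigner": 0.6,   # under-valued: out of sight, out of mind
    }

    c = 1.0                                   # the equal treatment we are aiming for
    e = {p: c / vp for p, vp in v.items()}    # equalization function e(p)

    for p in v:
        corrected = v[p] * e[p]               # v(p).e(p) -- flat by construction
        print(f"{p:>9}: v={v[p]:.2f}  e={e[p]:.2f}  v.e={corrected:.2f}")

The hard part, of course, is not the multiplication but estimating v(p) at all, which is where the societal rules mentioned below come in.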

Note that this equalization will have to be done consciously. This utilitarian-like quantification is, of course, extremely difficult (to say the least). In practice, we have various rules in society to help us (rules which may act on us unconsciously):

  • Business contracts cannot be awarded to companies in which friends and family have a significant interest.
  • Exam papers do not have the students’ names on them.

Hence, in order to try to overcome the undesirable, unmodifiable aspects of the members of society:

  • We need to design society
  • We need to engineer society

This is moral equalization.

14: Conclusion

The previous section (‘Prospectarian’) took the simple moral notion of Utilitarianism, in which the happiness of different people is added up linearly, and added in non-linear effects to make the overall system (society) work better.

Here, we have gone in the opposite direction. We find (from science) that there are irregularities (non-linearities) that we want to compensate for, to iron out, to make linear.

This equalization is an example of going from an ‘is’ to an ‘ought’:

  • ‘is’: people have subconscious biases, therefore
  • ‘ought’: we ought to compensate for them.

But this is a hypothetical imperative, conditional on a goal. The implicit goal here is that we want to treat people equally. It doesn’t really provide us with a ‘moral truth’. It doesn’t say what our goal ought to be but, given a goal, it tells us what we should do to achieve it. It is a practical measure.

And this is not going from neural ‘is’ to moral ‘ought’. It is going from psychological ‘is’ to moral ‘ought’. Things only become neural when we relate them to the underlying physical structure of the brain. That comes later on. But before that, there are some more foundations of a traditional philosophical nature that need to be laid. I finally turn to our inability to predict.

(I could have been substantially briefer here and just said that we need to compensate for our biases, rather than get into technical details of equalization, but the point of this technicality should become apparent later on.)

The next part: Rules, hierarchy and prediction.


Responses to Moral Equalization

  Wyrd Smythe says:

    I’m catching up on your blog, and perhaps you’ll address this later… I wonder if one difference between the hiker and the charity ask is that helping the hiker is a “one-off” situation. The charity ask opens the door to all other global needs. If Charity A, why not Charity B, C, D, etc.? (This is something I struggle with in my own donations to charity. How to pick where the money goes.)

    Sometimes — earthquakes, for example — there is a singular nature to the ask, and I suspect many of us dig a little deeper in those situations.

    • headbirths says:

      Yes. I might pass on the charity donation with the expectation that the charity’s beneficiaries have 100 million chances of help from other people donating, whereas the number of cars that might pass the hiker limits him to just a few chances. That difference seems to make the driver’s actions more significant.

      Also, Greene talks about ‘up close and personal’ but ‘personal’ is not the same as ‘proximate’. The beneficiary could be a stranger on the other side of the world but the situation could be the same. A fanciful example: a message in a bottle washed up on my shore, pleading for help, complete with GPS coordinates of the desert island. The sender, a complete stranger, only really has one chance. The ‘personal’ bit seems to be about how much, specifically, I can influence the situation – the relationship between a specific helper and a specific ‘helpee’.

      Since this posting, I’ve started (and haven’t yet finished) talking about Oxytocin.

      • Wyrd Smythe says:

        The message in a bottle example is clarifying. As you mention, ‘proximate’ is not the key. It’s the degree to which one can affect the situation. The flip side, I suppose, is the danger of one million people all deciding they don’t need to participate this time.

