Others, Orders and Oughts

(This is the eighth part of the ‘From Neural Is to Moral Ought’ series.)

34: The Purpose of Moral Theories

The three main moral theories provide us with a mechanism for determining what we should do and how we should behave. They are normative, but they do not tell us why we should follow them:

  • For Utilitarianism: Why should we be happy? Why should we help others to be happy too?
  • For Deontology: Why should we do our duty? Why should we be rational and logical?
  • For Virtue Ethics: Why should we be good? Why should we be virtuous?

Why should we adopt any one of the three theories over and above the others? And with my synthesis of the three theories, why do I claim that it is better than any of the three in isolation? Why should we adopt any of these formalizations at all? Why should we not just have ‘anything goes’, where everyone does whatever they want?

We can say that we should try to be happy / help others / follow rules / be logical because we value happiness / others / rules / logic. But this does not help us, as we can simply ask in turn: why should we value happiness / others / rules / logic?

In all the moral theories presented, there is no explicit purpose for why we should behave in a particular way (there is no ‘teleology’):

What is the purpose of morality?

If there were an explicit purpose then we could create a moral system that served that purpose. Historically, that purpose has supposedly been defined by some Divine Being. That Divine Being commands us; belief in that Being makes those commands imperative. But how do we determine a purpose if we have no belief in such a being? All too often, that purpose has been determined by a substitute for the Divine – an all-too-human Supreme Leader. And in some cases the purpose is determined (separately or collectively) by the individuals themselves.

For that last case, in the normal understanding of morality, the moral agents are all similar – similarly human – and therefore have similar needs and will tend to have similar desires. That most individuals want well-being for themselves and for their kith and kin means that an ‘ultimate purpose’ effectively emerges from the masses. But although individuals may want to do something, it does not mean they should do it. For that, we would need to know:

What is the purpose of a human being?

Regardless of that, there will be conflict within society:

  • Not all individuals do want the same, leading to conflict.
  • And all wanting the same (well-being for themselves rather than others) will also lead to conflict.

Conflict can be resolved in various ways:

  • ‘3-partite’: With an impartial third party judging,
  • ‘2-partite’: Just by the conflicting agents themselves: possibly amicably, possibly violently, or
  • ‘1-partite’: Just by each agent independently.

Elaborating on that perhaps strange last option: if an agent is conditioned to make judgements on conflicts themselves, then conflicts can be internalized – deliberated on hypothetically by the individual before any conflict actually arises externally. (Like Dennett’s ‘Popperian creatures’, except that here it is the conflicts that ‘die’ in the head, never materializing.) This minimises recourse to the other two options, thereby:

  • Reducing violence within the group.
  • Alleviating the paralysis of requiring third parties to make decisions.

The ‘1-partite’ option – the option of first resort – can then be seen as the source of morality. Morality concerns agents that perform actions within a shared environment; those agents are autonomous (they can think and act independently of one another). Then:

The purpose of morality is to provide a mechanism of self-regulation of individuals’ wants.

Each agent does not have to be told what to do at every point in time.

An individual may be aiming towards some end purpose themselves but this is not of direct concern for morality. Without others, there is no morality.

To act morally means to think of others.

And specifically, the wants of others.

The question of

What is the purpose of morality?

is then shifted to that more fundamental issue:

What is the purpose of a human being?

Human beings still need to determine the purpose of being human themselves – somehow (individually, collectively or otherwise). But that is not necessarily the concern of morality. Morality is then only concerned with how individuals should balance the wants of others with the wants of the individual.

35: Orders of Intentionality

Consider the following progression:

  • First order: ‘I think x’.
  • Second order: ‘I think A thinks x’.
  • Third order: ‘I think A thinks B thinks x’.
  • Fourth order: ‘I think A thinks B thinks C thinks x’.
  • and so on

…in which the verb ‘think’ can be replaced variously with others such as ‘want’, ‘know’, ‘believe’ or ‘understand’.

To provide a concrete example, Robin Dunbar refers to Shakespeare’s ‘Othello’, in which we can consider the progression:

  1. Desdemona loves Cassio.
  2. Othello believes that Desdemona loves Cassio.
  3. Iago wants Othello to believe that Desdemona loves Cassio.

(For the source of this progression, Dunbar refers to Daniel Dennett’s ‘The Intentional Stance’ which refers back to James Cargile’s “A Note on ‘Iterated Knowings’”.)
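
To make the recursion in this progression concrete, here is a minimal Python sketch (my own, not from Dunbar, Dennett or Cargile) in which an order of intentionality is simply the depth of nested attitudes; all class and function names are invented for illustration:

from dataclasses import dataclass
from typing import Union

@dataclass
class Proposition:
    text: str  # a bare fact, e.g. "Desdemona loves Cassio" (order zero)

@dataclass
class Attitude:
    agent: str                               # who holds the attitude
    verb: str                                # 'thinks', 'wants', 'believes', ...
    content: Union["Attitude", Proposition]  # what the attitude is about

def order(x: Union[Attitude, Proposition]) -> int:
    # Order of intentionality = number of nested attitudes.
    return 0 if isinstance(x, Proposition) else 1 + order(x.content)

def render(x: Union[Attitude, Proposition]) -> str:
    # Unfold the nesting into an English-like sentence.
    if isinstance(x, Proposition):
        return x.text
    return f"{x.agent} {x.verb} that {render(x.content)}"

# The 'Othello' progression above, counting the bare fact as order zero:
fact = Proposition("Desdemona loves Cassio")
belief = Attitude("Othello", "believes", fact)  # order 1
desire = Attitude("Iago", "wants", belief)      # order 2
print(order(desire), "-", render(desire))
# prints: 2 - Iago wants that Othello believes that Desdemona loves Cassio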

Sandeep Gautam presents these orders of intentionality in a clear, coherent way. Abridging:

  • Zeroth order intentionality: Having knowledge but no ‘awareness’ of knowledge. There is representation of information, but no meta-awareness of that representation. Computers and machines are assumed to have this lack of intentionality, wherein they have ‘facts’ about the world, but no beliefs, desires etc. of their own.
  • First order intentionality: Awareness of knowledge that is distinct from mere knowledge. This constitutes a belief system – knowing that what you know may not match the actual state of the world. You know what you know and you know what you don’t know. A limited ‘You know as I know’ may be covered at this order, as one may be aware of other people as intentional agents, but only ones whose beliefs are congruent with one’s own! ‘You know something that may be different from what I know’ is not possible yet.
  • Second order intentionality: Awareness of a belief system that is distinct from the belief system itself. This constitutes a ‘Theory of Mind’. You know that someone else may know things differently both (i) from how they actually are and (ii) from how you think they are. There is an awareness that others have a mind or a belief system, and there is an ability to keep at least two different belief systems in mind – your own and that of another person. But there is still no awareness that they have a Theory of Mind too!
  • Third order intentionality: Awareness of a Theory of Mind that is distinct from the Theory of Mind itself. There is a communicative intent. There is a knowing that someone else may have different views regarding what you yourself believe, and thus you can communicate your internal beliefs to others so that there is common ground on which communication can proceed. This also provides grounds for lies and deceptions, in the sense that one can deliberately lead someone to believe what one does not oneself believe.
  • Fourth order intentionality: Awareness of a communicative act that is distinct from the communicative act itself: the capability to tell and understand a narrative. An understanding of ‘roles’. A limited awareness that others are also communicative agents, but not a full awareness that, like oneself, they are also acting a script or playing a role.
  • Fifth order intentionality: Awareness of roles and narratives that are distinct from the role or narrative itself. An organizing system of religion/myths by which one interprets stories. Awareness that others too have their own narratives and are performing their roles. Awareness that one’s role/stance/understanding of the world can be radically different from that of someone having the same experiences but using a different interpretation.

36: Morality and Reputation

Morality

It was said previously that morality concerns others:

To act morally means to think of others’ wants.

To be able to think of others is only possible at the second order of intentional complexity or higher:

‘I know that what you want may be different from what I want’.

With second-order intentionality there is a ‘theory of mind’ – the acknowledgement that others have intentions separate from, and hence possibly different from, one’s own. Below the second order there is no such recognition and so there cannot be any moral conflict. Others simply don’t count.

The Golden and Platinum Rules

Note that the Platinum Rule:

Do unto others as they would want to be done by

requires second-order intentionality whereas the Golden Rule:

Do unto others as you would want to be done by

is just a first-order rule. Others count but what others want doesn’t.

If you have at least a second order of intentionality, the Golden Rule is only an approximation of the Platinum Rule – one to fall back on when (1) you are unable to ascertain what others want and (2) you and they are sufficiently similar. (If it is possible to communicate with them, you could of course simply ask them what they want.)
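
As a throwaway illustration (a Python sketch of my own; the actions and preference scores are invented, and real wants are of course not so easily tabulated), the difference between the two rules is simply whose preference model selects the action:

# Golden Rule: choose an action using MY preferences (first order).
# Platinum Rule: choose using a model of THEIR preferences (second order).

def golden_rule(my_prefs, actions):
    # 'Do unto others as you would want to be done by.'
    return max(actions, key=lambda a: my_prefs.get(a, 0))

def platinum_rule(their_prefs, actions):
    # 'Do unto others as they would want to be done by.'
    return max(actions, key=lambda a: their_prefs.get(a, 0))

actions = ["offer tea", "offer coffee"]
my_prefs = {"offer tea": 2, "offer coffee": 1}
their_prefs = {"offer tea": 0, "offer coffee": 3}

print(golden_rule(my_prefs, actions))       # offer tea: what I would want
print(platinum_rule(their_prefs, actions))  # offer coffee: what they want

When the two preference models coincide – condition (2) above – the rules select the same action, which is why sufficient similarity makes the Golden Rule a serviceable approximation of the Platinum.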

Reputation

With second-order intentionality, the inferred wants of others could be used to direct action but it is not clear why they should direct action (let alone in which particular way).

But with third-order intentionality, the following becomes comprehensible:

‘I think that you think that I think x’.

and so the benefit of taking the inferred wants of others into account becomes understandable to an agent:

‘I can understand that someone else can direct action based on what they think I am thinking.’

Just as I modify my behaviour in response to the intentional behaviour of others (a second-order phenomenon), I am aware that others will perceive my intentional behaviour and modify their behaviour accordingly.

I understand:

  1. Actions by myself that are good for me and
  2. Actions by others that are good for them but bad for me.

I can then appreciate that actions by myself that are good for me may be bad for others and that they may well treat me adversely in the same way as I treat such agents adversely myself. In short, my reputation matters.

Morality is a second-order phenomenon; reputation is a third-order phenomenon.

Reconciling Actions

In order to determine action, an agent must reconcile competing proposals:

  • First order: proposals of the type ‘I think x therefore y’ must be reconciled with one another.
  • Second order: proposals of the type ‘I think you think x therefore y’ must be reconciled with one another and with ‘I think x therefore y’ type proposals.
  • Third order: proposals of the type ‘I think you think I want x therefore y’ must be reconciled with one another and with ‘I think you think x therefore y’ and ‘I think x therefore y’ type proposals.

The appropriate consideration of the inferred wants of others makes the action moral. This consideration is a skill that must be learnt. (A small sketch of such a balancing follows the list below.)

An individual must judge potential actions by balancing:

  • their own wants now against those of the future,
  • their own wants now against those of others now, and
  • their own wants now against those of others in the future.
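
Here is that minimal sketch (my own; it assumes a simple weighted-sum reconciliation that this post does not itself specify, and all names and numbers are hypothetical):

from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    my_want_now: float        # first order: 'I want x'
    others_want_now: float    # second order: 'I think you want x'
    reputation_effect: float  # third order: 'I think you think I ...'

def score(p: Proposal, w_others: float, w_reputation: float) -> float:
    # The learnt weights encode the moral 'balance': how much the inferred
    # wants of others and future reputational effects count against my own
    # immediate wants.
    return (p.my_want_now
            + w_others * p.others_want_now
            + w_reputation * p.reputation_effect)

proposals = [
    Proposal("take the last seat", 1.0, -1.0, -0.5),
    Proposal("offer the seat", -0.2, 1.0, 0.8),
]

for w in (0.0, 0.7):  # a purely selfish agent vs. a balancing agent
    best = max(proposals, key=lambda p: score(p, w_others=w, w_reputation=w))
    print(f"weights={w}: {best.action}")
# prints: weights=0.0: take the last seat
#         weights=0.7: offer the seat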

An awareness of what others think about oneself has an impact on one’s own behaviour, and enables the cultivation of reciprocity, goodwill and cooperation.

This is a genuine balancing act. To always act in one’s own self-interest would not be moral. But to always submit to others would be self-enslavement. Neither will be the most effective way of achieving one’s self-imposed human purpose within society. In particular cases you (or a group of which you are a member) will assert your will over others; in other cases you will submit. In both cases, this is to best serve your end purpose ultimately.

Problems arise when one group of agents is much more powerful than others. They are not dependent on others now and they do not expect to be so in the future. They do not require the agreement of others. But to disregard the wants of others – to give those wants no respect – makes their actions immoral.

However, even if we do try to balance our wants with those of others, the problems remain:

  • To what extent should an individual assert their wants over those of others?
  • To what extent should a group assert their wants over those of wider society?

Looking at the three main moral theories again:

  • Deontology provides only hard boundaries for ‘perfect duties’ and no guidance at all for ‘imperfect duties’.
  • Utilitarianism considers others but action is based on our judgement of what they want.
  • Virtue Ethics, as usual, provides no specific guidance, but most closely matches what is required…

Our judgement of the balance is derived from our experience within an environment. Consequently, our judgement depends on the environment. And our wants are also dependent upon our experience within the given environment.

This seems to be taking us towards a ‘moral relativist’ position:

  • Descriptively: I note incidentally that people do in fact disagree about what is moral,
  • Meta-ethically: I acknowledge that in such disagreements, nobody is objectively right, and
  • Normatively: I assert that we ought to tolerate the behaviour of others even when we disagree about the morality of it.

But more on that later.

Intentional Asymmetry

Specific problematic differences in power arise when intentional capabilities differ:

  • In the land of first-order intentional agents, the second-order intentional agent is king. They can recognize that ‘I know what you don’t know’ for their own advantage.
  • In the land of second-order intentional agents, the third-order intentional agent is king. As has been noted already, it provides the opportunity for deception. They can use ‘I know that you don’t know what I know’ for their own advantage.

As Robin Dunbar has postulated, creatures capable of higher orders of intentionality will be better at competing with others and this explains the evolutionary growth in human brain size.

This asymmetry can be turned the other way around though. Moral agents (second-order or higher) can (and normally do) include non-moral (first-order or lower) agents as morally significant (such as infants and animals). But they can exclude other intentional agents, deeming them to be morally insignificant (such as electronic computers or robots). (How we deem agents to be morally significant or otherwise is not dealt with here.)

However, a recognition of non-moral agents as morally significant is not necessarily altruistic. Typically in societies, many of those ‘morally significant’ non-moral agents will grow into moral agents whilst retaining their memory through this process. Hence it is beneficial to the higher-order agents to treat them well in order to maintain a reputation in the future.

Note that, similarly, a third-order reputational agent can discriminate between reputationally-significant and reputationally-insignificant agents – ignoring how they are thought of by the latter whilst still recognizing that the latter are moral agents. Example: masters and slaves.

Moral and Immoral Growth

Recall the first three stages of Kohlberg’s six stages of moral development:

  • Stage 1: Infantile obedience to authority.
  • Stage 2: Juvenile interest in the needs of others only insofar as it might serve one’s own interests.
  • Stage 3: Adolescent conformance to social standards for approval from others and being well-regarded.

Notice how there is similarity between these stages and the early orders of intentionality in the progression from pre-moral to moral and on to reputation: from ‘x’, through ‘I want x’ and ‘I know you want x’ to ‘I know that you know I want x’.

But as well as this moral growth, there is an equal and opposite opportunity for immoral growth, through to:

‘I know that you don’t know what I know/want’.

37: From Biological Is to Moral Ought

Taking Stock

Summarizing what has been said so far…

Normative moral theories tell us what we should do but not why. Why should we value the things that the moral theories promote? Those moral theories have no explicit purpose. In the absence of one or more Gods to serve or glorify, the purpose of a person is unclear. Leaving that issue to individuals themselves to resolve, the purpose of morality then becomes to provide a mechanism for the self-regulation of individuals’ behaviour, balancing the conflicting purposes of oneself and others. This conflict also applies across time, including between ‘me-now’ and ‘me-in-the-future’.

To act morally means to think of others – to include their wants in your deliberations that decide actions. Without this ability, there is no morality. Furthermore, the ability of others to infer what you are thinking creates the need for an individual to maintain a reputation. Thus, what would otherwise be an externally-imposed regulation of individuals becomes self-regulation.

In a homogeneous population of humans with similar basic needs and wants, a purpose for morality will automatically emerge: individuals’ desire for well-being for themselves and for their kith and kin will lead to a common desire for the well-being of everyone – something very similar to the concept of ‘the greatest happiness for the greatest number’. This state of affairs could be otherwise; it does not mean that morality should be like this, but that is how it is.

Is and Ought

Normative philosophers may try to stand aloof – to stand apart from the imperfections of the material world by anchoring their theories to the (supposedly pure) domains of logic and reason, either in support of Gods or instead of them. They thereby maintain a clear separation of ‘ought’ from ‘is’.

But, before moving on to look at ‘ethical naturalism’, which rejects this ‘is’/‘ought’ distinction and aims towards a ‘science of morality’, I assert that the ‘is’/‘ought’ separation is, at best, only partially achievable anyway. At least some material facts must impinge on normative moral theories. A morality that applies to humans has to acknowledge some biological facts.

Morality and Mortality

It is an inescapable biological fact that humans are born, they grow and they die. Before they are born, and for some considerable time afterwards, they are not moral agents. As has been said so wonderfully bluntly (in a post about Robin Dunbar’s Levels of Intentionality):

If you’ve ever wondered why human babies are so utterly useless compared to the get-up-and-go offspring of other animals, it’s because we’re all essentially born 12 months premature. We’ve evolved such gigantic heads (really nothing more than pretty brain cases) that human babies have to be evacuated unfinished at 9 months, otherwise they’d be stuck forever. The rest of a baby’s development has to be finished off outside the womb.

Humans grow into their moral-ness and this growth takes years.

After they die they are no longer moral agents (although some may regard them as being capable of acting from an ethereal realm in the afterlife in some limited capacity). Unfortunately moral capability is sometimes lost for some considerable time before death.

Both before and after the period in which a human is a moral agent, they are not a moral agent, although they will be ‘morally significant’ for at least some of that time.

This process is not really acknowledged in any fundamental way by two of the three main moral theories. Virtue ethics only acknowledges it implicitly.

Without mortality, morality could be quite different. Imagine a society of manufactured electronic androids placed into a predefined static environment, and that this society flourishes as a result of cooperation, through one robot considering the intentions of others. Software (or ‘wetware’) could be downloaded into each android at the time of manufacture, before its ‘birth’ placement into the shared environment. Contrary to the human example above:

Androids could be evacuated finished and mature.

[Image: ‘Cooperating Bots’ (Symbrion/Replicator swarm robot projects) – http://spectrum.ieee.org/automaton/robotics/robotics-software/symbrion_and_replicator_swarm_robot_projects]

In such a society, the correct action in a particular circumstance would apply for every agent and would be time-invariant too. This would then be in keeping with the simple, universal, static moralities that philosophers have traditionally advocated or sought.

Returning to the consideration of humans, those traditional moralities (such as Utilitarianism and Deontology) must somehow accommodate the biological facts of our existence, and they must do so with ad hoc measures. To provide a deliberately absurd example, imagine a practical measure for Deontology in which:

  • A person is not moral before their 18th birthday. As such, they should not be harmed, just as animals should not be, as harming them reflects badly on the morally-accountable individuals doing the harming. During this time of their youth, they must acquire logic and reason and then apply them to determine the moral rules of behaviour.
  • For the rest of their lives, they then apply those rules without exception.

Here is an absurdly strong (overnight) distinction between an almost-sub-human minor and a fully-responsible adult.

In contrast, Kohlberg’s account of moral development inherently recognizes growth from an immature, pre-moral state, becoming moral in time.

In conclusion, facts ‘in the world’ determine the resulting morality. For humans, we cannot ignore the fact that we are mortal. Moral issues concerning the responsibilities of the young and the infirm must be dealt with. And because of our mortality, culture must be a carrier of our morality from one generation to the next, so there are also moral issues concerning the teaching of the young.

Next: Ethical Naturalism
