Ethics 101

This ‘Ethics 101’ introduction is the first part of the ‘From Neural Is to Moral Ought’ series of talks, which lays some of the foundations for subsequent parts:

  • The Is-Ought problem,
  • Utilitarianism as an example moral system, and some of its problems, and
  • Utilitarianism as an optimization problem.

1: Introduction

The title for this talk comes from Joshua Greene’s paper ‘From Neural Is to Moral Ought’, which makes the connection between what is inside our heads and how we should behave. And there are others, most prominently Patricia Churchland and Sam Harris, who are also writing about this relationship.

In asking specifically about how neuroscientific knowledge affects our ethics, we cannot avoid the philosophical problem of how any knowledge has an effect on our ethics. I start off by looking at this before then presenting an ethical framework based on our understanding of how the brain works.

With increased knowledge of how our brains do what they do, we can wonder about consciousness and concern ourselves about living in a deterministic world devoid of free will. Here I largely ignore these topics (I have started dealing with them elsewhere in ‘Could Androids Dream of Electric Sheep?’ and ‘Free Will/Free Won’t’). But I finish the talk by looking briefly at some practical implications of applying our new-found knowledge to society.

2: Is and Ought

In its most abstract terms, that branch of philosophy called ethics is about the rights and wrongs of the actions of agents within a shared environment. In more everyday terms, it is (normally) about how people should behave.

Conventionally, there are four major sub-disciplines within ethics:

  • Descriptive ethics: what do people think is right (and wrong)?
  • Normative ethics: what is the right (and wrong) way for people to act?
  • Meta-ethics: what does right (and wrong) mean?
  • Applied ethics: how do we apply the above to specific problems?

A basic starting point within meta-ethics is the ‘is-ought problem’ – David Hume’s identification of the difficulty in getting from descriptive statements about the world to normative, moral statements of how people should behave. This severing of the connection between ‘is’ and ‘ought’ is called ‘Hume’s Guillotine’ and is commonly summarized as:

“You can’t derive an ‘ought’ from an ‘is’.”

Related to this is G. E. Moore’s ‘naturalistic fallacy’ – it is a fallacy that morality should follow nature. This is often used to counter evolution-infused claims. An extreme example: just because the female praying mantis devours her mate during intercourse, it doesn’t mean it’s OK for you to! (There is also its opposite, the ‘moralistic fallacy’ – it is a fallacy that nature should follow morality, such as trying to provide a moral explanation for sexual cannibalism.)

However, in everyday life, we reason from ‘is’ to ‘ought’ all the time. For example, we might make a statement:

“I have an exam tomorrow therefore I ought to study tonight.”

The ‘is’ statement (having an exam) and the ‘ought’ statement (studying tonight) are explicit. But there are various implied statements in this:

  • goal/purpose/value: ‘I want to pass the exam’
  • predictions: ‘studying tonight will improve my chances of passing the exam’
  • contingent states: ‘I have not already bribed my way to getting the answers in advance’

We can reformulate going from an ‘is’ to an ‘ought’ so that the implicit goal/purpose/value is made explicit, without any problem. But those goals/purposes/values are conditional. For example:

  • If I value passing exams then I ought to study tonight.

This is then a (Kantian) ‘hypothetical imperative’ (as opposed to his more famous ‘categorical imperative’). Similarly, we can reformulate other ‘ought’ statements, such as the following, into hypothetical imperatives:

  • A mobile phone ought to have a battery life of at least 16 hours.
  • A good terrorist ought to have no problem in being able to detonate improvised explosive devices (IEDs) remotely using a couple of stolen mobile phones.
  • The undercover agent ought to have supplied the terrorists with detonators that were defective.

But this is just playing around with the words for clarity’s sake. The remaining, essential problem lies in the choice of those values/goals/purposes:

  • Is it good to pass exams?
  • Is it right for you to bribe someone to get the answers to the exam in advance?
  • Is it right to steal a mobile phone?
  • Should the terrorist be detonating the IED?
  • Should the agent supply good equipment that will result in people getting killed, in order to build up trust with the terrorist organization?

This breaks down into two big issues:

  • Where do we get our values from?
  • How do we resolve the problem of when values conflict?

It is conflict that lies at the heart of moral problems:

  • Conflict between agents.
  • Within an agent, choosing between values: to go to the beach or to stay at home studying?
  • Conflict that involves uncertainty: to do a moderate good or risk doing something that would be really good?

And the setting of values can be:

  • Individually chosen: either principled (reasoned) or arbitrary,
  • The individual can submit to a higher authority (Note: Arabic ‘Islam’ = ‘submission’), or
  • The individual can be involved in making collective decisions with others e.g. democratically.

Ultimately, regardless of the case made and all its appeal to reason, Hume’s Guillotine can always be used to cut the connection from ‘is’ to ‘ought’. We can go to extremes in imagining a scenario that virtually everyone would say was bad, but still disconnect the ‘is’ from the ‘ought’: You may say I ought not trigger the serum release that will cause the agonizing death of millions but:

  • Why is that a bad thing?
  • Why should I do the good thing?
  • After all, you can’t get from an ‘is’ to an ‘ought’!

This is an example of ‘Moral relativism’: we cannot condemn or criticize the behaviour of others because what people consider right and wrong is entirely shaped by the traditions, customs and practices of their culture. (Note: the name ‘ethics’ is derived from the Greek word for custom/habit; Marvin Bower defined culture simply as ‘the way we do things around here’.)

In contrast, ‘Moral realism’ asserts that absolute, universal moral truths exist, irrespective of circumstances. Realists who are ‘ethical naturalists’ assert that we can go from ‘is’ to ‘ought’ in more than the basic, goal-orientated way – that morality can be reduced to facts in the natural world, i.e. science. Other (‘non-naturalist’) realists maintain the distinction between ‘is’ and ‘ought’, with reason being the source of morality, irrespective of how the world is.

Going back to relativism, compare this making of a distinction between ‘is’ and ‘ought’ with the distinction I tried to maintain in the previous talk, ‘The Science Delusion’:

  • between the metaphysical and the methodological,
  • i.e., between ontology and epistemology,
  • or simply: between ‘is’ and ‘knowing’

Making the distinction may be useful, but it wasn’t as clear-cut as that. And similarly, the line between ‘is’ and ‘ought’ is not clear-cut. Just as a devout scepticism leaves us knowing nothing, in ethics the is-ought distinction leaves us ‘oughting’ nothing. We can and do go from ‘is’ to ‘ought’ in practice – in more than the indirect, goal-oriented mechanisms described above.


From ‘knowing’ to ‘ought’ and from ‘ought’ to ‘knowing’

By the way, following on from the previous talk, I would try to maintain the ‘is’/’knowing’ distinction in that we don’t go directly from ‘is’ to ‘ought’ but from ‘is’ to ‘knowing’ (from the metaphysical to the epistemological) and then from ‘knowing’ to ‘ought’ (from the epistemological to the ethical) – from the chicken to the egg in the picture below.

3: The Moral Landscape

As previously stated, ‘normative ethics’ is that sub-division of ethics that asks what is the right (and wrong) way for people to act? There are three major moral theories within normative ethics:

  • Deontology: this emphasizes the adherence to rules in determining the goodness of an action.
  • Consequentialism: this emphasizes the consequences of an action in determining the goodness of the action.
  • Virtue Ethics: this emphasizes the goodness (good character) of the agent performing the action.

Consequentialism is often summarized by the saying:

“the end justifies the means”.

For the dominant Consequentialist theory, Utilitarianism, that ‘end’ is happiness. Jeremy Bentham, the founder of Utilitarianism, famously talked of:

“it is the greatest happiness of the greatest number that is the measure of right and wrong”.

Bentham created a ‘Felicific Calculus’ as a conceptual way of quantifying this happiness. In somewhat modified form, we might express this as:

U = Σ_{i=1}^{Ptotal} ∫_{now}^{∞} I_i(t) · P_i(t) · D(t) dt

U is the sum of the net happiness of all Ptotal people on the planet (7.24 billion and counting) resulting from a particular course of action, dependent on Ii(t), the intensity of the pleasure/pain occurring for person i (positive for pleasure; negative for pain) from now until eternity (notwithstanding that this ignores the additional people born during this time).
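Bentham’s calculus can be read as a computation. Here is a minimal Python sketch with entirely invented numbers, folding the probability term into the intensities and using a simple per-step discount factor to stand in for D(t):

```python
# Toy sketch of the felicific calculus as a computation (all numbers invented).
# Each person's pleasure/pain intensity I_i(t) is summed over discrete time
# steps; an optional discount factor down-weights the far future.

def total_utility(intensities, discount=1.0):
    """intensities[i][t] = net pleasure (+) or pain (-) of person i at step t."""
    return sum(
        sum(I * discount**t for t, I in enumerate(person))
        for person in intensities
    )

# Two people over three time steps:
people = [[1.0, 0.5, 0.25],    # person 0: diminishing pleasure
          [-0.5, 0.0, 0.5]]    # person 1: short-term pain, later gain
print(total_utility(people))         # 1.75: undiscounted
print(total_utility(people, 0.9))    # lower: the future counts for less
```

The infinite time horizon and the 7-billion-person sum are, of course, exactly the parts this toy version quietly replaces with three time steps and two people.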

Let us say I am accosted by a beggar asking for some spare change. I could plot a graph of U versus money given. $1 might produce some upward movement of Ubeggar and the negative movement of Ume might be negligible, offset by my smug feeling of benevolence. This results in a net increase in total U. By this measure, it would be a ‘good’ thing. $2 might increase this further. At $1000, the beggar has gone off and mis-spent his gains and told his friends which has in turn contributed to a culture of dependency and I am no longer feeling so smug. U has turned negative.

http://www.soest.hawaii.edu/wessel/papers/1994/JGR_94/Fig_2.gif


A simple landscape: Cross Seamount, South-West of Hawaii

We could bring in another axis – of money I additionally give to a local charity. $5 along this axis might produce a higher total U than $5 on the other axis. We would end up with a 3-D ‘landscape’.

Since there are generally many, many more than two options available, we actually have a ridiculously high-dimensional graph. But for the sake of simplicity and our ability to visualize, I will stick with two options and a 3-D graph.

As does Sam Harris in his book ‘The Moral Landscape’, which essentially has the very same framework:

“Throughout this book I make reference to a hypothetical space that I call “the moral landscape” – a space of real and potential outcomes whose peaks correspond to the heights of potential well-being and whose valleys represent the deepest possible suffering. Different ways of thinking and behaving – different cultural practices, ethical codes, modes of government, etc. – will translate into movements across the landscape and therefore, into different degrees of human flourishing.”

https://headbirths.files.wordpress.com/2014/06/d1ac4-moral-2.jpg

Sam Harris’s The Moral landscape

Note that this is all very conceptual. It completely ignores our practical inability to predict consequences to ourselves in the short term let alone all others for all time. And it completely ignores what those functions Ii, Pi and Dt are – or rather, who determines them, such as:

  • you,
  • me,
  • an ‘ideal observer’,
  • the state, or
  • some aggregation of everyone.

Utilitarians want the functions to measure happiness. Other consequentialists agree that it is the consequences that matter in morality but they want to optimize something different. For example State Consequentialists want to maximize the well-being of the state, not of individuals within the state.

The point I am wanting to make here is that, regardless of whatever it is we’re wanting to measure, we have an optimization problem. We don’t necessarily need to find the maximum across the entire space, but we do want to find places higher than where we are. We want to find better places in the landscape. (Note: here I talk of a ‘maximization problem’ – that of maximizing utility – but the problem could be inverted into one of minimizing pain. Here, pleasure is whatever is wanted – and it could of course be pain instead of pleasure, if we were so inclined and it was up to us to define the utility function.)

Finding ‘better’ places in a landscape might seem easy.

This is true if the landscape we are dealing with is simple – analogous to an island with a single hill on it. A simple ‘always go uphill’ strategy will be effective. But in the real world, the moral landscape can be highly complex – more like the fjords of Norway – and we need better strategies.

https://eosweb.larc.nasa.gov/sites/default/files/project/misr/gallery/norway_coast.jpg

A more complex landscape: the fjords of Norway

In practice, finding significantly better places in high-dimensional spaces is extremely difficult. We cannot see where the better places are – it is a bit like having the landscape covered in a thick fog; we cannot survey it with our eyes. The problem with following a simple tactic like ‘always go uphill’ is that we get trapped at a ‘local maximum’, and then have no way of finding significantly better places to be. Sometimes you have to go downhill to reach higher. We might want to go to the beach today, but studying will get us out of our local maximum and onto a higher peak in good time.
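The ‘always go uphill’ trap can be sketched in a few lines of Python. The two-peak landscape below is entirely invented; the point is only that naive hill-climbing stops at whichever peak it started nearest to:

```python
# Minimal hill-climbing sketch on an invented two-peak 1-D 'landscape'.
# Always-go-uphill gets stuck on the small local peak; something like a
# restart from elsewhere is needed to find the higher one.

def landscape(x):
    # small hill at x=2 (height 1), big hill at x=8 (height 3)
    return max(0.0, 1 - (x - 2)**2) + max(0.0, 3 - 0.5 * (x - 8)**2)

def hill_climb(x, step=0.1):
    # step left or right, but only ever uphill
    while landscape(x + step) > landscape(x) or landscape(x - step) > landscape(x):
        x = x + step if landscape(x + step) > landscape(x - step) else x - step
    return x

print(hill_climb(1.5))   # stops near the small local peak at x=2
print(hill_climb(6.0))   # started in the big hill's basin: stops near x=8
```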

4: The Absurdities of Utilitarianism

The formulaic definition of utilitarianism as the maximization of U…

U = Σ_{i=1}^{Ptotal} ∫_{now}^{∞} I_i(t) · P_i(t) · D(t) dt

…is, of course, absurdly precise in that we cannot determine good values for the inputs into the formula and, not least, there is our general inability to predict consequent states. But in committing to something so definite, it lays us open to the generation of absurd counter-examples that allow us to dig deeper about what we really think is right.

Here I look at a few counter-examples…

A: Maximizing Total Utility

With maximizing total utility, we can presume that there is some feedback to stabilize the population somewhere between 1 and a huge runaway population. Too high and people are competing for the world’s resources; too low and there are not enough people to sustain vibrant cultures.

But what is the right level? Consider the following…

There is currently a world population of about 7.24 billion, and rising. Sadly, a fair number of these people are suffering from the effects of war, conflict and famine. But imagine that, sometime from now, mankind suffers some human-induced apocalypse (be it nuclear, environmental or whatever) and the population plunges. By the year 2525, the population has recovered to a stable and prosperous 6 million – about the population and satisfaction levels of Denmark today (Denmark regularly comes out at or near the top of various world happiness surveys).

Which would actually be better – the situation now, or that in the year 2525? The total utility now might be higher than in the hypothetical scenario of 2525, just due to the sheer weight of numbers of people alive today. But is that preferable? Maximizing the somewhat grandly-metaphysical ‘total conscious positive experience’ of the universe by continually adding additional lives of only barely positive net utility is what Derek Parfit has called ‘the repugnant conclusion’.
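The repugnant conclusion is easy to reproduce with toy numbers. The happiness figures below are entirely invented; only the population sizes echo the scenario above:

```python
# Parfit's 'repugnant conclusion' in toy numbers (happiness values invented):
# a vast population of barely-worth-living lives can out-sum a small, very
# happy one, even though its average happiness is far lower.

huge_barely_positive = [0.1] * 7_240_000   # a scaled-down 'now'
small_happy          = [9.0] * 6_000       # a scaled-down '2525 Denmark'

total_now, total_2525 = sum(huge_barely_positive), sum(small_happy)
avg_now,   avg_2525   = total_now / 7_240_000, total_2525 / 6_000

print(total_now > total_2525)   # True: total utility favours the many
print(avg_2525 > avg_now)       # True: average utility favours the few
```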

Maybe we should be maximizing average happiness rather than total happiness?

B: Maximizing Average Utility

Now consider that, in the year 2525, the leader of the mere 6 million global population announces, with no irony, that it is disgraceful that 50% of the population have below-average happiness! Thus, a program of (painless) euthanasia is introduced to remove the most miserable 1% of the population every year. Over the course of the following years, the average happiness per person is raised.
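The absurdity is mechanical: culling from the bottom always raises the average, even though no one is made any happier. A toy sketch with invented happiness scores:

```python
# Toy sketch of the average-utility absurdity (happiness scores invented):
# repeatedly 'euthanizing' the unhappiest member always raises the average.

population = [2, 3, 5, 7, 8, 9]

def average(pop):
    return sum(pop) / len(pop)

def cull_unhappiest(pop):
    return sorted(pop)[1:]   # remove the single least-happy member

while len(population) > 1:
    print(round(average(population), 2))   # 5.67, 6.4, 7.25, 8.0, 8.5 - rising
    population = cull_unhappiest(population)
```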

C: Transferring Utility: The Utility Monster

Robert Nozick has criticized Utilitarianism through his concept of the ‘utility monster’. In this, the population stays the same but one individual accumulates happiness at the expense of others, such that the total (and average) utility increases. The utility monster is able to derive greater happiness from the resources available; hence, with maximization, it makes sense to transfer utility to him. For example, imagine 10 people each with 1 ‘hedon’ of pleasure (a hedon is a fictional unit of currency for hedonists who subscribe to the felicific calculus). But one of these is the Utility Monster; for every hedon taken from another, the Utility Monster gains 2 hedons. Thus, maximum utility is achieved when the Utility Monster has 19 hedons and the other 9 people have nothing.
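The arithmetic of the transfer can be checked directly, using the numbers from the text:

```python
# The utility monster in code: 10 people with 1 hedon each; the monster
# gains 2 hedons for every 1 hedon taken from someone else.

hedons = [1] * 10   # last entry is the Utility Monster
MONSTER = 9

def feed_the_monster(state):
    state = list(state)
    for i in range(len(state)):
        if i != MONSTER:
            state[MONSTER] += 2 * state[i]   # the monster doubles what it takes
            state[i] = 0
    return state

after = feed_the_monster(hedons)
print(sum(hedons), "->", sum(after))   # 10 -> 19: sum-maximization approves
print(after)                           # [0, 0, 0, 0, 0, 0, 0, 0, 0, 19]
```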

Whilst ‘the greatest happiness of the greatest number’ sounds egalitarian, this counter-example shows that it may not be. It doesn’t seem right – it seems too capitalist! It suggests that who owns the utility is relevant. In the utility equation,

U = Σ_{i=1}^{Ptotal} ∫_{now}^{∞} I_i(t) · P_i(t) · D(t) dt

there is an implied ‘+’ with the Σ operator. But it is not necessarily a simple scalar addition that we should be using. Imagine that 10 individuals A, B, C, D, E, F, G, H, J and U all start with one hedon (sum = 10; the ‘utility state’ is [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]). But by the end, U has all the utility (sum = 19; the ‘utility state’ is [0, 0, 0, 0, 0, 0, 0, 0, 0, 19]). Now clearly:

19 > 10

but it is not necessarily true that:

[0, 0, 0, 0, 0, 0, 0, 0, 0, 19] > [1, 1, 1, 1, 1, 1, 1, 1, 1, 1].

It might be that we could concoct a different Σ operator that was fairer in sharing utility (taking the minimum over all individuals is one option – the utility of the group is gauged by the utility of its unhappiest member).
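Swapping the aggregation operator is, in code, a one-line change. Comparing the two utility states from the text under a plain sum and under a minimum (a Rawlsian-style ‘maximin’ reading):

```python
# Comparing two aggregation operators on the utility states from the text:
# a plain sum prefers the monster outcome, while taking the minimum
# (judge the group by its unhappiest member) prefers the equal one.

equal   = [1] * 10
monster = [0] * 9 + [19]

print(sum(equal), sum(monster))   # 10 19 -> sum calls the monster state 'better'
print(min(equal), min(monster))   # 1 0   -> min calls the equal state 'better'
```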

D: Owning Utility: The Hospital Donor

The classic problematic example of Utilitarianism concerns the unfortunate person who goes to the hospital for tests to see if he can provide an organ for transplant to his ailing mother. He can! And what’s more, his other organs are good matches for two other patients within the hospital – but he needs those to stay alive. The utilitarian doctors perform the felicific calculus, seize him and sacrifice him for the benefit of the others!

This is similar to the Utility Monster problem in transferring utility but it operates in the opposite direction – trading the high utility of an individual for the greater good.

One problem here is that the moral calculus has only been applied in a narrow sense. The consequences extend far beyond the operating theatre. Once news of this event became public, there would then be a strong disincentive for people to make organ donations, or even to visit hospital. Perhaps utilitarian physician-politicians would then look to prisoners or a specially-bred underclass for their supply of spare body parts.

U = Σ_{i=1}^{Ptotal} ∫_{now}^{∞} I_i(t) · P_i(t) · D(t) dt

An Application of Utilitarianism?

The problem here again is that there is no regard for ownership of the utility. It is just lumped together. This may be compatible with ‘State Consequentialism’ where the wellbeing of the state is paramount. Individuals are just components of a greater state with no intrinsic worth themselves. But for most moralities, there is the feeling that one person’s happiness (and in extremis, their life) is not anyone else’s to take away.

E: Owning Action: Jim, Pedro and the Indians

Bernard Williams provided the following counter-example as a criticism of Utilitarianism. Jim, a lone botanist exploring a war-torn South American country, finds himself in a junta-controlled town. Pedro, the local army captain, presents him with 20 insurgents and the following proposition:

“I am going to execute all these rebels to serve as an example to the others. But, if you kill one of them, I will grant mercy on the rest.”

Clearly in this case, the local moral calculus is easy. Earlier counter-examples have involved a trade-off between the utility of individuals, with winners and losers. But this is a straight choice between a bad outcome and a worse one.

So why is Jim uneasy about saving 19 insurgents? Because, of course, it is he who has to kill the unlucky rebel, rather than Pedro. It makes him complicit. (A minor consideration: it reduces his personal utility in that he would ‘feel bad’ about this but, in the moral calculus, saving 19 lives surely trumps someone ‘feeling bad’.) There is something beyond ownership of utility. In morality, ownership of action also seems to be important.

But here is a trivial counter-counter-example: imagine a librarian who never imposed fines for books that were overdue or never returned. Someone has to dish out punishment if we are to climb down from our little local maxima to get up onto the moral uplands.

F: Owning Intention

Recall that Utilitarianism is a consequentialist moral theory, which means that whether an action is good or bad is determined solely by the consequences of the action, i.e. by the future state of the universe (or future location within the moral landscape). Hence it does not depend on the intention of the agent in acting a particular way. It seems strange to:

  • Judge the agent’s action as bad if the agent’s intention was good but the consequence was bad, and
  • Judge the agent’s action as good if the agent’s intention was bad but the consequence was good.

So finally, here are two more counter-examples, from the ‘Socratic Society’ blog post.

Firstly, that of a good intention leading to a bad consequence:

You are walking home along an alleyway next to the art museum. You see a man in front of you struggling to put a heavy bag into his van. You decide to help him out. Together you easily lift the bag and it slides in. The man thanks you. Later at home, the newsreader on the television says “A priceless artefact was stolen from the art museum today”

And secondly of a bad intention leading to a good consequence:

A man decides to kill his wife. He gets a gun and shoots her in the chest. She is rushed to hospital. A scan reveals that the bullet missed her vital organs, but a cancer is discovered in her lung. At this early stage it is easy to treat, and the treatment saves her life.

But note: whether the consequence is good or bad depends on who gets to choose the utility function. For example, the thief would presumably think that the good intention led to a good outcome, unlike his helper.

Regardless, these examples do highlight that it does seem to us that the intention of the agent should play some role in whether their action is good or not.

Summary

These counter-examples to utilitarianism have revealed problems of aggregating individuals’ utility and problems of ownership. Aggregation is not just about pooling the utility of each individual: who owns the utility also matters. And the ownership of action and intention also seem to be important.

Elsewhere, the combination of intention and action has been used to explain our feeling of conscious will. This is the only mention in this introduction to ethics of anything like ‘free will’. (For more, see the post on ‘Intention, Action, Will’, which was part of the ‘Free Will / Free Won’t’ talk.)

To summarize, there’s more to overcome with any consequentialist moral system than:

  • Determining what is to be maximized, and
  • Determining how to use this maximization in practice.

Ownership, or agency, is important.

To be continued. Coming up: Prospectarianism


14 Responses to Ethics 101

  1. Pingback: Prospectarianism | Headbirths

  2. Wyrd Smythe says:

    Excellent article — I find discussion about morality to be one of the more fascinating topics. One question: do you distinguish between “morality” and “ethics”? (It may be particular — even peculiar — to me, but I consider the former metaphysically-based and the latter rationally-based.)

    It has long struck me (naively, perhaps) that, firstly, “ought” is a uniquely human concept (like “justice”) which has no real analog in nature — suggesting it may depend on whatever makes us separate from animals — and, secondly, that it seems to depend on a framework that sees all humans as somehow equal. That would be one reason against maximizing one person’s happiness at the expense of nine others.

    Metaphysical views see us as all “children of god” (or equivalent) which provides that framework. That seems much harder to accomplish physically and rationally. I’m really looking forward to where you’re taking this!

    p.s. In section 4.A., fourth graph, should the “2025” be “2014”? The text seems to read as if it “ought” to be.

    • headbirths says:

      Thanks for your comments – I’ll try to deal with them all (briefly) in turn…

I remember an essay question years ago (that I didn’t choose to answer) about the difference between morality and ethics, but I’m still none the wiser. I suspect that people generally used to make a distinction between the two but no longer do. (A quick google revealed another ‘Ethics 101’ blog that provides a different answer: http://www.ianwelsh.net/ethics-101-the-difference-between-ethics-and-morals). I make no distinction (so you should read all references to ‘morality’ and ‘ethics’ interchangeably), but I’m prepared to accept there is a useful distinction.

I’m reluctant to make any claims about humans being unique among animals as it smacks of the traditional cultural chauvinism that I/my tribe/nation/race are among the ‘chosen’/at the centre of the universe/superior to you/your tribe/nation/race etc. Maybe other great apes have some capacity (albeit limited) to ‘design’ and enforce their environment to be something other than how they naturally find it?

      I have long suspected (not too much thought given to this though) that if there is a universal characteristic of morality it is of equality within one’s peer group. (The problem then is determining who is within one’s peer group.) But I don’t see that equality is than a reason against maximizing one person’s happiness at the expense of nine others.

      At the moment, I don’t really know where this talk is taking me! I only know what I’ll be writing about in the next 4 blogs or so. (And that it will be pragmatic, science-friendly and metaphysical-light.)

You’re right – the 2025 date seems wrong. I originally hypothesized about an apocalyptic reduction in population in 2025 (this then being the date of ‘peak population’) and compared that against 500 years later (remember the song ‘In the Year 2525’?). But this obviously got lost somewhere along the way. I’ve now edited the post to replace both 2014 and 2025 with ‘now’. Thanks!

      • Wyrd Smythe says:

        I do indeed remember the song, and the dates you used did have me thinking about it.

        The peer group idea is interesting; I foresee some pitfalls (as you suggest) in determining exactly who is in that group. (Obviously, nations, political groups, religions and even sports fans draw a hard line between “us” and “them.”)

        I think one of the great questions (one we may someday answer) is whether the “higher” animals are on a continuum with us or whether there is a definite “gap” between them and us. One of my all-time favorite (serious) quotes is W.G. Sebald’s, “Men and animals regard each other across a gulf of mutual incomprehension.”

        Looking forwards to the next posts!

  3. Wyrd Smythe says:

    BTW: Two additional objections to the story of Jim and Pedro: Jim has no way of knowing Pedro will keep his word, so his participation may not save lives and will reduce Jim’s happiness. And Jim may be drawn into a conspiracy with Pedro if the latter documented Jim’s actions and used them as blackmail.

    This reminds me of “stranded group — insufficient resources” conundrums where it seems the way out is to eliminate some members of the group so that others survive. But this requires certainty that rescue or other solution is not imminent. And usually such certainty is not possible.

    • headbirths says:

A big problem with these thought experiments is that there is no uncertainty. They are games and you have to play along with them in the spirit in which they are meant. With ‘Jim and the Indians’, the problem could be qualified further to get rid of the uncertainties: (i) you believe Pedro is a man of honour; (ii) you believe you will soon leave the country and never cross paths with Pedro again. And so on.

I also think the particular problem hasn’t aged well. I think we are supposed to see Pedro as someone somewhat like us but in an extremely difficult moral situation himself. He doesn’t want to kill the indians in cold blood, despite his orders from HQ, but he is fighting in a civil war in a newly-independent former colony against political extremists. More than a generation on, we are now much more likely to sympathize with the indians. ‘Partisans’ seems a better term than ‘indians’. The intention of the thought experiment should be to question our morals (maximization of happiness, ownership of actions) rather than our political affiliations.

      • Wyrd Smythe says:

        Good points, all. Synchronistically, my intended next post (in a somewhat silly way) touches a bit on the topic of terrorism, and while it seems clear that terrorist actions are immoral, one also has to ponder the question: What justifies terrorist actions if a group is (a) genuinely oppressed by a much stronger group also practicing evil actions, and (b) the oppressed group seems to have no other alternatives. Per your thought experiment, assume those perceptions are correct.

        At what point (if any) does a small, weak, oppressed group have the “right” to level the playing field using terrorist tactics?

        What makes the question interesting is that the USA arguably began with terrorists rebelling against the King. We tend to hold that we were “right” to do so — The Boston Tea Party, for one example, is considered a rousing victory for our side. Is it merely having won that justifies our actions?

        • headbirths says:

          I had a look for your post on terrorism but didn’t find it. Did you do it?

It’s taken a while (9 months) to respond, as I had hoped I would have gone a long way to answering it in my posts. I seem to be getting close to that now. But my short response would be:
          * Ethics is about balancing my wants with yours.
          * This extends to groups – balancing my group’s wants with those of your group (or with those of the wider group).
          * As an individual and as part of a group, you have a duty to (i) think about the wants of the other group and (ii) improve your capability of understanding the wants of others in general.
          * There are circumstances where terrorist activities could be justified (to whom? to one’s conscience) – particularly when the other side is disregarding your wants.
          * I don’t like the term ‘evil’ – it (literally) demonizes the other side; thinking about the wants of others humanizes them.

          Speaking from my side of The Pond: I see those American colonists as ungrateful and opportunistic (having been saved from those nasty Frenchies a few years previously) rather than as terrorists per se, not taking into account (i) the difficulties of governing a colony from 3000 miles / 3 weeks away and (ii) the time required for the mother country to shift politically towards a fairer relationship with its main colony. Probably a failure of both sides in not trying to see the problem from the others’ perspective. But you may not see it that way?

          • Wyrd Smythe says:

            “I had a look for your post on terrorism…”

            Judging by the dates involved (and what I wrote about “somewhat silly”), I probably had this one in mind:
            http://logosconcarne.com/2014/08/12/bb-43-anti-batman/

            It was an attempt to carry on with thoughts I’d expressed in this post:
            http://logosconcarne.com/2013/10/23/deflection-and-projection/

            And in the deliberately aggressive:
            http://logosconcarne.com/2013/10/17/republican-terrorism/

            I’m not proud of that last one… I keep trying different stances as a blogger, and that was during my “loud mouth asshole” phase. It seems to work for others, but I’m not sure I have the knack of being an engaging “loud mouth asshole.” (Plus, I was really angry with the American Republican party at the time.)

            I like your bullet points; well stated. Fair point about “evil.” I tend to equate it with “immoral” but you’re quite right about the demonizing effect. Something for me to keep in mind, as I do tend towards the hyperbolic.

            Thinking about it, one reason I equate the American colonists with terrorism is their use of sensible (albeit “unsporting”) war tactics. As I understand it (and it’s entirely possible I don’t), the British used a standard tactic in use for centuries: marching out on the battle field and having at it. That came from an era of sword fighting, knights, and so forth.

            The colonists tended to hide behind trees and in bushes — a tactic that gave a largely untrained army better leverage against a trained standing army.

            There is also the issue that the colonists were acting illegally under the rule of King George, and illegal actions seem more in the terrorist domain.

            Good point about failures on both sides. I’m not sure the two views could be reconciled. That often seems the case with governments and countries.

  4. Pingback: Deontology | Headbirths

  5. Pingback: A Unified Morality | Headbirths

  6. Pingback: Guilt and Shame | Headbirths

  7. Pingback: My Brain Made Me Do It | Headbirths

  8. Pingback: Some Good Reason | Headbirths
