What I Know and Why I Know It

Neuroscience and the philosophy of knowledge

Talk originally presented on 18 October 2013.



“When the facts change, I change my mind. What do you do, sir?”

The above quote (typically attributed to John Maynard Keynes but probably not originating from him) makes it sound so easy. So why don’t we all act in this way?

This talk looks at recent ideas about what the brain is actually doing, and relates this to what philosophers think about how we know things. It speculatively ties together the separate ideas of:

  • Neuroscience: Karl Friston’s ‘Variational Free Energy’ theory of what the brain is doing, involving a combination of (i) ‘minimization of surprise through action and perception’ and (ii) hierarchical message passing.
  • Epistemology: Susan Haack’s ‘Foundherentism’, involving a combination of (i) foundational or correspondence theories of truth and (ii) the coherence theory of truth.
  • Philosophy of Science: Michael Polanyi’s ‘tacit knowledge’
  • Philosophy: Isaiah Berlin’s psychological classification of individuals as either (i) ‘hedgehogs’ or (ii) ‘foxes’.

to present:

  • a pragmatic, physically-grounded theory of knowledge, and
  • an understanding of the difficulties we have in changing our minds.


Part I: Nature’s Secret Trick
1. Big / Small
2. Cortical Columns
3. Hierarchy
4. Adaptive / Predictive Hierarchical Models
5. Learning
6. A Unified Brain Theory
7. Epistemology
8. Foundherentism: A Better Theory
9. Neuroscience and Epistemology

Part II: Knowledge is Personal
10. Pragmatism
11. Impersonal Knowledge
12. From Objectivism to Relativism
13. Between Objectivism and Relativism
14. Michael Polanyi
15. It’s All Knowledge

Part III: Changing Your Mind
16. The Forest of Neurons
17. Anchoring
18. The Adaptive Toolbox
19. Hedgehogs and Foxes
20. Changing your Mind
21. Between White and White

Part I: Nature’s Secret Trick

1. Big / Small

We have a good understanding of the large-scale view of the brain and of the (very) small-scale view of the brain’s components, but we lack an understanding of how the two are related.

The large-scale view is understood through observing the behaviour of others (either as an everyday activity or through psychology) and, in the unique case of ourselves, through introspection.

The small-scale view is understood through basic neuroscience – the functioning of neurons (and the glial cells), and how they connect to one another via synapses so that when one neuron fires, other neurons that it is connected to may subsequently fire.

2. Cortical Columns

The small- and large-scale views are seemingly related through there being 16 billion-odd neurons in the cortex of the brain, with thousands of neurons together forming cortical columns and millions of those columns forming the cortex – sheets of these columns that are then scrunched up, giving the distinctive wrinkly appearance, in order to fit within our skull.


Besides the general astonishment that such a ‘hypothesis’ provokes, it is also remarkable that the structure that gives rise to such varied functionality and behaviour is so uniform. Vernon Mountcastle famously observed this in ‘An Organizing Principle for Cerebral Function’ (1978; note from this and subsequent dates how recent much of this work is). There are phylogenetically older regions of the cortex that are not ‘neocortical’, with fewer than the six layers observed by Santiago Ramón y Cajal, such as the cingulate cortex. There are regions such as the Hippocampus in which there is a seemingly more chaotic organisation between grey matter (un-myelinated neurons) and white matter (myelinated neurons). There are regions in which the cortical columns are of a significantly different size than elsewhere (such as the large columns within the lower visual regions, e.g. ‘V1’). But generally, the brain is remarkably uniform across regions traditionally associated with visual sensation, auditory sensation, motor control, higher-level cognitive functions and so on.

3. Hierarchy


Above the level of the physical ‘building block’ that is the cortical column, we may discern a hierarchy of higher-level components (groups of many, many columns) such as those famously mapped within the visual system of Macaque monkeys by Felleman and Van Essen (1991). At the bottom (marked ‘RGC’) are the retinal ganglion cells – the neurons that carry visual signals out of the retina of the eyes. These connect via the Lateral Geniculate Nuclei (‘LGN’) within the Thalamus up into the primary visual areas of the Cortex, such as ‘V1’. There are then many layers of groups of columns, reaching eventually up to the Hippocampus (‘HC’). The structure is hierarchical in that columns typically connect to columns in the same layer, the layer above or the layer below, but not further afield.

How could all this interconnectivity between groups of neurons produce the advanced behaviour of the whole?

4. Adaptive / Predictive Hierarchical Models

A previous talk (‘Intelligence and The Brain’) introduced Karl Friston’s ‘Variational Free Energy’ theory (2005) of what the brain might actually be doing in order to produce advanced behaviour from assembling billions of relatively simple components. To recap at an almost-absurdly simplified level:

  • There is a hierarchy (as noted above) from low (connected with the environment via senses and motor controls) to high.
  • Within each group of columns, a model is formed of the behaviour of everything below it in the hierarchy.
  • The behaviour of the group of columns is modified through a combination of action (motor output to lower levels) and adaptation (modification of the model in response to sense inputs from lower levels). Overall, the behaviour is said to be that of ‘minimizing surprise through action and perception’.
  • Downward-going signals are predictions. Upward-going signals are prediction errors.
  • When something happens in the outside world, prediction errors will propagate up the hierarchy. Predictions and errors will rattle around the many levels of hierarchy until behaviour settles. (In reality, something is always happening and prediction never settles.)
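The ‘predictions down, errors up’ loop just described can be sketched in a few lines of code. This is an illustrative toy of my own, not Friston’s mathematics: a single ‘unit’ holds a prediction of its input and adapts that prediction to reduce the prediction error – i.e. to minimize surprise.

```python
# Toy sketch (illustrative only) of 'minimizing surprise through perception':
# one unit maintains a prediction of its input and adapts it in response
# to the prediction error. The learning rate is a made-up parameter.

def run(observations, learning_rate=0.1):
    prediction = 0.0
    errors = []
    for obs in observations:
        error = obs - prediction             # upward-going signal: prediction error
        prediction += learning_rate * error  # adaptation: modify the model
        errors.append(abs(error))
    return prediction, errors

# A constant environment: the unit learns to predict it and surprise falls.
final_prediction, errors = run([5.0] * 50)
print(round(final_prediction, 2))  # converges towards 5.0
print(errors[0] > errors[-1])      # prints True: early surprise exceeds late surprise
```

In a hierarchy, many such units would be stacked, each predicting the behaviour of the level below; this sketch shows only the adaptation half of the action/perception pair.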

5. Learning

Within the above framework, learning is the process of going from being unable to predict to being able to predict. These states may be equated with ‘not knowing’ and ‘knowing’ respectively. The process may be seen as (locally) counteracting entropy. In the previous talk (see ‘Entropy, Intelligence and Life’), this was explained with the example of learning what the capital city of Australia is, with reference to boxes containing different densities of gases, in order to make the connection to entropy. In its most abstract form, intelligence may therefore be seen as akin to biological life.

In this talk, we do not need to concern ourselves with any connection to entropy. It does not matter what the underlying technology of our ‘groups of columns’ is – be it neurons, electronic transistors or whatever. In making this point, John Searle famously referred to using ‘beer cans and windmills’. So we could view those dots in boxes representing gas molecules in the different chambers alternatively as empty beer cans used as voting tokens. The group of columns is trying to move from a state of indecision/ignorance (beer cans distributed across many candidate boxes) to a state of knowledge (the majority of beer cans concentrated on one candidate).
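The beer-can picture can be made concrete with a toy calculation (my illustration, not drawn from the talk’s sources): Shannon entropy measures how spread out the cans are across the candidate boxes, and learning shows up as a fall in entropy as the cans concentrate on one candidate.

```python
import math

# Toy sketch: beer cans as votes across candidate answers, with Shannon
# entropy measuring how far the group of columns is from a settled opinion.
def entropy(cans):
    total = sum(cans)
    return -sum((c / total) * math.log2(c / total) for c in cans if c > 0)

ignorance = [5, 5, 5, 5]   # cans spread evenly over four candidate capitals
knowledge = [17, 1, 1, 1]  # cans concentrated on one candidate
print(entropy(ignorance))  # 2.0 bits: maximal indecision
print(entropy(knowledge) < entropy(ignorance))  # prints True: learning lowers entropy
```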

6. A Unified Brain Theory

The behaviour described above can alternatively be presented as the simultaneous operation of two orthogonal processes – a ‘horizontal’ one and a ‘vertical’ one.

The horizontal process is the ‘pushing-together’ (of beer cans) in order to formulate an opinion. In the absurdly simple capital-of-Australia example, the options are obviously mutually exclusive. In more realistic scenarios, the options may not be logically mutually exclusive. The choice may seem fairly arbitrary at this stage – what is important is that a single choice is being made; it is irrelevant whether it is the right choice or even vaguely a good one.

The vertical process is the ‘pulling-together’ across the many levels of hierarchy. This helps ensure that the ‘opinion’ being formulated at any one level in the hierarchy fits in with other levels – and ultimately with the external environment.

Simultaneously, the horizontal and vertical processes push and pull towards forming good opinions – good models of the outside world which allow the brain to make predictions about the environment.

(The pushing and pulling of a network of rubber bands, pulleys and rods may be a better mechanical analogy than Searle’s ‘beer cans and windmills’.)

7. Epistemology

Epistemology is a major branch of Philosophy alongside Metaphysics, Ethics and Political philosophy and concerns problems surrounding knowledge and what it means to know something.

Within epistemology, knowledge has traditionally been defined as ‘Justified True Belief’ (a definition commonly credited to Plato). Knowledge is belief that is deemed to be true by virtue of some justification. There are three main theories of truth – differing in how the truth is justified:

  1. The correspondence theory of truth
  2. The coherence theory of truth
  3. The pragmatic theory of truth

The Correspondence theory of truth is the dominant theory and the most common-sensical. An example: I see a glass of water on the table in front of me. I claim to know that there is a glass of water on the table because:

  • I see the glass and it corresponds to my understanding of what a glass is inside my head.
  • I see the table and it corresponds to my understanding of what a table is inside my head.
  • I see the relationship between the glass and the table and it corresponds to my understanding of what ‘on’ is inside my head.

This seems so obvious, how could anyone believe otherwise? Well, the same argument could be applied to our observations of the Sun:

  • I see the Sun low down in the sky in the East in the early morning.
  • I see the Sun high up in the sky around midday.
  • I see the Sun low down in the sky in the West in the evening.

These observations all correspond to the idea in my head that the Sun is moving around the Earth. And yet we do not believe that the Sun moves around the Earth; we believe it is the other way around.

According to the Coherence theory of truth, we believe the Earth moves around the Sun and not the other way around because we have learnt that this idea coheres with other ideas in our heads, such as the idea that apples fall down towards the Earth. The separate ideas form a coherent ‘story’ of gravity.

But the Coherence theory is also problematic. Consider a situation in which someone is hypnotized into believing their left arm does not belong to them. They are then asked to empty a bottle of water into a glass and they proceed to do so by trying to take the lid off the bottle with their right fingers whilst awkwardly holding the bottle in their right hand. When asked why they are doing it like that, they reply that they slept awkwardly on their other arm last night and so are giving it a rest. In short, they confabulate a story: they try to build the most plausible coherent story they can from the circumstances they find themselves in. Yet their story clearly bears no relationship to what is happening in the outside world, which is patently obvious to the hypnotist’s audience.

The Pragmatic theory of truth can be summarized by saying that truth ‘is what works’. But this sounds rather strange. I might believe that your nice house and nice car are actually mine, since that ‘works for me’. But this is expediency rather than truth; it is not what is meant by ‘works for me’. I shall return to the Pragmatic theory later on.

8. Foundherentism: A Better Theory

Before that, I want to look at improving on the Correspondence and Coherence theories.

The Foundationalist theory of truth is an attempt to improve on the Correspondence theory of truth by accepting that there are so-called ‘basic beliefs’ which have no justification of correspondence and are therefore axiomatic. These beliefs form the foundation for other beliefs, which are justified by correspondence. I mention this theory not because it solves a previously mentioned problem of the Correspondence theory but simply to note its existence for what follows…

In her 1993 book ‘Evidence and Inquiry’, Susan Haack introduced the Foundherentist theory of truth, which is an attempt to synthesize a better theory from the Foundationalist and Coherentist theories such that it takes the advantages of both without their drawbacks. Hence the rather unwieldy name. It is thus indirectly a synthesis of the Correspondence and Coherence theories of truth. Not surprisingly, the Foundherentist theory justifies knowledge by both correspondence to evidence and coherence with other knowledge. But the theory is best illustrated by a very good analogy created by Haack: the crossword puzzle.

In a crossword puzzle, we try to find answers to clues to fit into the grid. If we find that an answer fits in with answers already in the grid, we gain confidence that the answer is right. If it does not, doubt is then raised either about our proposed answer or about the answers in the grid that conflict with it. In this analogy:

  • The answers to the clues are analogous to knowledge corresponding to evidence.
  • The grid provides the framework for building a coherent set of answers.

If we ignore the grid, we are left with just a quiz: a list of questions to which you are expected to provide answers where each question is quite independent of the others. This is analogous to the Correspondence theory of truth.

If we ignore the clues, we do what you might have done when you’ve got so far with a crossword and got stuck – try to fit any words into the remaining spaces in the grid. This is analogous to the Coherence theory of truth.
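The crossword analogy can be sketched as a toy check (hypothetical scoring of my own, not Haack’s formalism): a candidate answer is justified only if it both answers its clue (correspondence to evidence) and is consistent with the letters already fixed in the grid (coherence with other answers).

```python
# Toy sketch of foundherentist justification as a crossword check.
# The clue, expected answer and grid constraints below are invented examples.

def fits_clue(answer, expected):
    return answer == expected            # the 'quiz' check alone: correspondence

def fits_grid(answer, constraints):
    # constraints: {position: letter} fixed by crossing answers already placed
    return all(answer[i] == ch for i, ch in constraints.items())

def justified(answer, expected, constraints):
    return fits_clue(answer, expected) and fits_grid(answer, constraints)

# 'Capital of Australia', with the crossing 'N' of another answer in place.
print(justified("CANBERRA", "CANBERRA", {2: "N"}))  # prints True: both criteria met
print(justified("SYDNEY", "CANBERRA", {2: "N"}))    # prints False: fails the clue
```

Dropping `fits_grid` leaves the quiz (pure Correspondence); dropping `fits_clue` leaves fitting any word into the spaces (pure Coherence).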

Note that Haack is also a Pragmatist. The Pragmatic theory of truth sees truth as an ongoing process of inquiry, building better predictive models of the environment. Her philosophy thus unites the three basic approaches to epistemology.

9. Neuroscience and Epistemology

You may now see where I’m going with this argument. I started off by presenting a physicalist description of how the brain works in terms of the simultaneous combined effects of:

  • The horizontal ‘pushing-together’ process in order to formulate an opinion, and
  • The vertical ‘pulling-together’ process across the many levels of hierarchy, linking predictive models to the outside environment.

and I finished with a philosophical theory of knowledge creation being an ongoing process of the combination of:

  • a (horizontal) coherence with other knowledge
  • a (vertical) correspondence with evidence

The two visions look rather similar! What has been presented is a physically grounded theory of knowledge – a ‘natural epistemology’.

(Furthermore, there is the similarity between the hierarchical nature of Foundationalism and the hierarchy in the brain.)

I seem to be going beyond the ‘cooperative naturalism’ position within ‘naturalized epistemology’ – in which science (and in particular, neuroscience) can help inform epistemology – such as when Haack says (‘Evidence and Inquiry’, p. 118):

“ … the results from the sciences of cognition may be relevant to, and may be legitimately used in the resolution of traditional epistemological problems”

…to an all-out ‘replacement naturalism’ which rejects the philosophical approach in favour of the scientific.

The title of this part comes from Kant, who said (Critique of Pure Reason, 1781):

“The way that human minds arrange particulars is a skill so deeply hidden in the human soul that we shall hardly guess the secret trick that Nature here displays”

With progress in neuroscience following on from that in psychology, it can be argued that we will increasingly be in a position where we no longer need to guess. Nature’s secret trick is (gradually) being uncovered.

Part II: Knowledge is Personal

10. Pragmatism

The philosophical movement called ‘Pragmatism’ arose in the latter half of the 19th Century in the United States, partly inspired by progress in the sciences in the fields of psychology and evolutionary biology (i.e. Darwin, 1859!). One of the founders, William James, was instrumental in establishing psychology as a new discipline. The other two main founders of the movement were C. S. Peirce and John Dewey.

I want to emphasize the similarity between this philosophy and Karl Friston’s ‘variational free energy’ theory of the brain (see the previous talk for a summary), in two respects:

  • The pragmatic view of knowledge. The purpose of knowledge is to predict.
  • The relationship between thought and action. The active relationship between agent and environment.

Those similarities are:

  • In Friston’s ‘variational free energy’ theory of the brain, what could be called knowledge is the associations/weighting embedded within the brain which permit predictions. The purpose of knowledge is to predict.
  • For pragmatists (and specifically ‘instrumentalists’; Dewey considered himself to be an ‘instrumentalist’), knowledge is viewed as a set of tools in the solution of problems encountered in the environment.
  • For pragmatists, meaning is an awareness of consequences before they occur (prediction!) and thinking is viewed as deferred action.
  • Friston’s ‘minimization of surprise through action and perception’ in which actions are performed to improve prediction which in turn improve actions.
  • Pragmatists view thinking as deferred action. It is seen in terms of the consequences of having a thought rather than the origins of that thought. (As William James put it: ‘fruits not roots’.)

11. Impersonal Knowledge

In contrast with pragmatism, the mainstream of philosophy has been interested in the roots of knowledge.

There is a long tradition of rationalism within philosophy that elevates the purity of mathematics and logic over the imperfect senses that are not to be trusted, in our attempts to find (or justify) the truth – an absolute, objective truth, free of subjective emotional bias.

Around the same time as pragmatism, another philosophical movement developed in response to the success of science. Positivism deemed that the sole source of knowledge is evidence, that other claims to knowledge such as introspection and intuition should be rejected in order to avoid subjectivity.

Then, in the 1920s-1930s, the Logical Positivists (such as those philosopher-scientists in the ‘Vienna Circle’) were inspired by developments in mathematical logic and sought to codify all knowledge into precise scientific language as a means of obtaining objective knowledge.

An overall view of science therefore emerges that gives the impression that, by following a logical method (involving the formulation of hypotheses and then testing them), we crank the handle of science as it were and churn out new theories formed from empirical observations and mathematical logic. Grammatical truths. New objective knowledge, untainted by bias. Absolutely true. Irrespective of persons.

But this was an ideal view of science.

12. From Objectivism to Relativism

This view came under attack in the 1940s to 1960s. Philosophers looked at the history of science and compared this idealized view of how science works with how specific prominent discoveries had actually come about: why theory B had superseded theory A, and why theory C had not. They applied the scientific method to science itself and found it wanting. The most prominent of these philosophers were:

  • Thomas Kuhn, author of “The Structure of Scientific Revolutions”, and
  • Paul Feyerabend, author of “Against Method”, who applied the phrase ‘anything goes’ to science.

But there were other, lesser-known philosophers such as:

  • Michael Polanyi, author of “Science, Faith and Society” who coined the term ‘tacit knowledge’, and
  • Imre Lakatos, author of “Proofs and Refutations”, responsible for the idea of a ‘research programme’.

For some, this analysis of science was an intellectual response to the horrors of the consequences of totalitarianism in the central Europe they had left for British and American universities. It was a reaction against absolute knowledge.

Following this, in the 1970s, sociologists and anthropologists took over the practice of observing science from the philosophers. Often, science was given no privileged position and hence was to be understood as just one of many valid ways of understanding the world. This is relativism.

I have thus described a transition through the 20th Century from (absolutist) objectivism to (subjectivist) relativism. And those mid-century philosophers were appalled at how their arguments against a totalitarian science were now being used against all science. They rejected relativism and still believed in scientific progress.

So, how then do we steer a sensible course between these objective and subjective extremes?

13. Between Objectivism and Relativism

From the (tentative) explanation of what our brains are doing, previously presented, we understand the brain as building models:

  • I am building coherent models corresponding to the environment, from my experiences.
  • You are building coherent models corresponding to the environment, from your experiences. Therefore
  • We are (separately) building models of the same (shared) environment – but they are not the same.

It is not that our individual knowledge is an approximation of an external absolute truth; truth is a relationship between a knower and its environment. There cannot be knowledge without a brain.

And this is not a relativist position either. We are constrained by our (common) environment – we are not free to believe just whatever we want. Our brains are similarly constructed, in a similar environment so there will be a tendency for them to construct similar knowledge of the common world.

Knowledge is a product of both the environment and the brain. It is a (pragmatic) process, an evolving relationship between brain & environment. It is neither absolutism nor relativism but a middle course.

14. Michael Polanyi

In a previous section, the philosopher of science Michael Polanyi (1891-1976) was mentioned, as was positivism. In his works ‘Science, Faith and Society’, ‘Personal Knowledge’ and ‘The Tacit Dimension’, Polanyi set out a view of scientific discovery contrary to positivism. Positivism considers evidence to be the sole source of knowledge and, in doing so, rejects intuition; for Polanyi, intuition is essential for scientific advancement.

Intuition is needed to ‘see’ a problem. At this point, the solution cannot be justified. A personal commitment is needed to persevere with the solution until it can be made explicit and justified. So, explicit justification (the evidence) only comes afterwards. Polanyi quotes St. Augustine:

“faith precedes reason”

Any knowledge that can be justified must be explicit – spoken or written. Intuition is part of another class of knowledge, implicit knowledge (or, as Polanyi called it, ‘tacit’ knowledge), which cannot be justified. Tacit knowledge should not be rejected as a form of knowledge. The explicit ‘Scientific Method’ cannot yield truth by itself. Whilst we consciously focus our attention on what we are making explicit, we are sub-consciously ‘looking’ across a wider range. With this acceptance of intuition, it has to be the case that:

“We believe more than we can prove”

“We know more than we can tell”

The claim that not all knowledge can be justified is in contrast to traditional philosophical views, for example:

  • Descartes applies the methodological scepticism of radical doubt: to doubt everything that cannot be proved to be (absolutely / objectively) true.
  • Popper’s method of falsification, if applied to the extent he would have liked, would have prematurely rejected many theories that have subsequently been shown to be very successful.

Tacit knowledge has been referred to as ‘know-how’ as opposed to the ‘know-what’ of explicit knowledge.

Knowledge = explicit + tacit = know-what + know-how

Tacit knowledge provides the foundation for explicit knowledge:

“All knowledge is either tacit or rooted in tacit knowledge.”

15. It’s All Knowledge

The point I’m wanting to make in all this is to tie this particular philosophical understanding (Pragmatism and Polanyi’s philosophy of science) to our recent understanding of the physical brain. As noted in the last part (‘Nature’s Secret Trick’):

  1. There is a physical uniformity of the Cortex, as noted by Vernon Mountcastle, and
  2. Friston provides a generalized functional account of what is physically happening in the brain – irrespective of what regions are doing what.

I am not saying things like:

  • ‘Language is being processed in the ventro-medial abc cortex.’
  • ‘Abstract reasoning is being processed in the anterior xyz cortex.’

I am saying that all predictive adaptation across the entire cortex is embodying knowledge and that we cannot split the high-level, conscious, language-based knowledge off from the rest of what’s going on, and give it a privileged position.

It’s all knowledge!

There is hierarchy within the structure of the cortex, but the same thing is basically being done at all levels of the hierarchy, and the (often neglected) lower levels play no less important a role than those ‘high profile’ areas:

  • Polanyi’s pre-propositional, tacit knowledge plays an important role in intuition/insight, which should not be disregarded.
  • Tamar Gendler has introduced the concept of ‘alief’ – sub-conscious beliefs, particularly when they are in conflict with our conscious beliefs.
  • Proprioception can be seen as a form of knowledge (knowledge of one’s own body) and associated with Howard Gardner’s ‘kinaesthetic intelligence’.

It leads us to what some would regard as preposterous claims such as:

Just because we cannot justify something, it doesn’t mean we do not know it!

This is a far cry from the established view of knowledge as ‘justified true belief’. We seem to be saying:

  • Justified? Not necessarily!
  • True? Truth arises from an engagement between a subject and its environment. There is no knowledge that is objective, but we should not dismiss it as being just subjective either. It is personal.
  • Belief? All knowledge/belief is embodied within the connectome.

A response to this might then be:

What you are describing is not knowledge at all, but belief.

But I am not interested in the word-play here. As A. J. Ayer said in ‘The Problem of Knowledge’, the problem of knowledge…

“…is to state and assess grounds on which … claims to knowledge are made… It is a relatively unimportant question what titles you then bestow on them.”

Different cultures have drawn a distinction between ‘knowledge’ and ‘belief’ in different ways. For example, two alternatives to the ‘explicit justification’ argument are:

  • Belief is personal; knowledge is social. Knowledge is socially agreed.
  • Knowledge is first-hand, belief is second-hand. Knowledge is drawn from personal experiences whereas beliefs are less reliably based on others’ experiences.

To me, there is no sharp distinction between them. Instead there is a knowledge/belief continuum. At one end of the scale lies knowledge which we are extremely confident we can act upon on the basis of what it predicts; there is a high degree of certainty. At the other end of the scale lie beliefs which we are prepared to act upon but with little confidence in their certainty. But we do not have an easy way of determining where on that scale a particular belief belongs, only a number of heuristics to guide us. The more the heuristics point towards certainty, the more we will classify a belief as knowledge. In their simplest forms:

  • It is knowledge because I have experienced it directly with my own senses.
  • It is knowledge because it fits in with what else I know.
  • It is knowledge because it has been verified by others.

There is no simple distinction.

Part III: Changing Your Mind

16. The Forest of Neurons

Two major traditional concepts in the philosophy of mind are John Locke’s ‘tabula rasa’ and Immanuel Kant’s ‘synthetic a priori’ knowledge. The ‘Tabula Rasa’ concept falls within the British empiricist tradition and is that people are born with their mind as a ‘blank slate’ (the Latin term literally means a ‘scraped tablet’). Hence all knowledge is formed from experience. The ‘synthetic a priori’ concept falls within the Continental rationalist tradition and contradicts this. It maintains that there is some knowledge that exists before (‘a priori’) any experience – such as knowledge of time and space, which provides a framework into which other knowledge can fit.

These days, it is fairly common to hear news reports on developments in brain science, such as the large-scale Human Connectome Project that is ‘mapping’ the ‘wiring’ in our brains, and of the even larger Human Brain Project to simulate it. Even more common are reports that such-and-such a region of the brain has been associated with such-and-such a behaviour (invariably discovered from fMRI scans).

The impression that is given in these news reports is that there is a ‘wiring diagram’ of our brains and this wiring determines how different parts of the brain do different things and communicate with other parts (obviously there is some variation between you and me). This wiring diagram is presumably derived somehow from our genes. This seems to support the idea that our brains are somehow ‘pre-wired’ which fits into the Kantian idea.

But in the talk so far, I have presented a simple view of how the brain works in which I have emphasized the uniformity of the cortex – the basic structure of neurons within cortical columns is remarkably similar across the entire cortex and how each part of the brain is operating is also uniform. This seems to fit better with the Lockean idea that we are not ‘pre-wired’.

So which is it? Pre-wired or not?

Obviously, there’s a third option: neither! We normally associate ‘wiring diagrams’ with things like electronic circuits – hardware. But the brain is obviously a living organism. The brain grows. The question posed is akin to ‘which came first, the chicken or the egg?’.  This third way is neither ‘chicken’ nor ‘egg’, but it makes the idea that the answer could have been either ‘pre-wired’ or ‘not pre-wired’, indeed the very question, look slightly silly.

I want to make an analogy between neurons in the brain and other living entities – trees in a forest:

  • A forest is a collection of a large number of trees, spread out over a large area.
  • A cerebral hemisphere is a collection of a large number of neurons, spread across a ‘cortical sheet’ (it could be cut and flattened out to form a sheet).

(There are about 8 billion neurons in one hemisphere of the human cortex. It has been estimated that there are about 400 billion trees in the Amazonian rainforest.)

This analogy is mainly to help illustrate how different parts of the cortex can get built differently:

  1. with the same basic building blocks throughout the cortex,
  2. not by design (without a wiring diagram), and
  3. by factors outside of the cortex.

Obviously, there are limitations to the analogy:

  • The cortical sheet has multiple layers of neurons (e.g. 6 in the neocortical regions) whereas there is only one layer in the forest.
  • Neurons communicate directly with one another via synapses. Recall the common neuroscience maxim: ‘neurons that fire together wire together’, referring to the strengthening of synapses: the more often the pre-synaptic and post-synaptic neurons fire simultaneously, the easier it becomes for the post-synaptic neuron to fire when the pre-synaptic neuron fires. So how neurons change throughout their lifetime is strongly influenced by their neighbouring neurons. Trees, on the other hand, just grow, influenced by their neighbours only through competition for light and nutrients.
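The Hebbian maxim just quoted (‘fire together, wire together’) can be sketched as a toy update rule. This is illustrative only: the learning rate and activity values are hypothetical, and real synaptic plasticity is far more complicated.

```python
# Toy sketch of Hebbian strengthening: a synaptic weight increases whenever
# pre- and post-synaptic activity coincide. Rate and activities are invented.

def hebbian_update(weight, pre, post, rate=0.1):
    # strengthen the synapse in proportion to coincident activity
    return weight + rate * pre * post

weight = 0.1
for pre, post in [(1, 1), (1, 1), (1, 0), (0, 1), (1, 1)]:
    weight = hebbian_update(weight, pre, post)
print(round(weight, 2))  # prints 0.4: only the three coincident firings strengthened it
```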

The forest will have a variation in tree species across its area. There is no ‘wiring diagram’ to specify which trees should go where. Besides the trees’ many internal factors, there are many factors external to them which determine which species will tend to grow where, set both by the underlying landscape on which the forest sits and by the climate above, for example:

  • The ability to grow on steep slopes.
  • The effect of the Sun: North- versus South-facing slopes.
  • The effect of the Rain: Windward- versus Leeward-facing slopes.
  • The type of soil and other properties of the underlying ground.

Similarly, within the brain, there are many factors that affect the cortical sheet which are external to it:

  • the particular way the cortical sheet is folded up to fit within the skull.
  • the mix of chemicals to grow the neurons in the first place, and the cocktail of neurotransmitters produced in the lower parts of the brain.
  • the type of stimulus received from senses and the environment with which the brain is interacting.

To continue the analogy, imagine a forest after a fire has ravaged it. The surviving plants are free to grow and they will flourish, unimpeded by competition. In contrast, in an overgrown forest of gnarled old plants, it is difficult for any new growth to take hold because the existing roots and foliage soak up so much of the resources. This is analogous to:

  • the young brain being very plastic: neurons in the young brain being able to create new synapses at a prolific rate.
  • the difficulty the elderly have in learning new things or ‘unlearning’, i.e. forming contrary habits – it being difficult for neurons in an old brain to form new connections or ‘undo’ existing ones because of the mass of established synapses.

Note that a forest fire can rejuvenate a forest, whereas brains are periodically regenerated within new bodies that are culturally re-educated afresh. Perhaps the correct comparison for the brain is not a ‘scraped tablet’ but a ‘scorched earth’.

(Photo credit: Braden Piper Photography, Carlsbad Caverns NM)

17. Anchoring

In the ‘Forest of Neurons’ analogy in the previous section, young brains are able to acquire new knowledge more readily than their elders’ because new connections are unimpeded by existing growth (or, in the abstract sense, existing knowledge). This makes the young brain agile but impressionable, compared with a mature brain that can be wise yet stubborn.

And that analogy did not take into account the fact that the number of synapses varies greatly during development, accentuating this difference. A neonate’s brain is estimated to have an average of 2,500 synapses per cortical neuron. This number then varies throughout development, eventually settling at around 8,000 synapses per neuron, but having been at twice this number at some stages in between.

This difference leads to a ‘cognitive bias’ – that information presented to us at an early age is weighted more than information presented in later life, and this then skews our judgement.

But the same skewing happens over much shorter timescales as well, and was called the ‘anchoring and adjustment’ heuristic by Amos Tversky and Daniel Kahneman in their 1974 paper ‘Judgment under Uncertainty: Heuristics and Biases’. A classic example they gave was an experiment in which two groups of students were given 5 seconds to estimate the result of a numerical expression.

  • One group were given the expression 1 x 2 x 3 x 4 x 5 x 6 x 7 x 8 and they produced a median result of 512.
  • The other group were given the expression 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1 and they produced a median result of 2,250.

The actual answer, of course, is the same for both groups (it is 8! = 40,320), but the groups’ estimates were skewed towards the information presented to them first.

This skewing from cognitive psychology is also present within bio-inspired artificial neural networks, where the ordering of the information in the training set can substantially alter the resulting behaviour.
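One way to see how anchoring could fall out of a learning process is a toy online estimator whose adjustment step shrinks over time. This is purely illustrative – it is not Tversky and Kahneman’s model, and the constants are my own – but it shows the same order-dependence: identical data presented in different orders yield estimates anchored near the first item seen.

```python
def online_estimate(xs, lr=1.0, decay=0.3):
    """Form an estimate from a stream of numbers by 'anchoring and
    adjustment': the first item sets the estimate almost entirely,
    and each later adjustment is made with a rapidly shrinking step."""
    w = 0.0
    for x in xs:
        w += lr * (x - w)   # adjust towards the new item...
        lr *= decay         # ...but by less and less each time
    return w

ascending = online_estimate([1, 2, 3, 4, 5, 6, 7, 8])
descending = online_estimate([8, 7, 6, 5, 4, 3, 2, 1])
# Same data, opposite orders: 'ascending' stays low, anchored near 1,
# while 'descending' stays high, anchored near 8.
```

The direction of the skew matches the experiment: the group who saw the small numbers first produced the smaller estimate.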

18. The Adaptive Toolbox

Kahneman and Tversky’s notion of ‘cognitive bias’ implies that human thought is deficient compared with rule-based formal logic. The psychologist Gerd Gigerenzer has been a fierce critic of this view. Instead, he argues, humans use an ‘adaptive toolbox’ – a repertoire of ‘rules of thumb’ – to make good-enough decisions about the world with limited time, effort and information. Computer simulations have shown that these heuristics can outperform traditional optimization methods.

An example of a ‘rule of thumb’ is the ‘recognition heuristic’. In ‘Models of ecological rationality: the recognition heuristic’ (2002), Goldstein and Gigerenzer tested students on their knowledge of the populations of cities. When presented with the names of 2 cities, students needed to state which city had the greater population (it’s like Facemash for Geography Bees). Surprisingly, American students scored higher on German cities and vice versa. Before saying which city had the greater population, the respondents needed to state which of the 2 they recognized. This allowed the experimenters to identify the questions in which only one of the cities was recognized. Goldstein and Gigerenzer surmised that the students were employing a ‘recognition heuristic’: presume the city you recognize has the larger population, because you recognize it. This extremely simple strategy worked very well. Here is a case of ‘less is more’ – being less knowledgeable about the foreign cities actually helped. It may use less knowledge, but it uses the information available more intelligently. As Goldstein and Gigerenzer say: ‘missing knowledge can be used to make intelligent inferences’. The heuristic is valuable because it is ‘fast and frugal’ – it doesn’t take much effort to come up with a reasonably good result.

This heuristic has been used to good effect in predicting the outcomes of Wimbledon tennis matches, where it compares well against the official ATP ranking system and the seeding of players by so-called experts.
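The decision rule itself is simple enough to state in a few lines. The sketch below follows the description above; the function name and the fallback handling (what to do when recognition gives no information) are my own illustrative choices, not part of Goldstein and Gigerenzer’s formulation:

```python
import random

def recognition_heuristic(a, b, recognized, fallback=None):
    """If exactly one of two options is recognized, infer that the
    recognized one scores higher on the criterion (here, population).
    If both or neither are recognized, recognition carries no
    information, so use further knowledge if available, else guess."""
    a_known = a in recognized
    b_known = b in recognized
    if a_known != b_known:
        return a if a_known else b   # the heuristic's only real case
    if fallback is not None:
        return fallback(a, b)        # both or neither recognized
    return random.choice([a, b])

# A German respondent who recognizes 'San Diego' but not 'San Antonio'
# correctly picks the larger city without knowing either population.
choice = recognition_heuristic("San Diego", "San Antonio", {"San Diego"})
```

Note the ‘less is more’ effect built into the rule: a respondent who recognized every city would never be able to use it.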

Gerd Gigerenzer, former banjo player.

Kahneman’s and Gigerenzer’s arguments may be seen as complementary rather than just contradictory. Kahneman’s position is consistent with the view that our thinking falls short of the logic required in the 21st-Century world (particularly in economics), whereas Gigerenzer’s position is consistent with the view that our thinking has evolved efficiently to cope with surviving in the Holocene.

19. Hedgehogs and Foxes

So, amateurs can outperform experts. And computers can too. Philip Tetlock’s 2005 book “Expert Political Judgment: How Good Is It? How Can We Know?” describes how, over a 20-year period, he analysed the predictions of a large number of political experts and then compared these with how things actually turned out. The experts fared poorly: worse than some fairly straightforward statistical computer algorithms. Among the many ways he tried to classify his experts, to see what could be done to improve their forecasts, Tetlock split them into ‘hedgehogs’ and ‘foxes’, following Isaiah Berlin’s metaphor of ‘The Hedgehog and the Fox’ (1953). This is based on a text fragment attributed to the ancient Greek poet Archilochus (c. 650 BCE):

The fox knows many things, but the hedgehog knows one big thing.

Tetlock found that the ‘fox’ personality type was a better predictor than the ‘hedgehog’ type. (See also “Why Foxes Are Better Forecasters Than Hedgehogs”, 2007)

In the first part of the talk, I linked neuroscience to epistemology by equating a physicalist description of how the brain works – the simultaneous, combined effects of:

  • The horizontal ‘pushing-together’ process in order to formulate an opinion, and
  • The vertical ‘pulling-together’ process across the many levels of hierarchy, linking predictive models to the outside environment.

with the ‘foundherentist’ position within epistemology, in which justification is an ongoing combination of:

  • a (horizontal) coherence with other knowledge
  • a (vertical) correspondence with evidence

It is the simultaneous horizontal/vertical pushing/pulling that builds up a coherent view that corresponds to the external environment.

It’s a balancing act. If we steer too much towards coherence, we risk formulating ideologies that bear no resemblance to reality. If we steer too much towards correspondence, we risk our knowledge being a jumbled hypocritical mess. This looks rather like the distinction between ‘hedgehogs’ and ‘foxes’. The ‘hedgehogs’ rely too much on coherence. I can’t go as far as saying that ‘foxes’ rely too much on correspondence, only that they are less biased towards coherent ideologies and this allows them to make (slightly) better predictions.

20. Changing your Mind

So from the above, ‘Hedgehog’ personality types can be viewed as valuing increasing coherence at the expense of correspondence. The world of Academia can be seen as an ‘array’ of hedgehogs (this being the collective noun for them) as, frequently, academic originality is to be preferred over truth. Note that this individual disregard for truth may actually be a good thing. Whilst a solitary ‘fox’ might outperform a single ‘hedgehog’, this might well not scale to groups. An array of ‘hedgehogs’ is likely to throw up a more diverse range of ideas to explore than a ‘skulk’ of ‘foxes’ will, which will serve the group better.

There is something within us that wants to pull our ideas together to create some greater understanding, some greater ‘truth’. And in doing so, we distance ourselves from the views of others who have had different experiences, particularly in their ‘formative years’.

In an epistemological as well as biological sense, we are living beings. Knowledge within us grows. But this necessarily entails long-term cognitive biases. And cognitive biases are apparent even over short timescales (‘anchoring’). When we are presented with new information that does not fit with our previous experience, we will tend to reject it. It is incommensurable. And this is exacerbated by the reduced plasticity of the brain as we age. As individuals, we have limits on how much we can change our minds.

We have to accept that we are far from perfectly rational beings (in an idealized/mathematical/logical sense). Why should we be? If we accept we are evolved, we must accept that we only need to have been effective (and for humans, we need only be effective in groups). Each of us is a walking, talking toolbox of pragmatic, cognitive tricks, many of them acting below our consciousness. We were not designed to be homo economicus; we evolved to be able to react appropriately to immediate circumstances.

There is the well-known quote, normally attributed to John Maynard Keynes but almost certainly not originating from him:

“When the facts change, I change my mind. What do you do, sir?”

This sounds so reasonable! Why would we do otherwise? Of course, the difficulty is in accepting that the facts have changed; that the new facts are valid.

21. Between White and White

A better quotation is called for, to reflect the real problem.

The tragedy of human communication is that what I say is not what you hear.

What I say has meaning bound up in both the explicit words of what I say and the tacit knowledge on which it is founded. The words you hear get interpreted in terms of the tacit knowledge on which they are founded in your head.

Your received meaning is not the same as my transmitted meaning.

I say some explicit words in order to convey my meaning:

meaning_me = explicit + tacit_me

And you hear the same explicit words, but your interpretation of what I say is:

meaning_you = explicit + tacit_you

But because

tacit_me ≠ tacit_you

it follows that

meaning_me ≠ meaning_you

This is different from the engineered applications of Shannon’s communication theory, such as mobile-phone or internet communications, where there is no ambiguity in the meaning of the message and hence no difference between the meaning transmitted and the meaning received. And the ambiguity in human communication isn’t just because we are frequently too brief, using too few words to help ensure that the communicated message has been received correctly.

(This reminds me of the networking geek joke… “A TCP/IP packet walks into a bar and says ‘I want a beer’. The barman says ‘you want a beer?’ and the TCP/IP packet says ‘yes, a beer’.”)

This human tragedy is best summarized by Jacques Rancière (in ‘Disagreement: Politics and Philosophy’, 1998):

“Disagreement is not the conflict between one who says white and another who says black. It is the conflict between one who says white and another who also says white but does not understand the same thing by it.”

The End

Postscript 1: Two Hermann Hesse quotes from ‘Siddhartha’ (obviously, my highlights)…

“Wisdom cannot be imparted. Wisdom that a wise man attempts to impart always sounds like foolishness to someone else … Knowledge can be communicated, but not wisdom. One can find it, live it, do wonders through it, but one cannot communicate and teach it.”

“Words do not express thoughts very well. They always become a little different immediately they are expressed, a little distorted, a little foolish. And yet it also pleases me and seems right that what is of value and wisdom to one man seems nonsense to another.”
