This is the second in a series of occasional postings digging deeper into the brain’s structure and functioning. Previously, there was a large-scale view looking at the whole of the cortex. Here now is a view at an extremely small scale – at ion channels. There are over 10,000 ion channels in a single neuron, meaning there are over 1,000,000,000,000,000 (1 quadrillion) ion channels in a human brain – and this may be an underestimate by an order of magnitude or more.
Ion Channels and the Membrane Potential
A neuron is a biological cell, albeit a rather elaborate one, and so has a membrane, like any other cell, which separates what is inside the cell from what is outside. Bridging the membrane are proteins that function as ion channels, allowing ions to pass into or out of the cell. The ions that generally play a role are Calcium (Ca2+), Sodium (Na+), Potassium (K+) and Chloride (Cl–), and a particular channel protein only allows particular ions to pass through it. For the purposes of the explanation here, there are 3 types of ion channel:
- Voltage-gated ion-channels (VGICs): these open up and close at particular membrane potentials. The membrane potential is the local voltage across the membrane.
- Ligand-gated ion channels (LGICs): (ligand = ‘that which binds’) opened or closed in response to the binding of a particular molecule called a neurotransmitter (for reasons which will become obvious later). Consequently, the LGIC is also called a receptor.
- Leak channels: Permanently open channels.
Furthermore, there is the sodium-potassium pump. When 3 Sodium ions have bound to it on the inside, the pump ejects them outside. It is then able to collect 2 Potassium ions from the outside and bring them inside the cell. It is then back where it started, able to repeat the cycle again.
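The pump’s cycle amounts to simple bookkeeping, which can be sketched in a few lines of Python (a toy model with made-up ion counts; the real pump’s kinetics are more involved, and each cycle also consumes a molecule of ATP):

```python
def sodium_potassium_pump(inside, outside, cycles=1):
    """Toy model of the Na+/K+ pump: each cycle ejects 3 Sodium ions
    and imports 2 Potassium ions, a net export of one positive charge."""
    for _ in range(cycles):
        inside["Na"] -= 3
        outside["Na"] += 3
        inside["K"] += 2
        outside["K"] -= 2
    return inside, outside

# illustrative ion counts only, not real concentrations
inside = {"Na": 15, "K": 140}
outside = {"Na": 145, "K": 5}
sodium_potassium_pump(inside, outside)
# each cycle leaves the inside one positive charge poorer: -3 Na+ + 2 K+
```

That net export of one positive charge per cycle is what makes the pump ‘electrogenic’ – it does not merely swap ions, it moves charge.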
Together with Potassium leak channels, the pump creates the membrane potential. The pump swaps Sodium (outwards) for Potassium (inwards) but the leak channels then allow the Potassium ions to escape back out. This creates a surplus of positively-charged ions on the outside, so the membrane potential (measured with respect to the outside of the cell) is negative. Its normal value settles at about -70mV when nothing else is happening – this is its resting potential.
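That -70mV figure can be derived from the ion concentrations using the Goldman-Hodgkin-Katz voltage equation, which weights each ion’s contribution by the membrane’s relative permeability to it. A sketch in Python, using textbook-typical concentrations and permeability ratios for a mammalian neuron (the numbers are illustrative, not from any particular cell):

```python
from math import log

def ghk_voltage(K_out, K_in, Na_out, Na_in, Cl_out, Cl_in,
                pK=1.0, pNa=0.05, pCl=0.45, RT_over_F=26.7):
    """Goldman-Hodgkin-Katz voltage equation, result in mV.
    The Chloride terms are flipped because Cl- carries charge -1."""
    num = pK * K_out + pNa * Na_out + pCl * Cl_in
    den = pK * K_in + pNa * Na_in + pCl * Cl_out
    return RT_over_F * log(num / den)

# typical mammalian ion concentrations, in mM
v_rest = ghk_voltage(K_out=5, K_in=140, Na_out=145, Na_in=15,
                     Cl_out=110, Cl_in=10)
# v_rest comes out at roughly -65mV, in the right region
# of the -70mV resting potential
```

Because Potassium dominates the permeabilities at rest, the result sits close to Potassium’s own equilibrium potential.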
The membrane potential affects the behaviour of VGICs which changes the chemical balance of the neuron which thereby affects the membrane potential. This feedback allows the creation of complex behaviour on the part of a single neuron. Furthermore, the mix of ions outside of the cell and the operation of LGICs permits communication between neurons. The behaviour of the ion channels provides the basis for understanding the electro-chemical functioning of neurons (in a later blog posting) – the interplay between the chemical (ions) and the electrical (membrane potentials).
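The feedback loop can be sketched as two coupled update rules: the membrane potential sets each gate’s target open fraction, and the open channels in turn pass current that moves the membrane potential. A minimal Euler-integration sketch, with one voltage-gated Potassium conductance plus a leak (all parameter values are invented for illustration, not fitted to any real neuron):

```python
import math

def n_inf(V, V_half=-40.0, k=8.0):
    """Steady-state open fraction of a voltage-gated channel:
    a sigmoid of the membrane potential (illustrative parameters)."""
    return 1.0 / (1.0 + math.exp(-(V - V_half) / k))

def simulate(steps=5000, dt=0.01):
    """One gate, one leak: V moves the gate, the gate's current moves V."""
    V, n = -70.0, 0.0                 # start at rest, channel closed
    C, g_leak, E_leak = 1.0, 0.1, -70.0
    g_K, E_K = 1.0, -90.0
    I_inject = 2.0                    # constant depolarising current
    for _ in range(steps):
        n += dt * (n_inf(V) - n)                  # gate chases n_inf(V)
        I_ion = g_leak * (V - E_leak) + g_K * n * (V - E_K)
        V += dt * (I_inject - I_ion) / C          # currents change V
    return V, n

V, n = simulate()
# the opening K+ channels oppose the injected current, so V settles
# well below the -50mV it would reach with the leak alone
```

Even this two-variable toy shows the loop in action; the full Hodgkin-Huxley model adds Sodium gates with their own dynamics, and that is enough to produce spikes.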
Brains vs Computers: Science vs Engineering
Chapter 3 (‘Pulse, Impulse’) of Susan Greenfield’s ‘The Human Brain: A Guided Tour’ is a good popular introduction to the workings, particularly the chemical workings, of a neuron. But in this blog posting and some that follow, I hope to provide an introductory overview of the workings of the neuron that is both more concise and more complete. And I take issue with her on one particular section…
Within just 4 paragraphs of the chapter, she makes a series of comments that I find astonishing:
- “It is this chemical-specific feature of brain function that, in my view at least, makes the brain particularly daunting for those who attempt to model it with computers.”
- “This molecular symphony can hardly be regarded as comparable to the scenario inside the computer. First and foremost obviously, the brain is fundamentally a chemical system – even the electricity it generates comes from chemicals.”
- “These events do not have a direct or any easy analogy with a computer.”
- “…the chemical composition of the neurons themselves is changing, and hence there is no separate and unchanging hardware, in contrast to a programmable range of software.”
- “…computers can ‘learn,’ but few are changing all the time to give novel responses to the same commands.”
- “The brain does not necessarily operate according to algorithms: What would be the rule for common sense, for example?”
- “Computers can do some of the things that brains can do, but this does not prove that the two entities operate in a similar way or serve a similar purpose.”
I agree with many of those sentences but, taken as a whole, they seem to be saying that there is something fundamentally different between neural systems in brains and electronic systems in computers. There seems to be an underlying confusion between science and engineering:
- Computers such as found in PCs/laptops/mobile phones were not designed to be like brains. And ‘computers’ of a different sort, artificial neural networks, are an example of bio-inspired engineering – looking at how nature does things and doing things in a similar way. That doesn’t mean that an ‘artificial neuron’ should behave in the same way as a real neuron any more than an artificial flower should behave in the same way as a real flower.
- Modelling neurons in computers is an example of engineering-enabled science. It is not at all wet inside computers but that doesn’t stop them simulating weather systems.
The issue is: “what is an appropriate model of a neuron to use?” An artificial neuron is not an appropriate model but it was never intended to be – its use is elsewhere. NB: The issue of an appropriate model was presented in the talk “Could Androids Dream of Electric Sheep” – in the context of consciousness. Here we are only discussing it in terms of behaviour (correspondence of the behaviour of the model to the observed behaviour of real neurons).
If we want to model more of the precise behaviour of a neuron, we just need a better model. If we wanted to, we could model at the level of individual atoms and we would then capture all of the chemistry involved. It would not matter that the computer running the simulations with that model is not itself a chemical system. And it would be running a fixed algorithm. It does not matter that there is no direct or any easy analogy with a computer.
The problem with such a simulation is that, on the technology we have to run it, it would be interminably slow. We already have practical problems running simulations of only thousands of neurons using much higher-level models of the neuron, let alone the 85-ish billion neurons of the human brain.
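A back-of-envelope calculation shows why. Assuming, generously, that each ion channel needs only ten floating-point operations per 0.01ms timestep – both figures plucked purely for illustration – simulating one second of whole-brain activity at channel level would require:

```python
neurons = 85e9                      # ~85 billion neurons
channels_per_neuron = 1e4           # >10,000 channels each
total_channels = neurons * channels_per_neuron   # ~8.5e14 channels

ops_per_channel_per_step = 10       # assumed, and generous
steps_per_second = 1e5              # one step per 0.01ms
flops = total_channels * ops_per_channel_per_step * steps_per_second
# ~8.5e20 floating-point operations per simulated second
```

That is hundreds of seconds of an exascale machine’s full capacity for each simulated second – and this is before modelling any of the chemistry within and around each channel.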
And whilst for the neuron there is no direct or any easy analogy with a computer:
- I am struck by how computer-like the behaviour of some biological components is – take the Sodium-Potassium pump as an example. (School biology was a soft science and I am latterly discovering how ‘hard’ a science many branches of biology actually are.)
- At a lower level, it is possible to make an easy analogy – between an ion channel and a transistor.
Transistor Models of Ion Channels
Regarding the latter point, in an interesting paper in Neural Computation, 2007, Kai Hynna and Kwabena Boahen design an 8-transistor circuit model of a voltage-gated ion channel.
Functionally, an ion channel and a transistor are similar: in both, a voltage controls how much current passes through a channel. But Carver Mead (the ‘father of analog neural nets’ and, before that, one half of the Mead and Conway of ‘Introduction to VLSI Systems’) recognised something deeper: the thermodynamic physics underlying the two is very similar (hence their differential equation models of behaviour are also similar). The similarities in the underlying physics allow transistors to be used as thermodynamic isomorphs of ion channels.
Just as MOS transistors are available in NMOS and PMOS flavours, voltage-gated ion channels can be activating or inactivating. In an activating ion channel, a higher voltage will open the channel. The 8-transistor circuit for such a channel is shown below. V is the membrane potential, VO is the opening voltage, VC is the closing voltage and IT is the ion current through the channel. The circuit for an inactivating channel is the same but with VO and VC swapped around.
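The isomorphism can be made concrete in code: a channel’s open probability follows a Boltzmann function of the membrane voltage, while a transistor operated below threshold passes a current exponential in its gate voltage – both expressions governed by the thermal voltage kT/q. The parameter values below are illustrative, not taken from the Hynna and Boahen paper:

```python
import math

U_T = 0.0257  # thermal voltage kT/q at ~25 degrees C, in volts

def channel_open_fraction(V, V_half, z=4):
    """Boltzmann gating: z is the effective gating charge that moves
    across the membrane as the channel opens (illustrative value)."""
    return 1.0 / (1.0 + math.exp(-z * (V - V_half) / U_T))

def subthreshold_current(V_gs, I0=1e-12, n=1.5):
    """Subthreshold (weak-inversion) MOS drain current: exponential in
    the gate-source voltage, with the same kT/q scale factor."""
    return I0 * math.exp(V_gs / (n * U_T))

# Both behaviours are exponentials in a voltage measured against kT/q,
# which is what lets a transistor stand in for an ion channel.
```

The shared kT/q scale is the crux: it comes from the Boltzmann statistics of charges crossing an energy barrier, whether that barrier is a channel protein’s gate or a transistor’s channel.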
This is an example of engineering enabling science: the technology available now provides the ability to build silicon chips with literally billions of transistors on them. It is hoped these could be used to simulate the behaviour of many millions of neurons in a timely manner, far faster than conventional computers could manage. It is also an example of science enabling engineering – ‘neuromorphic engineering’: building artificial neural networks with more neuron-like artificial neurons, which may enable us to build better image sensors or cochlear implants, for example.