Firing and Wiring

Brains are essentially ‘just a bunch of neurons’ connected to one another by synapses. A neuron ‘fires’ when there is enough activity (firing) on its synapses. The network learns by modifying the strengths of those synapses: when both sides of a synapse are active at around the same time, the synapse is strengthened; when they are out of sync, it is weakened.

This is summarized by Donald Hebb’s famous slogan:

‘neurons that fire together, wire together’

often continued as

‘and out of sync, fail to link.’

Artificial Neural Nets are inspired by the real Neural Nets that are our brains. Hopfield Networks were an early form of artificial neural network – one in which

‘neurons that fire together, wire together’

is the central concept.

Here I provide some Python code to demonstrate Hopfield Networks.

Unapologetically Unpythonic

As noted elsewhere, the code here is very ‘unpythonic’. It does not use library functions and vectorizing to make the code efficient and compact. It is written as a C programmer learning Python might write it, which highlights the underlying arithmetic operations and complexity within the nested for loops. Conversion to efficient Python code is ‘left as an exercise for the reader’.

Alternatively, you could just look at the ‘code-affectionate’ posting, which I gratefully acknowledge; it similarly introduces Hopfield Networks but with pythonic code.

An Online Python Interpreter

Another beginner’s approach to Python is to use an online interpreter rather than downloading and installing one.

Open https://repl.it/languages/python3 in a new window…


The white region on the left hand side of the page is the ‘editor’ region where code can be written then run (click on ‘run’) with the output appearing in the ‘console’ region (black background) on the right hand side. Alternatively, code can be written directly into the console.

Running the ‘editor’ program resets everything in the console; any objects previously defined will be forgotten. So, where I introduce code below, it is easiest if you just copy and paste it at the end of the ‘editor’ code and then re-run the whole lot.

This interpreter is then a sandbox for you to play around in. You can make changes to the code or enter different commands into the console and see what happens.

MICR Application

We are going to train a tiny Hopfield network to recognize the digits 0…9 from an array of pixels where there is some noise affecting some of the pixels. This is like MICR (magnetic ink character recognition) where human-readable digits printed in magnetic ink on cheques (bank checks) were stylized such that they were also machine-readable.

The E13B font digits used for MICR (Magnetic Ink Character Recognition)

But here, to keep things simple, the character set is just built on a tiny 4 x 5 pixel array…

MICR-like characters in a tiny (4 x 5) array

… and the resulting 20-neuron network will have a paltry learning ability which will demonstrate the limitations of Hopfield networks.

Here goes…

The digits are defined in Python as…

Num = {} # There's going to be a dictionary of 10 digit patterns

Num[0] = """
XXXX
XX.X
XX.X
XX.X
XXXX
"""

Num[1] = """
XX..
.X..
.X..
XXXX
XXXX
"""

Num[2] = """
XXXX
...X
XXXX
X...
XXXX
"""

Num[3] = """
XXX.
..X.
XXXX
..XX
XXXX
"""

Num[4] = """
X...
X...
X.XX
XXXX
..XX
"""

Num[5] = """
XXXX
X...
XXXX
...X
XXXX
"""

Num[6] = """
XX..
X...
XXXX
X..X
XXXX
"""

Num[7] = """
XXXX
..XX
.XX.
.XX.
.XX.
"""

Num[8] = """
XXXX
X..X
XXXX
X..X
XXXX
"""

Num[9] = """
XXXX
X..X
XXXX
..XX
..XX
"""

A function is used to convert those (easily human-discernible) 4 x 5 arrays into a 20-element list of plus and minus ones for the internal processing of the Hopfield network algorithm. (This pythonic code has been copied from ‘code-affectionate’.)

import numpy
def Input_Pattern(pattern):
    return numpy.array([+1 if c=='X' else -1 for c in pattern.replace('\n','')])

digit = {}
for i in range(0, 10):
    digit[i]     = Input_Pattern(Num[i])

Typing ‘digit[1]’ into the console will show you how a ‘1’ is represented internally.
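
You should see something like this (a 20-element array of +1s and -1s, reading the 4 x 5 pattern row by row)…

array([ 1,  1, -1, -1, -1,  1, -1, -1, -1,  1, -1, -1,  1,  1,  1,  1,
        1,  1,  1,  1])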

Another function converts that internal representation into a 20-bit number just for reporting purposes…

def State_Num(pattern):
    state_num = 0
    for x in range(0,20):
        if pattern[x]==1:
            state_num += (1 << x)
            #print("x = %d; bit = %d; s = %d" % (x, pattern[x], state_num))
    return state_num

state_num = {}
for i in range(0, 10):
    state_num[i] = State_Num(digit[i])
    print("Digit %2d state number 0x%x" % (i, state_num[i]))

We are going to add random errors to the digits and see how well the network corrects them. That is, whether the network recognizes them as being one of the 10 particular digits upon which it has been trained.

import copy
import random

def Add_Noise(pattern, num_errors):
    # (We need to explicitly 'copy' because Python arrays are 'mutable'...)
    noisy = copy.deepcopy(pattern)
    if num_errors > 0:
        for i in range(0, num_errors):
            pixel = random.randint(0, 19) # Choose a pixel to twiddle
            noisy[pixel] = -noisy[pixel] # Change a -1 to +1 or vice versa
    return noisy
    # Note: It can choose the same pixel to twiddle more than once
    #       so the number of pixels changed may actually be less

And to help see what is going on, we are going to have a function to display patterns…

def Output_Pattern(pattern):
    """
    Display a 4x5 digit array.
    """
    for x in range(0,20):
        if pattern[x]==1:
            print("●", end="")
        else:
            print(" ", end="")
        if x % 4 == 3 :
            print("")
    print("")

Putting these components together, we can see noisy patterns that we will use to test our Hopfield network…

for i in range(0, 10):
    print("n = %d; s = 0x%5x" % (i, state_num[i]))
    Output_Pattern(digit[i])

print("A noisy digit 1 with 3 errors...")
Output_Pattern(Add_Noise(digit[1], 3))

Now onto the main event.

We have a 20-neuron network (just one neuron per pixel) and we train it with some digits. Each neuron is (‘synaptically’) connected to every other neuron with a weight.

At the presentation of each number, we just apply the Hebbian rule: we strengthen the weights between neurons that are simultaneously ‘on’ or simultaneously ‘off’ and weaken the weights when this is not true.

def Train_Net(training_size=10):
    weights = numpy.zeros((20,20)) # declare array. 20 pixels in a digit
    for i in range(training_size):
        for x in range(20): # Source neuron
            for y in range(20): # Destination neuron
                if x==y:
                    # Ignore the case where neuron x is going back to itself
                    weights[x,y] = 0
                else:
                    # Hebb's slogan: 'neurons that fire together wire together'.
                    weights[x,y] += (digit[i][x]*digit[i][y])/training_size
                    # Where 2 different neurons are the same (sign), increase the weight.
                    # Where 2 different neurons are different (sign), decrease the weight.
                    # The weight adjustment is averaged over all the training cases.
    return weights

training_size = 3 # just train on the digits 0, 1 and 2 initially
weights = Train_Net(training_size)
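
If you want a quick sanity check that training has done something sensible, the weight matrix built this way should come out symmetric with zeros down its diagonal (weights[x,y] and weights[y,x] are built from the same products, and the self-connections are explicitly zeroed)…

# Optional sanity check on the trained weights
print(numpy.allclose(weights, weights.T))  # expect True: the matrix is symmetric
print(weights.diagonal().any())            # expect False: the diagonal is all zeros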

Whereas training was trivially simple, ‘recalling’ a stored ‘memory’ requires more effort. We inject an input pattern into the network and let it rattle around inside the network (the neurons updating synchronously, dependent on the weights of the synapses between them) until it has settled down…

def Recall_Net(weights, state, verbosity=0):
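    # Note: 'state' is updated in place below, so pass a copy
    # (e.g. via Add_Noise(digit[i], 0)) if you want to keep the original pattern unchanged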
    for step in range(25): # 25 iterations before giving up
        prev_state_num = State_Num(state) # record to detect if changed later

        new_state = numpy.zeros(20) # temporary container for the updated neuron activations
        for neuron in range(0,20): # For each neuron
            # Add up the weighted inputs from all the other neurons
            for synapse in range(0, 20):
                # (When synapse==neuron the weight is zero, so this doesn't affect the result)
                new_state[neuron] += weights[neuron,synapse] * state[synapse]
        # Limit neuron states to either +1 or -1
        for neuron in range(0,20):
            if new_state[neuron] < 0:
                state[neuron] = -1
            else:
                state[neuron] = 1
        if verbosity >= 1:
            print("Recall_Net: step %d; state number 0x%5x" % (step, State_Num(state)))
        if verbosity >= 2:
            Output_Pattern(state)
        if State_Num(state) == prev_state_num: # no longer changing
            return state # finish early
    if verbosity >= 1:
        print("Recall_Net: non-convergence")
    return state

We now test this recall operation …

print("Recalling an error-free '1'...")
Recall_Net(weights, digit[1], verbosity=2)

And now we test the recall when there is some added noise. In this example, the noise is added deterministically rather than randomly, so that you can get the same results as me.

I use a ‘1’ digit but set all the pixels in the top row to +1…

●●●●
 ●
 ●
●●●●
●●●●

…and this does the recall of this character…

print("Recalling a '1' with errors...")
noisy_digit = Add_Noise(digit[1], 0)
noisy_digit[1]=1
noisy_digit[2]=1
noisy_digit[3]=1
Output_Pattern(noisy_digit)
Recall_Net(weights, noisy_digit, verbosity=2)

This shows the state of the network over successive iterations, until it has settled into a stable state.

Recall_Net: step 0; state number 0xfbba3
●●
 ● ●
●● ●
●● ●
●●●●

Recall_Net: step 1; state number 0xfbbaf
●●●●
 ● ●
●● ●
●● ●
●●●●

Recall_Net: step 2; state number 0xfbbbf
●●●●
●● ●
●● ●
●● ●
●●●●

Recall_Net: step 3; state number 0xfbbbf
●●●●
●● ●
●● ●
●● ●
●●●●

Unfortunately, it is the wrong stable state! The network has settled on the stored ‘0’ pattern (state number 0xfbbbf) rather than recovering the ‘1’.

As an example of how this recall function can be expressed more pythonically…

def Recall_Net_Pythonically(weights, patterns, steps=5):
    from numpy import vectorize, dot
    sgn = vectorize(lambda x: -1 if x<0 else +1)
    for _ in range(steps):
        patterns = sgn(dot(patterns,weights))
    return patterns

(This is not quite a fair comparison as it cannot output any debug information, controlled by the ‘verbosity’ flag.)
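
It can be tried on the same corrupted ‘1’ as before, though…

print("Recalling a '1' with errors, pythonically...")
noisy_digit = Add_Noise(digit[1], 0)
noisy_digit[1] = 1
noisy_digit[2] = 1
noisy_digit[3] = 1
Output_Pattern(Recall_Net_Pythonically(weights, noisy_digit))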

Wrapping training and recall into an ‘evaluation’ function allows us to test the network more easily…

def Evaluate_Net(training_size, errors, verbosity=0):
    # Training...
    weights = Train_Net(training_size)
    # Usage...
    successes = 0
    print("Tsize = %2d   "  % training_size, end="")
    print("   Error pixels = %2d    " % errors, end="")
    for i in range(training_size):
        noisy_digit = Add_Noise(digit[i], errors)
        recalled_digit = Recall_Net(weights, noisy_digit, verbosity)
        if State_Num(digit[i]) == State_Num(recalled_digit):
            successes += 1
            if verbosity == 0: print("Y", end="")
            else: print(" Correct recall")
        else:
            if verbosity == 0: print("N", end="")
            else: print(" Bad recall")
    print("   Success = %.1f%%" % (100.0*successes/training_size))

Training the network on 3 digits works OK when the test patterns have no bad pixels or just 1 bad pixel…

print("Training 3 digits with no pixel errors")
Evaluate_Net(3, 0, verbosity=0)
print("Training 3 digits with just 1 pixel in error")
Evaluate_Net(3, 1, verbosity=0)
Evaluate_Net(3, 1, verbosity=0)
Evaluate_Net(3, 1, verbosity=0)
Evaluate_Net(3, 1, verbosity=0)
Evaluate_Net(3, 1, verbosity=0)
Evaluate_Net(3, 1, verbosity=0)
Evaluate_Net(3, 1, verbosity=0)
Evaluate_Net(3, 1, verbosity=0)
Evaluate_Net(3, 1, verbosity=0)
Evaluate_Net(3, 1, verbosity=0)
Evaluate_Net(3, 1, verbosity=0)

… whereas trying with 2, 3 or 4 errors only works some of the time…

print("Training 3 digits with 2 pixels in error")
print("Training 3 digits with 2 pixels in error")
Evaluate_Net(3, 2, verbosity=0)
Evaluate_Net(3, 2, verbosity=0)
Evaluate_Net(3, 2, verbosity=0)
Evaluate_Net(3, 2, verbosity=0)
Evaluate_Net(3, 2, verbosity=0)
Evaluate_Net(3, 2, verbosity=0)
Evaluate_Net(3, 2, verbosity=0)
Evaluate_Net(3, 2, verbosity=0)
Evaluate_Net(3, 2, verbosity=0)
Evaluate_Net(3, 2, verbosity=0)
Evaluate_Net(3, 2, verbosity=0)
Evaluate_Net(3, 2, verbosity=0)
print("Training 3 digits with 3 pixels in error")
Evaluate_Net(3, 3, verbosity=0)
Evaluate_Net(3, 3, verbosity=0)
Evaluate_Net(3, 3, verbosity=0)
Evaluate_Net(3, 3, verbosity=0)
Evaluate_Net(3, 3, verbosity=0)
Evaluate_Net(3, 3, verbosity=0)
Evaluate_Net(3, 3, verbosity=0)
Evaluate_Net(3, 3, verbosity=0)
Evaluate_Net(3, 3, verbosity=0)
print("Training 3 digits with 4 pixels in error")
Evaluate_Net(3, 4, verbosity=0)
Evaluate_Net(3, 4, verbosity=0)
Evaluate_Net(3, 4, verbosity=0)
Evaluate_Net(3, 4, verbosity=0)
Evaluate_Net(3, 4, verbosity=0)
Evaluate_Net(3, 4, verbosity=0)

But the big problem here is trying to train the network with more digits.

It doesn’t work, even with error-free input, when just one more digit is added…

print("Training more digits but with no pixel errors")
Evaluate_Net(training_size=4,  errors=0, verbosity=0)
Evaluate_Net(training_size=5,  errors=0, verbosity=0)
Evaluate_Net(training_size=6,  errors=0, verbosity=0)
Evaluate_Net(training_size=7,  errors=0, verbosity=0)
Evaluate_Net(training_size=8,  errors=0, verbosity=0)
Evaluate_Net(training_size=9,  errors=0, verbosity=0)
Evaluate_Net(training_size=10, errors=0, verbosity=0)

The network just doesn’t have the capacity to learn more digits: learning new digits results in old ones being forgotten. This is the problem with Hopfield networks. They need around 7 or more neurons per stored pattern, and the 20-neuron network here has a limit consistent with that.
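
To put a rough number on it: the usual rule of thumb is that a Hopfield network can reliably store only about 0.138 × N patterns, roughly one pattern for every 7 neurons, so our 20-neuron network tops out at around 0.138 × 20 ≈ 2.8, i.e. about 3 digits, which is exactly where we hit the wall above.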

More typical neural nets are ‘non-recurrent’ and employ back-propagation:

  • There are no loops in the network. Paths through the network run from inputs through one or more neurons to outputs but never back on themselves.
  • Usage (‘recall’) is easy and literally straight-forward: the calculations are performed from inputs, forward, through to the outputs.
  • Training is more complex, using the back-propagation algorithm to determine synaptic weights (more on that later).

In contrast, learning in Hopfield networks is easy and recall requires more effort.

Hopfield networks are more obviously in keeping with the biological brain:

  • They are recurrent.
  • Recall is performed by presenting a stimulus to which the network responds, eventually settling down on a particular state.
  • There is a process that is obviously analogous to Hebbian learning, in which ‘neurons that fire together, wire together’.