My previous post (‘Spiky Thoughts’) set out some thoughts about how an efficient simulation of a spiking neural network might be achieved. It advocated:

- Event-driven as opposed to cycle-based simulation.
- Using simple ‘leaky integrate and fire’ (LIF) models of neurons, as opposed to models that reflected actual biological neuron behaviour more accurately.
- Training through ‘spike-timing-dependent plasticity’ (STDP), as opposed to the normal method of ‘back-propagation’.

I hinted at training a spiking network to be an XOR gate – for demonstration purposes. Here, I provide some code to implement a spiking net as an XOR gate, but I ignore all three previous motivations:

- it is cycle-based,
- it uses the Izhikevich model of a neuron, and
- it doesn’t use STDP training – in fact, it doesn’t do training as such at all.

Why? Because it is a starting point:

- to demonstrate that a tiny network can behave as an XOR gate.
- to provide a reference point to see just how much more efficient an event-driven and/or LIF model implementation can be.
- to provide a ‘half-trained’ set of weights to help tweak STDP learning (it is easier to debug something you know is working but can be improved).

## Spiky Logic

The C code below is based on the open-source code by Felipe Galindo Sanchez (see the research paper here). It has been re-written for my own understanding and there is one slight modification. __His__ inputs are __single__ spikes on the A and B inputs of the XOR gate, with these spikes timed against a reference signal. The time at which a spike is injected into the network identifies whether it is a logic 0 (aligned with the reference spike) or a logic 1 (5ms later). But I use the distance __between 2 spikes__ to encode a logic 0 or logic 1: 6ms for the former, 12ms for the latter.

Both schemes demonstrate the time-dependent nature of spiking neurons, in contrast with second-generation __rate__ neurons. The firing rates of my inputs and outputs are 2 firings per epoch in all cases. Note: an epoch is the interval in which 1 calculation (inference) is performed. It is 30ms here.

The code includes a mechanism to adjust the inputs away from the rigid 6ms/12ms specification:

- skewing the spikes between the XOR gate’s inputs A and B, and
- modifying the distance between input spikes away from the precise 6ms/12ms values.

The network does not perform well in this regard, but making it more robust (for example by using a larger network) is not of interest here.

## The Network

The network comprises 10 nodes:

- nodes 0…2: these are 3 input nodes, fully-connected to…
- nodes 3…8: these are 6 neurons in a hidden layer, fully-connected to…
- node 9: this is the 1 output neuron.

Nodes 0 and 1 are the A and B inputs for the XOR circuit. Node 2 is a training input but is not used because…

## The Training

As already noted, there is no training as such. There are just 2 inputs (4 input states) and only limited variations of input spike timing (skewing and stretching). If there are 3 possible variations of skew (the B input synchronized with the A input, 1ms earlier, or 1ms later), there are only 3 x 4 = 12 input combinations.

So we simulate 12 runs in turn where each run lasts for 30ms (each run is called an ‘epoch’; simulation time is sliced into these epochs with inputs applied at the start and the output response judged at the end of the epoch).

There are only 24 weights in the network: 3 x 6 between the 3 inputs and the 6 hidden neurons and another 6 from those hidden neurons to the 1 output neuron.

A ‘random search’ algorithm is used, which is just a fancy way of saying that we randomly generate sets of weights, simulate with them, measure how good the result was (with an ‘error measurement’) and keep hold of the weights that produced the most accurate output response. This provides all the infrastructure for training, without doing anything more sophisticated, such as deriving each new set of weights from a previous set that has proved to be good.

Here is the basic code for the infrastructure for injecting inputs, simulating a network and judging the response of the network:

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

// Randomization
#define get_random_logic() ((rand()<(RAND_MAX/2)) ? 1 : 0) // 50% either 0 or 1

// Reporting: how much gets printed to stdout
#define QUIET   0
#define VERBOSE 1

// Function prototypes
void inject_input_spikes(
    int node,        // The input in question
    int logic_value, // Inject a 0 or 1?
    int step,        // Current simulation time step
    int offset,      // By how much the input spikes will be injected later/earlier
    int stretch,     // By how much the input spikes will be stretched apart/closer
    int reporting);  // Whether it will report details to stdout
void reset_neuron_states(void); // All nodes (inputs and neurons) to their equilibrium state
void update_neuron(
    int step,         // Current simulation time step
    int layer,        // Neuron layer: distinguish input/hidden/output layer
    int neuron,       // The neuron in question
    int from_synapse, // Synapse connections are from all neurons from this node...
    int to_synapse);  // ... to this node, inclusive.
int score_output(
    int wanted_output); // Should the output be a 0 or 1?
void dump_flags(
    int step); // Current simulation time step

// Success counters
#define INVLD 0
#define BAD   1
#define GOOD  2

// Output error measurement
// e.g. anything within 2 of the ideal answer is 'good'
// Squaring: the error in this case will be 4.
#define NOT_BAD      4
#define WORST_SCORE 25

/////////////////////////////////////////////////////////////
// Network configuration
/////////////////////////////////////////////////////////////
// Nodes:
//   0..1  Inputs A and B into XOR gate
//   2     Training neuron, set to A xor B
//   3..8  Hidden neurons
//   9     Output neuron
// The code expects synapse numbers to be contiguous:
//   * Hidden layer uses synapses from 0..2
//   * Output layer uses synapses from 2..8
#define INPUTS_LAYER 0
#define HIDDEN_LAYER 1
#define OUTPUT_LAYER 2
#define NUM_INPUTS          2
#define NUM_TRAINING_INPUTS 1
#define NUM_LAYER1_NEURONS  6
#define NUM_LAYER2_NEURONS  1
#define NUM_NODES (NUM_INPUTS + NUM_TRAINING_INPUTS + NUM_LAYER1_NEURONS + NUM_LAYER2_NEURONS) // Inputs and neurons
#define MAX_NUM_SYNAPSES (NUM_INPUTS + NUM_TRAINING_INPUTS + NUM_LAYER1_NEURONS)
#define TRAINING_INPUT_NODE (NUM_INPUTS)
#define OUTPUT_NODE         (NUM_NODES - 1)
#define FIRST_HIDDEN_NODE   (NUM_INPUTS + NUM_TRAINING_INPUTS)
#define NUM_LAYER1_SYNAPSES (NUM_INPUTS + NUM_TRAINING_INPUTS)
#define NUM_LAYER2_SYNAPSES (NUM_LAYER1_NEURONS + NUM_TRAINING_INPUTS)

// Each simulation epoch is simulated for this many time steps...
#define NUM_STEPS 30

// Variation of input spike times; a VAR of 3 means -1...+1
#define DEV 0         // Maximum deviation from ideal
#define VAR (2*DEV+1) // Variation
#define ALL_COMBINATIONS (4*VAR)

/////////////////////////////////////////////////////////////
// Global variables
/////////////////////////////////////////////////////////////
// Outputs: values for spike_train_state
#define SPIKE_TRAIN_INIT           0
#define SPIKE_TRAIN_INVALID       -1
#define SPIKE_TRAIN_DELTA_OFFSET 100
#define SPIKE_TRAIN_LOGIC0         6
#define SPIKE_TRAIN_LOGIC1        12
// (Sized as NUM_NODES: the loops below index up to OUTPUT_NODE inclusive)
int spike_train_state[NUM_NODES]; // For determining output value from spike times
int spike_flags[NUM_NODES];       // Flags for node spikes (inputs and neurons)

///////////////////////////////////////////////
// Main simulation
///////////////////////////////////////////////
float run_xor_snn(
    int num_epochs, // Number of simulations (with different inputs) to run
    int teaching,   // Is this a teaching run (with modification of weights) or just a test?
    int reporting)  // Will lots of status be reported to stdout?
{
    // XOR function: the function to be learnt by the network
    int input_a, input_b, output_y;
    int neuron;
    int epoch;
    int step;
    int error;
    int cum_error; // Cumulative score over all epochs
    float score;
    int count[4];
    int input_skew;    // for skewing inputs A and B w.r.t. one another
    int input_stretch; // for stretching width of inputs (shorter or longer)

    // No previous spikes; set the times to 'ages ago'
    for(neuron=0; neuron <= OUTPUT_NODE; neuron++) {
        spike_train_state[neuron] = SPIKE_TRAIN_INIT;
    }

    // A record of progress...
    cum_error = 0;
    count[GOOD]  = 0;
    count[BAD]   = 0;
    count[INVLD] = 0;

    // Main simulation loop
    for(epoch=0; epoch < num_epochs; epoch++){
        if (epoch < ALL_COMBINATIONS) {
            // Systematic stimulus
            // Going through all the combinations
            // of input logic and timing variations
            switch(epoch % 4) {
                case 0 : input_a = 0; input_b = 0; break;
                case 1 : input_a = 0; input_b = 1; break;
                case 2 : input_a = 1; input_b = 0; break;
                case 3 : input_a = 1; input_b = 1; break;
            }
            input_skew = (DEV==0) ? 0 : ((epoch >> 2)%VAR) - DEV; // -DEV...+DEV
            input_stretch = 0;
        } else {
            // Randomized stimulus
            input_a = get_random_logic();
            input_b = get_random_logic();
            input_skew    = (DEV==0) ? 0 : (rand() %VAR) - DEV; // -DEV...+DEV
            input_stretch = (DEV==0) ? 0 : (rand() %VAR) - DEV; // -DEV...+DEV
        }
        output_y = input_a ^ input_b; // XOR: This is the correct result
        if (reporting==VERBOSE) printf("EPOCH %d STIM %d XOR %d\n", epoch, input_a, input_b);

        for(neuron=0; neuron <= OUTPUT_NODE; neuron++)
            spike_train_state[neuron] = SPIKE_TRAIN_INIT;

        for(step=0; step < NUM_STEPS; step++){
            for(neuron=0; neuron <= OUTPUT_NODE; neuron++)
                spike_flags[neuron] = 0; // Ensure all flags are clear

            /*********** Generate inputs ***********/
            inject_input_spikes(0, input_a, step, 0, input_stretch, reporting);
            inject_input_spikes(1, input_b, step, input_skew, 0, reporting); // Delay by -2 to +2
            if(teaching) inject_input_spikes(2, output_y, step, 0, 0, reporting);

            /*********** Forward update of network ***********/
            // [Fixed size!!!]
            for(neuron=0; neuron <= 2; neuron++) // Inputs
                update_neuron(step, 0, neuron, 0, 0);
            for(neuron=3; neuron <= 8; neuron++) // Hidden layer
                update_neuron(step, 1, neuron, 0, 2);
            for(neuron=9; neuron <= 9; neuron++) // Output layer
                update_neuron(step, 2, neuron, 2, 8);

            if (step == 0) { reset_neuron_states(); }
            if (reporting==VERBOSE) dump_flags(step);
        }

        /*********** Determine how good the outputs were ***********/
        error = score_output(output_y); // a high score is bad
        cum_error += error;
        if (error <= NOT_BAD) {
            count[GOOD] += 1;
            if (reporting==VERBOSE) printf("BADNESS good wanted %d trainstate %d error %d cumulative %d\n",
                output_y, spike_train_state[OUTPUT_NODE], error, cum_error);
        } else if (error == WORST_SCORE) {
            count[INVLD] += 1;
            if (reporting==VERBOSE) printf("BADNESS invalid wanted %d trainstate %d error %d cumulative %d\n",
                output_y, spike_train_state[OUTPUT_NODE], error, cum_error);
        } else {
            count[BAD] += 1;
            if (reporting==VERBOSE) printf("BADNESS bad wanted %d trainstate %d error %d cumulative %d\n",
                output_y, spike_train_state[OUTPUT_NODE], error, cum_error);
        }
    }

    // Worst possible error is WORST_SCORE*num_epochs
    // Make the score within a range of 0.0 (worst) to 1.0 (perfect)
    score = 1.0 - (float)cum_error / (float)(WORST_SCORE*num_epochs);
    if (reporting==VERBOSE) printf("GOODBADINV good %d bad %d invalid %d score=%.3f\n",
        count[GOOD], count[BAD], count[INVLD], score);
    return score; // Overall score of how big the errors were over all the simulation epochs
} // run_xor_snn

// To display the maps of spiking of all the neurons over the timesteps
void dump_flags(int step)
{
    int neuron;
    printf("STEP %4d SPIKES ", step);
    for(neuron=0; neuron <= OUTPUT_NODE; neuron++)
        if (spike_flags[neuron]) printf("1"); else printf(".");
    printf("\n");
}
```

The scoring system for selecting the best set of weights is as follows:

- If a logic 0 is the correct output and the network generates 2 spikes, 6ms apart, that is a perfect output and the error value is 0.
- If they are 5ms or 7ms apart, the distance away from ideal is 1 and the error value is the square of this (1).
- Likewise, spikes 4ms or 8ms apart produce an error of 4, and so on up to a maximum error score of 25 for wider deviations (which is deemed to be ‘bad’).
- The same scheme applies to logic 1, with the ideal separation of 12ms.
- If there are not exactly 2 spikes, the output is deemed to be ‘invalid’ and given a maximum error score of 25.

When run, it reports spike maps like the one below (1 xor 1 producing a correct 0), showing which nodes fire when (input node 0 is left-most, output neuron 9 is right-most; a 1 indicates the node is firing)…

```
EPOCH 3 STIM 1 XOR 1
STEP    0 SPIKES ..........
STEP    1 SPIKES ..........
STEP    2 SPIKES ..........
INSPIKE neuron 0 start step 3
INSPIKE neuron 1 start step 3
STEP    3 SPIKES 11........
STEP    4 SPIKES ..........
STEP    5 SPIKES ...11...1.
STEP    6 SPIKES ..........
STEP    7 SPIKES ..........
STEP    8 SPIKES ...1....1.
STEP    9 SPIKES ....1.....
STEP   10 SPIKES .........1
STEP   11 SPIKES ..........
STEP   12 SPIKES ...1......
STEP   13 SPIKES ........1.
STEP   14 SPIKES ..........
INSPIKE neuron 0 logic1 step 15
INSPIKE neuron 1 logic1 step 15
STEP   15 SPIKES 11..1.....
STEP   16 SPIKES ...1.....1
STEP   17 SPIKES ........1.
STEP   18 SPIKES ....1.....
STEP   19 SPIKES ...1......
STEP   20 SPIKES ..........
STEP   21 SPIKES ........1.
STEP   22 SPIKES ...11.....
STEP   23 SPIKES ..........
STEP   24 SPIKES ..........
STEP   25 SPIKES ..........
STEP   26 SPIKES ...1......
STEP   27 SPIKES ..........
STEP   28 SPIKES ..........
STEP   29 SPIKES ..........
OUTPUT good wanted 0 trainstate 106 error 0 cumulative 1
GOODBADINV good 4 bad 0 invalid 0 score=0.990
```

This shows the last of 4 runs (epochs), which produced a perfect output (error=0). All 4 runs produced good outputs, but one of them had the timing off by 1ms.

## The Neuron

Eugene Izhikevich’s model of the neuron (IEEE Transactions on Neural Networks, vol. 14 no. 6, November 2003) will be used instead of the leaky integrate-and-fire model. The model is expressed as differential equations with 2 variables (u and v) and 4 parameters (a, b, c and d), as shown in the figure below. The variable *v* represents the membrane potential – the ‘output voltage’ which spikes. Note, in case you are unfamiliar with the notation: *v’* represents the gradient *dv/dt* and *u’* represents *du/dt*. But you don’t need to worry about the math – just see the ‘Izhikevich equations’ part of the code below.

Figure credit: IEEE
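For reference, the equations from the paper (they also appear as comments (1) and (2) in the code below) are:

$$v' = 0.04v^2 + 5v + 140 - u + I$$

$$u' = a(bv - u)$$

with the after-spike reset: if $v \ge 30\,\text{mV}$, then $v \leftarrow c$ and $u \leftarrow u + d$. (The paper uses a 30mV spike peak; the code below detects firing at its saturated value of 35mV.)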

In Galindo Sanchez’s C code, the parameters

*a* = 0.02, *b* = 0.2, *c* = -65mV, *d* = 8

are used for excitatory neurons and

*a* = 0.10, *b* = 0.2, *c* = -65mV, *d* = 2

are used for inhibitory neurons.

Thus, according to Izhikevich’s classification, we are using Regular Spiking (RS) excitatory and Fast Spiking (FS) inhibitory neurons.

```c
// Model time step
#define HALF_MILLISECOND 0.5
#define MILLISECOND      1.0

// Decay of synapses:
#define TAU_S 10.0 // Time constant (ms)
#define S_DECAY_FACTOR (1.0 - (MILLISECOND / TAU_S))

#define EXCITATORY_SYNAPSE_POTENTIAL   0.0 // mV Excitatory synapse potential
#define INHIBITORY_SYNAPSE_POTENTIAL -85.0 // mV Inhibitory synapse potential

// Inhibitory / excitatory neurons
#define NUM_INHIBITORY 2
int neuron_is_inhibitory(int neuron)
{
    // Only the first NUM_INHIBITORY neurons in the hidden layer are inhibitory
    if ((neuron >= FIRST_HIDDEN_NODE) && (neuron < FIRST_HIDDEN_NODE+NUM_INHIBITORY))
        return 1;
    else
        return 0;
}

// Function prototype, for spikes generated either at inputs or by neurons:
void generate_spike(int neuron, int step);

// Store for all nodes; many values in these arrays will not be used
// Only use from NUM_INPUTS onwards
float syn_s[NUM_NODES];
float syn_weights[NUM_NODES][MAX_NUM_SYNAPSES];
float izh_u[NUM_NODES]; // Izhikevich model 'u' state
float izh_v[NUM_NODES]; // Izhikevich model 'v' state

void update_neuron(int step, int layer, int neuron, int from_synapse, int to_synapse)
{
    /************* Update synaptic inputs *****************/
    int synapse;
    syn_s[neuron] *= S_DECAY_FACTOR;
    if (izh_v[neuron] >= 35.0) { // firing input (t-1)
        syn_s[neuron] += 1.0; // A neuron spiking will create a decaying post-synaptic potential
    } else if ((layer==INPUTS_LAYER) && (spike_flags[neuron] == 1)) { // synaptic input
        syn_s[neuron] += 1.0; // Likewise, create a decaying post-synaptic potential
    }

    /************* Update synaptic conductances *****************/
    // sum of weights
    float sum_g_excit = 0.0;
    float sum_g_inhib = 0.0;
    // Perform sum of synaptic conductances per neuron
    if (layer > INPUTS_LAYER) {
        for (synapse = from_synapse; synapse <= to_synapse; synapse++) {
            if (neuron_is_inhibitory(synapse))
                sum_g_inhib += syn_weights[neuron][synapse] * syn_s[synapse];
            else
                sum_g_excit += syn_weights[neuron][synapse] * syn_s[synapse];
        }
    }

    /************* Izhikevich equations *****************/
    float dv, du; // deltas: dv/dt and du/dt
    float izh_I;
    int iteration;
    if (layer==INPUTS_LAYER) {
        // Not used (setting to equilibrium point just for information only)
        izh_v[neuron] = -70.0;
        izh_u[neuron] = -14.0;
    } else if (izh_v[neuron] >= 35.0) { // Firing
        izh_v[neuron] = -65.0; // Reset (refractory period); parameter c=-65mV
        izh_u[neuron] += neuron_is_inhibitory(neuron) ? 2.0 : 8.0; // parameter d
        generate_spike(neuron, step);
    } else { // not firing
        izh_I = (sum_g_excit * (EXCITATORY_SYNAPSE_POTENTIAL - izh_v[neuron]))
              + (sum_g_inhib * (INHIBITORY_SYNAPSE_POTENTIAL - izh_v[neuron]));
        for(iteration=1; iteration<=2; iteration++) { // Two 0.5 ms steps
            // v' = 0.04v^2 + 5v + 140 - u + I   (1)
            dv = (((0.04 * izh_v[neuron]) + 5.0) * izh_v[neuron]) + 140.0 - izh_u[neuron] + izh_I;
            // u' = a(bv - u)                    (2)
            if(neuron_is_inhibitory(neuron)) {
                du = 0.1 * ((0.2 * izh_v[neuron]) - izh_u[neuron]);  // parameters a=0.1; b=0.2
            } else {
                du = 0.02 * ((0.2 * izh_v[neuron]) - izh_u[neuron]); // parameters a=0.02; b=0.2
            }
            izh_v[neuron] += (dv * HALF_MILLISECOND);
            izh_u[neuron] += (du * HALF_MILLISECOND);
        }
        if(izh_v[neuron] > 35.0) { // Saturate
            izh_v[neuron] = 35.0;
        }
    }
    return;
}

// Force all neurons to their equilibrium state
void reset_neuron_states(void)
{
    int neuron;
    for(neuron=0; neuron <= OUTPUT_NODE; neuron++) {
        izh_u[neuron] = -14.0;
        izh_v[neuron] = -70.0;
        syn_s[neuron] = 0.0;
    }
}
```

The *spike_flags[n]* flag indicates whether node *n* has fired. It is used for printing out the spike-train maps for visualization and, here, for handling __input__ spikes (as opposed to spikes from other neurons).

It is the synaptic current (*izh_I*) that feeds into the Izhikevich model. It is a weighted sum of the (exponentially decaying) post-synaptic potentials from all of the neuron’s synapses. If we are bothered about units, we are generating a current (I) from voltages. The ‘g’ in the *sum_g_excit* and *sum_g_inhib* variables is the standard notation for conductance, the reciprocal of electrical resistance (alongside the well-known equation *V=IR* there is its lesser-known inverse *I=gV*). But basically, we are just calculating a weighted sum of the inputs to produce a ‘postsynaptic potential’ (‘potential’ = voltage) that controls the (Izhikevich-modelled) firing of the neuron: it fires when excitatory firings push the membrane voltage *v* up to the saturation value (35mV in this code).

The weights are stored in the 10 x 9 array *syn_weights* (NUM_NODES x MAX_NUM_SYNAPSES), but it is only the 3 x 6 input-to-hidden-layer connection values and 6 x 1 hidden-to-output-layer connection values that are used.

## Inputs and Outputs

Inputting spikes into the network is a straightforward setting of flags but, in generating spikes, we maintain a ‘*spike_train_state*’ so that we can score the output of the network in order to optimize weight settings.

```c
// Inputs
#define START_TIME   3 // Just after resetting everything (but allow for -2 input_skew)
#define INPUT0_TIME  9 // Separation of 6
#define INPUT1_TIME 15 // Separation of 12

void generate_spike(int neuron, int step)
{
    // Put information into the various data structures
    spike_flags[neuron] = 1;
    // Sequence of spike_train_state values:
    // 0. Initially SPIKE_TRAIN_INIT (no spikes seen yet)
    // 1. Then set to time of 1st spike (range 1...NUM_STEPS)
    // 2. Then set to the delta between 1st and 2nd spikes plus 100.
    // 3. If more than 2 spikes, set invalid (and stay invalid)
    if (spike_train_state[neuron] == SPIKE_TRAIN_INIT) { // First spike
        spike_train_state[neuron] = step; // Time of 1st spike
    } else if ((spike_train_state[neuron] > SPIKE_TRAIN_INIT) &&
               (spike_train_state[neuron] <= NUM_STEPS)) { // Second spike
        spike_train_state[neuron] = SPIKE_TRAIN_DELTA_OFFSET + step - spike_train_state[neuron]; // Delta
    } else { // Third or later spike
        spike_train_state[neuron] = SPIKE_TRAIN_INVALID;
    }
}

void inject_input_spikes(int node, int logic_value, int step, int offset, int stretch, int reporting)
{
    if (step == START_TIME + offset) {
        generate_spike(node, step);
        if (reporting==VERBOSE) printf("INSPIKE node %d start step %d\n", node, step);
    }
    if ((step == (INPUT0_TIME + offset + stretch)) && (logic_value == 0)) {
        generate_spike(node, step);
        if (reporting==VERBOSE) printf("INSPIKE node %d logic0 step %d\n", node, step);
    }
    if ((step == (INPUT1_TIME + offset + stretch)) && (logic_value == 1)) {
        generate_spike(node, step);
        if (reporting==VERBOSE) printf("INSPIKE node %d logic1 step %d\n", node, step);
    }
    return;
}

int score_output(int wanted_output) // To judge success
{
    int diff;
    int result;
    if (spike_train_state[OUTPUT_NODE] < SPIKE_TRAIN_DELTA_OFFSET) { // Not exactly 2 spikes
        result = WORST_SCORE;
    } else if (wanted_output == 1) {
        // error = difference squared
        diff = spike_train_state[OUTPUT_NODE] - SPIKE_TRAIN_DELTA_OFFSET - SPIKE_TRAIN_LOGIC1;
        result = diff * diff;
    } else { // (wanted_output == 0)
        // error = difference squared
        diff = spike_train_state[OUTPUT_NODE] - SPIKE_TRAIN_DELTA_OFFSET - SPIKE_TRAIN_LOGIC0;
        result = diff * diff;
    }
    if (result > WORST_SCORE) {
        result = WORST_SCORE;
    }
    return result;
}
```

## The Main Program

The *run_xor_snn* function, above, simulates the network with all the input combinations and returns a score for how well the network performed. The main program calls this function over and over again (for *NUM_TRIALS* trials) with a different (randomized) set of weights each time, keeps note of the best set of weights it has found, and then re-runs that best set at the end with verbose reporting. (For a large number of trials, the verbosity must be set to *QUIET* until this final run, or the output will be gigabytes.)

There is some coding in place in preparation for training but that will need to be developed a little bit further.

```c
#define NUM_TRIALS 10000000
#define TEST  0
#define LEARN 1
#define TEACH 1

// Function prototypes
void fix_training_weights(float weight); // Set all the weights from TRAINING_INPUT_NODE to this 1 value
void clear_weights(void);
void randomize_weights(float scaling, int from_neur, int to_neur, int from_syn, int to_syn);
// e.g. randomize_weights(1.0, 3, 8, 0, 2) operates on all weights connecting input nodes 0..2 to neurons 3..8.
// Similarly for reporting...
void dump_weights(float weights[NUM_NODES][MAX_NUM_SYNAPSES], int from_neur, int to_neur, int from_syn, int to_syn);
void copy_weights(float from[NUM_NODES][MAX_NUM_SYNAPSES], float to[NUM_NODES][MAX_NUM_SYNAPSES]);

int main(void)
{
    int trial, verbosity, learning;
    float best_syn_weights[NUM_NODES][MAX_NUM_SYNAPSES];
    verbosity = QUIET; // verbosity = either QUIET or VERBOSE
    learning  = TEST;  // learning = TEACH or TEST
    srand(1);          // seed
    float this_score, best_score;
    int best_trial = -1;
    best_score = 0.0;  // Worst is the output of every trial being invalid
    for(trial=0; (trial<NUM_TRIALS) && (best_score < 1.0); trial++) {
        if ((trial%1000)==0) printf("TRIALNOW %d bestscore %f besttrial %d\n", trial, best_score, best_trial);
        if (verbosity==VERBOSE) printf("TRIAL %d\n", trial);
        clear_weights();
        randomize_weights(1.0, 3, 8, 0, 2); // Hidden layer
        randomize_weights(1.0, 9, 9, 2, 8); // Output layer
        if(learning) {
            fix_training_weights(1.0);
            this_score = run_xor_snn(100, learning, verbosity);
        } else {
            fix_training_weights(0.0);
            this_score = run_xor_snn(ALL_COMBINATIONS, learning, verbosity);
        }
        if (verbosity==VERBOSE) dump_weights(syn_weights, 3, 8, 0, 2); // Hidden layer
        if (verbosity==VERBOSE) dump_weights(syn_weights, 9, 9, 2, 8); // Output layer
        if (this_score >= best_score) {
            best_score = this_score;
            best_trial = trial;
            printf("BESTSOFAR %f trial %d\n", best_score, best_trial);
            dump_weights(syn_weights, 3, 8, 0, 2); // Hidden layer
            dump_weights(syn_weights, 9, 9, 2, 8); // Output layer
            copy_weights(syn_weights, best_syn_weights); // Store for reference later
            this_score = run_xor_snn(ALL_COMBINATIONS, TEST, VERBOSE); // test for all 4 input combinations and all VAR skews
        }
    }
    printf("VERIFYBESTSCORE %f trial %d\n", best_score, best_trial);
    copy_weights(best_syn_weights, syn_weights);
    fix_training_weights(0.0);
    dump_weights(syn_weights, 3, 8, 0, 2); // Hidden layer
    dump_weights(syn_weights, 9, 9, 2, 8); // Output layer
    this_score = run_xor_snn(ALL_COMBINATIONS, TEST, VERBOSE); // test for all 4 input combinations and all VAR skews
    return 0;
}
```

## And finally

And finally, below are the innards of the functions used above that have not yet been defined. They all handle weights.

```c
void clear_weights(void)
{
    int neuron, synapse;
    for(neuron=0; neuron < NUM_NODES; neuron++) {
        for(synapse=0; synapse < MAX_NUM_SYNAPSES; synapse++) {
            syn_weights[neuron][synapse] = 0.0;
        }
    }
}

#define get_random_weight() (2.0*rand()/RAND_MAX - 1.0) // Float between -1.0 and +1.0

void randomize_weights(float scaling, int from_neur, int to_neur, int from_syn, int to_syn)
{
    // Adds a scaled random amount to all weights.
    // Use to set weights initially after a clear
    // or to locally disturb good weights during simulated annealing
    int neuron, synapse;
    for(neuron=from_neur; neuron <= to_neur; neuron++) {
        for(synapse=from_syn; synapse <= to_syn; synapse++) {
            syn_weights[neuron][synapse] += scaling * get_random_weight();
        }
    }
}

void fix_training_weights(float weight)
{
    // STDP may update these but we want to fix them to control training
    int neuron;
    for(neuron=0; neuron < NUM_NODES; neuron++) {
        syn_weights[neuron][TRAINING_INPUT_NODE] = weight;
    }
}

// For saving and restoring good sets of weights and potentially more...
void copy_weights(float from[NUM_NODES][MAX_NUM_SYNAPSES], float to[NUM_NODES][MAX_NUM_SYNAPSES])
{
    int neuron, synapse;
    for(neuron=0; neuron < NUM_NODES; neuron++) {
        for(synapse=0; synapse < MAX_NUM_SYNAPSES; synapse++) {
            to[neuron][synapse] = from[neuron][synapse];
        }
    }
}

void dump_weights(float weights[NUM_NODES][MAX_NUM_SYNAPSES], int from_neur, int to_neur, int from_syn, int to_syn)
{
    int neuron, synapse;
    for(neuron=from_neur; neuron <= to_neur; neuron++) {
        for(synapse=from_syn; synapse <= to_syn; synapse++) {
            printf("WEIGHT w[%d][%d] = %6.3f ;\n", neuron, synapse, weights[neuron][synapse]);
        }
    }
    return;
}
```

Next time: We are now ready to add supervised learning to the code, to use STDP to train networks rather than performing a blind random search.