
BackpropXOR


The source code is available.

This Java simulation implements the backpropagation error learning algorithm.
The network has two synaptic layers with two inputs and a single output.
The network can be trained to emulate boolean functions such as XOR, AND, and OR.
The hidden layer receives the two inputs plus a constant bias input of 1.0.
The output layer receives the two hidden layer outputs plus a bias input as well.
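
The description above corresponds to a small 2-2-1 feedforward network. The following is a minimal sketch of the forward pass, not the applet's actual source; the sigmoid squashing function and the weight array names are assumptions.

    // Sketch of the forward pass for the 2-2-1 architecture described above.
    // The sigmoid squashing function and the array names are assumptions.
    public final class ForwardPassSketch {

      // weightsHidden[j][k]: weight from input k to hidden unit j;
      // index 2 is the constant bias input of 1.0.
      static double[][] weightsHidden = new double[2][3];
      static double[]   weightsOutput = new double[3];

      static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
      }

      static double feedForward(double a, double b) {
        double[] inputs = { a, b, 1.0 };        // two inputs plus the bias
        double[] hidden = new double[3];
        for (int j = 0; j < 2; j++) {
          double sum = 0.0;
          for (int k = 0; k < 3; k++) {
            sum += weightsHidden[j][k] * inputs[k];
          }
          hidden[j] = sigmoid(sum);
        }
        hidden[2] = 1.0;                        // bias input to the output layer
        double net = 0.0;
        for (int k = 0; k < 3; k++) {
          net += weightsOutput[k] * hidden[k];
        }
        return sigmoid(net);                    // single output in (0, 1)
      }
    }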

I prefer a learning rate of 0.1.
The momentum constant is stable from 0.0 to just under 1.0.
You can change the function to be learned by double-clicking the picklist.
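
For readers unfamiliar with the momentum term, the sketch below shows the usual update rule it implies: each weight change is the gradient scaled by the learning rate plus a fraction (the momentum constant) of the previous change. The variable names and the 0.9 momentum value are assumptions, not the applet's code.

    final class MomentumUpdateSketch {
      static final double LEARNING_RATE = 0.1;  // the preferred value above
      static final double MOMENTUM      = 0.9;  // assumed example; stable anywhere in [0.0, 1.0)

      // Returns the new weight and stores the change so the next call can reuse it.
      static double update(double weight, double gradient, double[] lastDelta, int i) {
        double delta = -LEARNING_RATE * gradient + MOMENTUM * lastDelta[i];
        lastDelta[i] = delta;
        return weight + delta;
      }
    }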

At each iteration, random inputs between 0.0 and 1.0 are presented to the network.
The screen updates once every 100 iterations.
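
A rough sketch of that training loop follows. The method names trainOnePattern and repaintPlots are hypothetical stand-ins for the applet's own code, and thresholding the continuous inputs at 0.5 to obtain the XOR target is an assumption.

    java.util.Random random = new java.util.Random();
    for (long iteration = 1; iteration <= 20000; iteration++) {
      double a = random.nextDouble();             // random input in [0.0, 1.0)
      double b = random.nextDouble();
      boolean bitA = a >= 0.5;                    // assumed thresholding of the inputs
      boolean bitB = b >= 0.5;
      double target = (bitA ^ bitB) ? 1.0 : 0.0;  // XOR target for this pattern
      trainOnePattern(a, b, target);              // hypothetical backpropagation step
      if (iteration % 100 == 0) {
        repaintPlots();                           // screen update every 100 iterations
      }
    }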

The strip plot shows the RMS error for each block of 100 iterations.
The vertical axis goes from 0.0 to 1.0 (the RMS error).
The horizontal axis goes from 100 to the current number of iterations.
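
The plotted value is presumably the root-mean-square of the output error accumulated over each block of 100 iterations; a sketch of that calculation, under that assumption, is:

    // RMS of the errors collected since the last screen update (assumed to be 100 of them).
    static double rmsError(double[] errors) {
      double sumOfSquares = 0.0;
      for (double e : errors) {
        sumOfSquares += e * e;
      }
      return Math.sqrt(sumOfSquares / errors.length);
    }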

The X-Y scatter plot shows how the neural net classified an input point.
Green indicates that the output was >= 0.5; red indicates that it was < 0.5.
The first input, A, is on the horizontal axis and B is on the vertical.
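
In code, the color rule above amounts to a simple threshold on the network output (a sketch, assuming java.awt.Color):

    static java.awt.Color colorFor(double output) {
      return output >= 0.5 ? java.awt.Color.GREEN : java.awt.Color.RED;
    }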

While the linearly separable functions converge fairly rapidly, the XOR and XNOR functions may take up to 20,000 iterations or so. The RMS error may not drop much for these functions, but you will be able to watch the performance improve in the X-Y scatter plot. If the network doesn't train within a reasonable amount of time, re-randomize the weights, as it may be stuck in a local minimum.
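
Re-randomizing the weights amounts to overwriting them with small random values; a sketch, with the [-1.0, 1.0) range as an assumption:

    static void randomizeWeights(double[] weights, java.util.Random random) {
      for (int i = 0; i < weights.length; i++) {
        weights[i] = 2.0 * random.nextDouble() - 1.0;  // uniform in [-1.0, 1.0)
      }
    }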

One thing you might try is setting the function to AND, randomizing the weights to restart everything, letting the network train on the AND function for a few thousand iterations, and then switching the function to NAND. The RMS error will briefly shoot up to 1.0 until the network retrains.

If you have any suggestions or comments about this program, please feel free to contact me.
My e-mail address can be found at http://www.alumni.caltech.edu/~croft.

-- David Croft