Back-Propagation Network

1.  Enter up to 4 sets of numerical patterns, then check the ones you want the network to "learn".
[Input fields: four patterns, each with five entries x1 … x5 and a checkbox]

2.  Click on Learn to encode these patterns into the network. The more times you click Learn, the better the network will perform.

3.  The result is a network that can "classify" data.  The network, with one input layer, one hidden layer, and one output layer, will choose whichever learned pattern (1, 2, 3, or 4) it thinks best matches the 5 given input values for x.
[Input fields x1 … x5 and output indicators 1 through 4]
To see this in action, enter 5 values for x and run the network.  The output will be a measure of the "best match" of the given input to the learned inputs (a sketch of this forward pass appears after these steps).

If you want to start over, simply reset the network.
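
What does running the network actually compute?  The following is a minimal Python sketch of the forward pass, assuming sigmoid activations and omitting bias terms; the array names W1 and W2, the helper classify, and the 3-cell hidden layer are illustrative choices, not the applet's actual code.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def classify(x, W1, W2):
        # Forward pass: 5 inputs -> hidden layer -> 4 outputs,
        # one output per learned pattern.
        hidden = sigmoid(W1 @ x)
        output = sigmoid(W2 @ hidden)
        # The largest output marks the "best match" among patterns 1-4.
        return int(np.argmax(output)) + 1, output

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(3, 5))   # input -to-hidden weights (3 hidden cells)
    W2 = rng.normal(size=(4, 3))   # hidden-to-output weights
    best, scores = classify(np.array([1.0, 0.0, 1.0, 0.0, 1.0]), W1, W2)
    print("best match: pattern", best, "outputs:", scores)

With untrained random weights, as here, the outputs are essentially arbitrary; training (step 2) is what pushes each output toward 1 for its own pattern and 0 for the others.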

 

What is going on here?  Between each pair of adjacent layers is a set of weights, which are trained by the learning algorithm using the back-propagation method.  The weights are chosen so as to minimize the total squared error between the desired "target" patterns and the actual output patterns.  Often, the learning rule is applied iteratively until the total error falls below a given tolerance.  Here we simply apply the learning rule 100 iterations at a time.
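
As a concrete illustration, here is a minimal sketch of one Learn click under the same assumptions as the sketch above (sigmoid activations, no biases): plain gradient descent on the total squared error, applied 100 iterations at a time.  The learning rate of 0.5 and the function name learn are hypothetical choices, not the applet's actual values.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def learn(patterns, targets, W1, W2, rate=0.5, iterations=100):
        # One "Learn" click: 100 back-propagation updates by gradient
        # descent on E = 0.5 * sum((output - target)**2).
        for _ in range(iterations):
            hidden = sigmoid(patterns @ W1.T)               # forward pass
            output = sigmoid(hidden @ W2.T)                 # one output per pattern
            err = output - targets                          # dE/d(output)
            d_out = err * output * (1.0 - output)           # output-layer delta
            d_hid = (d_out @ W2) * hidden * (1.0 - hidden)  # hidden-layer delta
            W2 -= rate * d_out.T @ hidden                   # gradient-descent updates
            W1 -= rate * d_hid.T @ patterns
        return 0.5 * np.sum(err ** 2)                       # error at the last iteration

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(3, 5))
    W2 = rng.normal(size=(4, 3))
    patterns = np.array([[1, 0, 1, 0, 1], [0, 1, 0, 1, 0]], dtype=float)
    targets = np.eye(4)[:2]          # one-hot targets for patterns 1 and 2
    for click in range(5):           # five "Learn" clicks
        print(f"total squared error: {learn(patterns, targets, W1, W2):.4f}")

Each call mimics one click of Learn: the printed error should shrink with every call, which is why clicking more times improves the network's answers.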

Also, because of the relatively small number of cells in the hidden layer, this network does not perform well.  Theoretically, there are collections of patterns for which the hidden layer must grow arbitrarily large in order to encode all the patterns to within a reasonable error.