What is going on here? Between each pair of adjacent layers is a set of weights, trained by the backpropagation learning algorithm. The weights are chosen so as to minimize the total squared error between the desired "target" patterns and the actual output patterns. Often the learning rule is applied iteratively until the total error falls below a given tolerance; here we simply apply the learning rule 100 iterations at a time.
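To make this concrete, here is a minimal sketch of such a network in Python, assuming one hidden layer of sigmoid units, plain batch gradient descent on the total squared error, and XOR-style training patterns. The layer sizes, learning rate, and tolerance are illustrative choices, not values taken from the text.

```python
# A minimal backpropagation sketch: input -> hidden -> output, sigmoid units.
# All specific values (sizes, learning rate, patterns) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training patterns: inputs X and desired "target" patterns T (here, XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# One set of weights between each pair of adjacent layers.
W1 = rng.normal(scale=0.5, size=(2, 3))   # input -> hidden
W2 = rng.normal(scale=0.5, size=(3, 1))   # hidden -> output
eta = 0.5                                  # learning rate (assumed)

def train(iterations):
    """Apply the backpropagation learning rule `iterations` times."""
    global W1, W2
    for _ in range(iterations):
        # Forward pass.
        H = sigmoid(X @ W1)                # hidden-layer activations
        Y = sigmoid(H @ W2)                # output-layer activations
        # Backward pass: deltas for the squared-error loss.
        dY = (Y - T) * Y * (1 - Y)         # output-layer delta
        dH = (dY @ W2.T) * H * (1 - H)     # hidden-layer delta
        # Gradient-descent weight updates.
        W2 -= eta * H.T @ dY
        W1 -= eta * X.T @ dH
    # Return the total squared error over all patterns.
    return np.sum((sigmoid(sigmoid(X @ W1) @ W2) - T) ** 2)

# Apply the learning rule 100 iterations at a time, as in the text,
# stopping once the total squared error falls below a tolerance.
tol, max_rounds = 0.01, 200
for _ in range(max_rounds):
    error = train(100)
    if error < tol:
        break
print("total squared error after training:", error)
```

Running the rule in batches of 100 and re-checking the error mirrors the iterate-until-tolerance procedure described above, while the round cap keeps the loop from running forever if training stalls.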
Also, because of the relatively small number of cells in the hidden layer, this network does not perform well. Theoretically, there are collections of patterns for which the hidden layer must grow arbitrarily large in order to encode all the patterns to within a reasonable error.