Hopfield Network

The Hopfield network was developed by John Hopfield in the early 1980s. It differed from other neural networks of the day in three significant ways:

  1. It ''fires'' asynchronously at random rather than sequentially. That is, at each step an artificial neuron in the network is chosen at random and its state alone is updated.
  2. The ''on'' state corresponds to 1, while the ''off'' state corresponds to -1 (instead of the usual ''off'' state of 0).
  3. Weights $\omega _{ij}$ are assumed to be symmetric, which is to say that $\omega _{ij}=\omega _{ji}$.

The third condition implies that in a Hopfield network, the axon of neuron $i$ connects to the dendrites of neuron $j$ with the same weight that the axon of neuron $j$ connects to the dendrites of neuron $i$.
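The three conditions can be illustrated with a minimal sketch in Python using NumPy. The weights and the number of neurons here are arbitrary illustrative choices, and the zero threshold in the firing rule is one common convention, not something the text above has fixed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # illustrative network size

# Condition 3: a symmetric weight matrix with zero diagonal.
A = rng.normal(size=(n, n))
W = (A + A.T) / 2
np.fill_diagonal(W, 0)

# Condition 2: each state is +1 ("on") or -1 ("off").
x = rng.choice([-1, 1], size=n)

# Condition 1: repeatedly pick one neuron at random and update only its state.
for _ in range(100):
    i = rng.integers(n)
    s = W[i] @ x               # weighted sum of inputs into neuron i
    x[i] = 1 if s >= 0 else -1  # fire: threshold the sum at zero
```

Because updates are asynchronous and $W$ is symmetric, such a network settles into a stable pattern rather than oscillating, which is the property the symmetric-weight assumption buys.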

Condition 3 is not a biologically realistic assumption, but it is a valuable one. To see its value, let us use a little matrix algebra. To begin with, the weight matrix $W$ is defined to be \[ W=\left[ \begin{array}{ccccc} 0 & \omega _{21} & \omega _{31} & \ldots & \omega _{n1} \\ \omega _{21} & 0 & \omega _{32} & \ldots & \omega _{n2} \\ \omega _{31} & \omega _{32} & 0 & \ldots & \omega _{n3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \omega _{n1} & \omega _{n2} & \omega _{n3} & \ldots & 0 \end{array} \right] \] Thus, condition 3 implies that $W$ is a symmetric matrix, which is crucial to the rigorous analysis of the Hopfield network.

Similarly, let us denote the state vector and output vector, respectively, by \[ S=\left[ \begin{array}{c} s_{1} \\ s_{2} \\ \vdots \\ s_{n} \end{array} \right] \qquad \mathrm{and}\qquad X=\left[ \begin{array}{c} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{array} \right] \] Then the "sum" component of the "sum and fire" algorithm can be written in matrix form as \[ S=WX \] and condition 1 allows us to consider the entire network to be a Markov chain over the $2^{n}$ possible output vectors of the system (but that is beyond the scope of this presentation).
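The matrix form of the "sum" step can be checked numerically. Here is a small sketch, with an arbitrary illustrative symmetric $W$ and output vector $X$, confirming that the single matrix product $S = WX$ agrees with the componentwise sums $s_i = \sum_j \omega_{ij} x_j$:

```python
import numpy as np

# An illustrative symmetric weight matrix with zero diagonal (n = 3).
W = np.array([[ 0.0,  1.0, -2.0],
              [ 1.0,  0.0,  0.5],
              [-2.0,  0.5,  0.0]])

# An illustrative output vector with +/-1 entries.
X = np.array([1.0, -1.0, 1.0])

# The "sum" step as a single matrix product.
S = W @ X  # S = [-3.0, 1.5, -2.5]

# Componentwise check: s_i = sum over j of w_ij * x_j.
S_loop = np.array([sum(W[i, j] * X[j] for j in range(3)) for i in range(3)])
assert np.allclose(S, S_loop)
```

The matrix form is not just notational convenience: it is what lets the symmetric-$W$ analysis (and the Markov-chain view of the $2^n$ output vectors) be carried out with standard linear algebra.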