Cicada ---> Online Help Docs

Example: memory networks in Cicada

Cicada originally aimed to be a neural network engine, missed by a mile, and ended up as a scripting language. So let’s use it to implement an associative neural network: a pattern-learning algorithm that learns to associate inputs with desired outputs.

Algorithm: We’ll use a training algorithm for symmetric networks (J. R. Movellan, Contrastive Hebbian learning in interactive networks, 1990) which trains in two phases: clamped and free. In the clamped phase, set the input neuron activities to the network input, and set the output neuron activities to the pattern we are trying to learn. Then repeatedly adjust the other ‘hidden’ neurons’ activities xi using the formula:


   xi  <--  1 / [ 1 + exp(-∑j wij xj - bi) ].
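As a rough illustration (in Python rather than Cicada or C, purely for brevity), a single neuron’s update under this rule might look like the following; the function name and argument layout are ours, not part of the algorithm:

```python
import math

def update_activity(i, x, w, b):
    # One update of neuron i under the rule above:
    #   x[i] <-- 1 / (1 + exp(-sum_j w[i][j]*x[j] - b[i]))
    net = sum(w[i][j] * x[j] for j in range(len(x))) + b[i]
    return 1.0 / (1.0 + math.exp(-net))
```

Iterating this over all hidden neurons until the activities stop changing gives the steady state the next step assumes.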

Here wij is the connection weight from neuron j to neuron i, and bi is a bias. Once the network settles into a steady state, apply the following training rule to the weights and biases:


   wij  <--  wij + η · xi xj

    bi  <--  bi + η · xi

Next, repeat the process for the free phase of operation, in which the activities of the output neurons also update freely, using the same rule as the hidden neurons. After the network reaches a steady state, apply the same weight/bias adjustments with a negative sign (i.e. with η replaced by -η). The net effect is to strengthen correlated firing when the desired output is clamped and weaken correlated firing the network produces on its own, so learning is driven by the difference between what the network should compute and what it currently does compute. As the saying goes, neurons that fire together, wire together.

To use the network to recover a memory, run it in its ‘free’ mode without applying any training.
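Recall, then, is just the free-phase relaxation with no weight update: clamp the inputs and let every other neuron settle. A minimal sketch, assuming a fixed iteration count stands in for a real convergence test (the `free_idx` list of unclamped neuron indices is our invention):

```python
import math

def settle(x, w, b, free_idx, iters=200):
    # Repeatedly apply the activity-update rule to the unclamped
    # neurons until the network relaxes to a steady state.
    for _ in range(iters):
        for i in free_idx:
            net = sum(w[i][j] * x[j] for j in range(len(x))) + b[i]
            x[i] = 1.0 / (1.0 + math.exp(-net))
    return x
```

The stored association is then read off the output neurons’ settled activities.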


Prev: Summary    Next: C implementation


Last update: November 12, 2025