
Writing and debugging a Cicada wrapper

Once inside Cicada, we can run our neural network C function by typing


    > $runNetwork(...)
   

The main danger here is that we might pass the wrong datatypes, or arrays of the wrong sizes, into our C code, which will then promptly crash. To do things properly, let’s write a Cicada class that stores the neural network data and provides methods for initializing, running and training the network.
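Concretely, the wrapper should let us type calls of the following form (the argument names here are just descriptive placeholders, echoing the usage messages we will build in):


        myNN.init(inputs, outputs, hiddenNeurons)
        myNN.process(input, stepSize)                               | run the network
        myNN.process(input, targetOutput, stepSize, learningRate)   | run and train it
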

This will be a learning exercise: it’s our first time scripting in Cicada, so there will probably be a few bugs.

NN.cicada


neural_network :: {
   
   numNeurons :: int
   numInputs :: numOutputs :: numHiddens
   
   weights :: [][] double
   activity :: [] double
   
   
   init :: {
       
       if trap( { numInputs, numOutputs, numHiddens } = args ) /= passed then (
           print("usage: myNN.init(inputs, outputs, hidden neurons)\n")
           return 1       )
       
       numNeurons = numInputs + numOutputs + numHiddens + 1
       
       activity[^numNeurons]
       weights[^numNeurons][^numNeurons]
   }
   
   
   process :: {
       
       numArgs :: rtrn :: int
       step_size :: learning_rate :: double
       inputs :: outputs :: [] double
       
       inputs[^1] = 1      // the 'bias' input
       
       
       code
       
       numArgs = top(args)
       
       if trap(
           inputs[^numInputs + 1]
           inputs[<2, numInputs+1>] = args[1][]
           if numArgs == 4 then (
               outputs[^numOutputs]
               outputs[<1, numOutputs>] = args[2][]
               { step_size, learning_rate } = { args[3], args[4] } )
           else if numArgs == 2 then &
               step_size = args[2]
           else throw(1)
       ) /= passed then (
           print("usage: myNN.process(input, step_size OR ",
                 "input, target output, step_size, learning_rate)\n")
           return 1       )
       
       if numArgs == 2 then &
           rtrn = $runNetwork(weights, activity, inputs, step_size)
       else &
           rtrn = $runNetwork(weights, activity, inputs, step_size, outputs, learning_rate)
       
       if rtrn == 1 then print("process() did not converge; try lowering step size?\n")
   }
   
   
   init(0, 0, 0)
}

Save NN.cicada in the same directory as our NN program. Then, from Cicada’s command prompt, let’s try out our new wrapper by typing:


    > run("NN")
   
    Error: left-hand argument expected in file NN
   
    29:         inputs[^1] = 1      // the 'bias' input
                                      ^
   

What we see here is a ‘compile-time’ error (i.e. Cicada failed to produce bytecode). Evidently we wrote a C-style comment ‘//’ in place of a Cicada comment ‘|’. Make the straightforward fix in NN.cicada:


        inputs[^1] = 1      | the 'bias' input
   

and try again.


    > run("NN")
   
    Error: member 'numHiddens' not found in file NN
   
    4:     numInputs :: numOutputs :: numHiddens
                                      ^
   

This is progress: at least NN.cicada is syntactically correct. Line 4 tried to define numInputs and numOutputs to be of type numHiddens, rather than defining all three variables as type int, so let’s fix that:


       numInputs :: numOutputs :: numHiddens :: int
   

and re-run our script.


    > run("NN")
   
    usage: myNN.init(inputs, outputs, hidden neurons)
   

This time the script successfully ‘compiled’ and ran, although init() printed its usage message even though the init(0, 0, 0) call at the bottom of the class passes the right number of arguments. But at least a neural_network object was constructed, so we can start looking around inside it, using the command prompt as a sort of debugger. init() is suspicious, so let’s see whether it works when we run it by hand.


    > neural_network.init(3, 4, 5)
   
   
    >
   

So far so good(?). There should now be 13 neurons in our network: the 3 inputs, 4 outputs and 5 hidden neurons, plus the 1 ‘bias’ neuron.


    > neural_network.activity
   
    { }
   

So something is definitely wrong. To take a better look around, let’s ‘go’ inside our network.


    > go(neural_network)
   
   
    > weights
   
    { }
   
   
    > numNeurons
   
    0
   
   
    > go()
   

The last line takes us back to our ‘root’ workspace.
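Incidentally, go() is just a convenience: we could have read the same members from the root workspace with the dotted notation used earlier. For example, this shows the same (wrong) value we just saw:


    > neural_network.numNeurons
   
    0
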

So our init() call was a dud -- nothing happened. Our next step might be to put a trace statement in the code section of the init() function... hmm, wherever that is... It turns out we forgot the code marker that separates a function’s variable definitions from its executable code. Without it, the entire body belongs to the definition section, which runs when init() is defined rather than when it is called -- which explains both the stray usage message earlier and why calling init() now does nothing. The init() method should begin:


       init :: {
           
           code
           
           if trap( { numInputs, numOutputs, numHiddens } = args ) /= passed then (
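For reference, here is the whole corrected method -- the original init() body unchanged except for the code marker (init() defines no variables of its own, so the marker goes right at the top):


        init :: {
            
            code
            
            if trap( { numInputs, numOutputs, numHiddens } = args ) /= passed then (
                print("usage: myNN.init(inputs, outputs, hidden neurons)\n")
                return 1       )
            
            numNeurons = numInputs + numOutputs + numHiddens + 1
            
            activity[^numNeurons]
            weights[^numNeurons][^numNeurons]
        }
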
   

Making that final change, let’s go back and try


    > run("NN"), neural_network.init(3, 4, 5)
   
   
    > neural_network.activity
   
    { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
   

Finally we see what we were hoping for: an array of neurons, initialized to a resting state and ready to begin training.
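As a parting sketch, here is one way we might exercise both calling modes of process() from the command prompt. This assumes that myNN :: neural_network creates a working copy of our class; the input values, step size (0.1) and learning rate (0.05) are arbitrary placeholders:


        myNN :: neural_network
        myNN.init(3, 4, 5)                  | 3 inputs, 4 outputs, 5 hidden neurons
        
        sampleInput :: targetOutput :: [] double
        sampleInput[^3], targetOutput[^4]
        sampleInput[1] = 0.2, sampleInput[2] = 0.7, sampleInput[3] = 0.4
        targetOutput[1] = 1                 | the other target elements stay 0
        
        myNN.process(sampleInput, 0.1)                          | run only
        myNN.process(sampleInput, targetOutput, 0.1, 0.05)      | run and train


The first call just runs the network on sampleInput; the second also adjusts the weights toward targetOutput.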




Last update: November 12, 2025