
Writing and debugging a Cicada wrapper

Let’s try our hand at writing a Cicada script to wrap around our C routine. It’s our first time scripting, so we will probably have a few bugs.

NN.cicada


neural_network :: {
   
   numNeurons :: int
   numInputs :: numOutputs :: numHiddens
   
   weights :: [] [] double
   activity :: [] double
   
   
   init :: {
       
       if trap( { numInputs, numOutputs, numHiddens } = args ) /= passed then (
           print("usage: myNN.init(inputs, outputs, hidden neurons)\n")
           return 1       )
       
       numNeurons = numInputs + numOutputs + numHiddens + 1
       
       activity[^numNeurons]
       weights[^0][^0]       | to speed up the resize
       weights[^numNeurons][^numNeurons]
       
       return
   }
   
   
   run :: {
       
       numArgs :: rtrn :: int
       step_size :: learning_rate :: double
       inputs :: outputs :: [] double
       
       inputs[^1] = 1      // the 'bias' input
       
       
       code
       
       numArgs = top(args)
       
       if trap(
           inputs[^numInputs + 1]
           inputs[<2, numInputs+1>] = args[1][*]
           if numArgs == 4 then (
               outputs[^numOutputs]
               outputs[<1, numOutputs>] = args[2][*]
               { step_size, learning_rate } = { args[3], args[4] } )
           else if numArgs == 2 then &
               step_size = args[2]
           else throw(1)
       ) /= passed then (
           print("usage: myNN.run(input, step_size OR ",
                 "input, target output, step_size, learning_rate)\n")
           return 1       )
       
       if numArgs == 2 then &
           rtrn = $RunNetwork(weights, activity, inputs, step_size)
       else &
           rtrn = $RunNetwork(weights, activity, inputs, step_size, outputs, learning_rate)
       
       if rtrn == 1 then print("run() did not converge; try lowering step size?\n")
       
       return
   }
   
   
   init(0, 0, 0)
}

We should save NN.cicada in the same directory that contains the cicada application, start.cicada, and user.cicada. start.cicada runs the command prompt, and user.cicada pre-loads a number of useful functions.

Assuming we are at Cicada’s command prompt, we can try out our new wrapper by typing:


    > run("NN")
   
    Error: left-hand argument expected in file NN
   
    32:         inputs[^1] = 1      // the 'bias' input
                                      ^
   

Hmm... we’re obviously not finished with NN.cicada yet. Fortunately, compile-time errors like the one above are usually easy to sort out. In our case we accidentally wrote a C-style comment ‘//’ in place of a Cicada comment ‘|’, which Cicada interpreted as a pair of division signs. Make the straightforward fix to NN.cicada


           inputs[^1] = 1      | the 'bias' input
   

and try again.


    > run("NN")
   
    Error: member 'numHiddens' not found in file NN
   
    4:     numInputs :: numOutputs :: numHiddens
                                      ^
   

We have made progress: NN.cicada successfully ‘compiled’ and began running --- before impaling itself on line 4 and duly filing a complaint. There’s no debugger, but we can often troubleshoot runtime crashes using the command prompt.


    > neural_network.numHiddens
   
    Error: member 'numHiddens' not found
   

With a little knowledge of the scripting language, we would see that we tried to define numInputs and numOutputs to be of type numHiddens -- which would be allowed, except that numHiddens itself was never defined. What we meant was to define all three of these variables to be of type int. Make the following correction to line 4:


       numInputs :: numOutputs :: numHiddens :: int
   

and re-run our script.


    > run("NN")
   
    usage: myNN.init(inputs, outputs, hidden neurons)
   

This time the script successfully ‘compiled’ and ran -- or at least did something unexpected: it printed a usage message even though we never tried to initialize a neural network. Let’s see whether init() works when we actually mean to run it.


    > neural_network.init(3, 4, 5)
   
   
    >
   

So far so good(?). There should now be 13 neurons in our network: 3 inputs, 4 outputs, 5 hidden neurons, plus the ‘bias’ neuron.


    > neural_network.activity
   
    { }
   

The activity array is empty, so something is definitely wrong. At this stage we might want to look at a number of other variables that should have been set, and the easiest way to do that is to ‘go’ inside our network.


    > go(neural_network)
   
   
    > weights
   
    { }
   
   
    > numNeurons
   
    0
   
   
    > go()
   

The last line takes us back to our normal workspace. So our init() call was a dud -- nothing happened. Let’s put a trace statement in the coding section of the init() function.. wherever that is... ho ho, well, we forgot the code marker, which explains why there isn’t any coding section. That presumably also explains the premature usage message: without a code marker the entire body of init() belongs to the declaration section, which runs when the function is defined rather than when it is called -- so the trap fired at load time (no arguments yet), while our later init(3, 4, 5) call executed an empty coding section. The code marker should go at the beginning of init(); the method should now begin:


       init :: {
           
           code
           
           if trap( { numInputs, numOutputs, numHiddens } = args ) /= passed then (
   

etc. For the last time, go back and try


    > run("NN"), neural_network.init(3, 4, 5)
   
   
    > neural_network.activity
   
    { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
   

Finally we see what we were hoping for: an array of neurons, initialized to a resting state and ready to start processing.
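For reference, here is the repaired init() method in full. The comment and type fixes from earlier live outside init(), so the only change here relative to our first draft is the code marker:


    init :: {
        
        code
        
        if trap( { numInputs, numOutputs, numHiddens } = args ) /= passed then (
            print("usage: myNN.init(inputs, outputs, hidden neurons)\n")
            return 1       )
        
        numNeurons = numInputs + numOutputs + numHiddens + 1
        
        activity[^numNeurons]
        weights[^0][^0]       | to speed up the resize
        weights[^numNeurons][^numNeurons]
        
        return
    }
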




Last update: May 8, 2024