Notice that our C code doesn’t allocate/deallocate memory, load or prepare the data sets, save the results, or do any of the other miscellaneous operations that don’t need to happen thousands of times per second. We will script the remainder of our program, not only because Cicada automates most of these housekeeping functions but also because scripted functions can be controlled from the command line.
We bring our C functions into Cicada by referencing them in one of Cicada's source files, userfn.c. First, make sure the C compiler knows about our neural network routines. It's sloppy, but the easiest way to do this is to put NN.c and NN.h into the Cicada directory, and then add the following line to userfn.c.
...
// #include any user-defined header files here
#include "NN.c"
...
Next, we need to tell Cicada about our C routine by adding it to the UserFunctions[] array in userfn.c. Each entry provides both a name (the name scripts will use to call the function) and a function address.
userFunction UserFunctions[] = { { "pass2nums", &pass2nums }, { "cicada", &runCicada },
                                 { "RunNetwork", &runNetwork } };
Now we need to flesh out the ‘set up data types’ comment in runNetwork(). Cicada provides a handy function, getArgs(), for reading data from argv, which (as you might expect) is an array of pointers to the variables and arrays passed to the C code. Since this memory is shared between the two environments, our C function can also send data back to the script by writing to these variables. Cicada also passes a list of array types and sizes at the end of argv[]. Putting it all together, we add the following lines of code at the beginning of runNetwork() (i.e. in place of the first comment in NN.c).
arg_info *argInfo = (arg_info *) argv[argc];     // array type/size info follows the last argument

myNN.numNeurons = argInfo[1].argIndices;         // number of elements in the activity array
numInputs = argInfo[2].argIndices;               // number of elements in the inputs array

getArgs(argc, argv, &myNN.weights, &myNN.activity, &inputs, byValue(&step_size), endArgs);

if (argc == 6) {                                 // the two extra arguments mean we are training
    numOutputs = argInfo[4].argIndices;          // number of elements in the target_outputs array
    getArgs(argc, argv, fromArg(4), &target_outputs, byValue(&learning_rate));
}
This code uses the arg_info data type, so we also need to add
#include "userfn.h"
at the beginning of NN.c.
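For concreteness, the top of NN.c might now look something like this (the ordering of our own headers is not critical):

// at the top of NN.c
#include "userfn.h"      // supplies the arg_info type
#include "NN.h"          // our own neural-network declarations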
Notice that we passed myNN.weights, myNN.activity and inputs by reference rather than by value. One reason is that they are arrays, and copying them would be time-consuming. The other reason is that the job of our routine is to modify activity and, if we are training our network, the weights array as well. On the other hand, step_size is not a pointer variable, so we passed its value using the byValue() function.
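To make the shared-memory point concrete, here is a hypothetical fragment (not part of the code above, and assuming activity is an array of doubles indexed by neuron): the pointers filled in by getArgs() alias Cicada's own storage, so anything we write through them is visible to the calling script as soon as the function returns.

// hypothetical illustration only: writes through the getArgs() pointers
// land directly in the calling script's arrays
for (int n = 0; n < myNN.numNeurons; n++)
    myNN.activity[n] = 0.;       // e.g. clear the activities before a forward pass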
We won’t try to save the results of our calculations in the C code. We can simply delete the “save results” comment line in NN.c.
The final step is to recompile Cicada with our source files. First, make sure all source and header files, including NN.c and NN.h, are in the same directory as ‘Makefile’; then go to that directory from the command prompt and type “make cicada CC=gcc” (case sensitive). (The ‘make’ tool has to be installed for this to work.) With luck, we’ll end up with an executable. To run it, type either ‘cicada’ or ‘./cicada’, depending on the system. We should see:
>
Once inside Cicada, we can call our neural network function by typing
$RunNetwork(...)
with the function’s arguments (the weights, activities, inputs and step size, plus the target outputs and learning rate when training) listed in place of the dots.
Our custom version of Cicada has fast, native neural network functionality, but it is hidden behind a clunky syntax. The next task is to write a Cicada class that bundles a neural network’s data with user-friendly functions that initialize, run and train that network.