Selecting a proper ANN module

Preparing training data

The training data set for the neural network has a special format. The data set here is an example for FANN training: the first row contains the number of training patterns (26), the number of inputs (100), and the number of outputs (26). The next row holds the 100 input values, and the row after it the 26 output (teaching) values; this pair of rows repeats 26 times. In this experiment, the data set was generated from the Arial font, letters A to Z, using 100 line receptors (for information on line receptors, please see Neural Network OCR). A 1 in the data file means that the receptor crosses the input pattern to be recognized, and a -1 means it does not. You can download it here: Media:arial-font.txt.
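As an illustrative sketch of that layout (the values below are made up and the rows are shortened with `...`; the real file has 100 values per input row and 26 per output row):

```text
26 100 26
 1 -1 -1  1 ... -1        <- 100 receptor values for 'A'
 1 -1 -1 -1 ... -1        <- 26 teaching values: 1 at position 0 ('A'), -1 elsewhere
-1  1 -1  1 ... -1        <- 100 receptor values for 'B'
-1  1 -1 -1 ... -1        <- 26 teaching values: 1 at position 1 ('B'), -1 elsewhere
...
```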

Image:line_receptors.png|Line receptors sample
Image:receptor_encoding.png|Line receptor encoding

ANN training performance comparison

The above data set is applied to two ANN engines as input for a speed comparison (for other ANN engines and reviews, please visit our Neural Network page). Both experiments were performed on the same PC, configured with two Intel 2.4GHz Xeon CPUs, 2GB of memory, and a 160GB hard disk. Both ANN networks are designed with 100 inputs, one hidden layer of 125 nodes, and 26 outputs. Each input represents one receptor, and each output represents one of the recognized characters A-Z (26). The output with the highest value (in the range -1 to 1) is accepted as the recognized character.

  • AForge ANN Engine written in C#: Learning rate 0.7, Error limit 0.01, Time 7.51 seconds
  • FANN written in C: Learning rate 0.7, Error limit 0.01, Time 0.828 second.
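The winner-take-all output decoding described above can be sketched in plain C (`recognized_char` is our own helper name for illustration; both engines simply return the 26-element output array):

```c
/* Winner-take-all output decoding: each of the 26 output nodes
 * corresponds to one letter A-Z, and the node with the highest
 * activation (roughly in the range -1 to 1) wins. */
char recognized_char(const float outputs[26])
{
    int best = 0;
    for (int i = 1; i < 26; i++) {
        if (outputs[i] > outputs[best])
            best = i;
    }
    return (char)('A' + best);
}
```

So an output vector whose strongest node is at index 7 decodes to 'H', regardless of how weak the other activations are.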


Training result analysis

FANN shows very fast convergence during learning, at least for our given data set. Its behavior does not deviate much across different learning rates, which suggests that the complexity, i.e. the number of dimensions required to separate the input data sets, is relatively low.


A trained FANN network is reusable for future computation. It is available in two formats: a floating-point net and a fixed (integer) net.

A fixed net is a network that uses integer computation, either for machines that do not support floating-point arithmetic (such as the iPAQ) or to gain some advantage from fast integer computing. For this, the data set must also be in an integer format, which you can download here:
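Saving both formats can be sketched with the FANN 2.x C API as follows (the file names `ocr_float.net` and `ocr_fixed.net` and the training limits are our own choices, not from the experiment above):

```c
#include "fann.h"

int main(void)
{
    /* Recreate the topology used above: 100 inputs, one hidden
     * layer of 125 nodes, 26 outputs. */
    struct fann *ann = fann_create_standard(3, 100, 125, 26);
    fann_set_learning_rate(ann, 0.7f);

    /* Train until the error drops below 0.01 (at most 1000 epochs,
     * reporting every 10); arial-font.txt is the data set above. */
    fann_train_on_file(ann, "arial-font.txt", 1000, 10, 0.01f);

    /* Floating-point net, reloadable with fann_create_from_file(). */
    fann_save(ann, "ocr_float.net");

    /* Fixed (integer) net for machines without floating-point
     * support; the return value is the decimal point position
     * chosen for the weights. */
    fann_save_to_fixed(ann, "ocr_fixed.net");

    fann_destroy(ann);
    return 0;
}
```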

The actual test output for the trained FANN network is given here: Media:fann_reconnition_result.txt. Comparing this result with the intermediate output of the ANN trained for only 35 iterations gives a clear picture of how the ANN adapts its weights to the data domain: Media:fann_reconnition_result_after_35_iteration.txt.

The data set for the above two charts is available here: Media:learning_curve_05252006.xls

FANN bug fix

The current FANN 2.1 engine (as of 07/06/2006) has several bugs that prevent it from compiling as a library for other programs. There are errors in its header definitions and in several source files. A fixed version is available for download here: FANN 2.1.0 VS6.0 bug fixed