Weekly reports


During this week, I collected traffic signs on the web and finally found a very nice, complete database maintained by Richard C. Moeur, who is a voting member of the National Committee on Uniform Traffic Control Devices (NCUTCD). I zipped his database and uploaded it to my site for your information. You can download the database at Construct the traffic sign database. Please be advised that Moeur has placed a copyright notice limiting use to recreational or study purposes.

Within those numerous signs (336 samples), you can see all types of warning signs in one place – Select a number of sign candidates for a priori algorithm testing. It is now easy to see that our problem is not limited to predefined structured pattern recognition but also includes OCR. We should discuss further whether to treat the characters located inside the warning frames as computer-generated fonts or simply as special line features. In any case, I also found a fonts database that can be used to generate traffic signs as we want (check http://www.triskele.com/fonts/). This font set may be used for OCR training.

In addition, several general-purpose neural network computing packages were reviewed and tested – Neural Network. However, it should be noted that we may need to devise a specially designed neural network, which means we would have to develop the neural computing code ourselves.

Besides, I am collecting various information related to traffic signs, and this TSPR (Traffic Sign Pattern Recognition) project Wiki is growing rapidly. Please take a look at the main page and follow the links there. If you have any questions or suggestions, just send me an email, or you can edit it directly. This Wiki engine provides a document versioning system, so any modification can be fully recovered when necessary.

This week, I would like to discuss several points with the members, including:

  • The various types of traffic signs.
  • Classifying the research strategies into a number of steps.
  • Defining the goal and scope of this project more specifically.


A new image pattern template correlation measure is introduced, named the feature correlation graph (FCG). One factor motivating this visual measure is that we are handling a rather large set of disparate traffic signs, so even when we choose a strong image abstraction feature, we cannot be sure that the feature will separate the whole set of target traffic signs clearly. To address this, I developed the new visual correlation measure, the FCG, and used it to compare several image abstraction features. The FCG provides a holistic view of the performance of the abstracted image features at a glance and, in addition, helps users understand the complexity of the target patterns intuitively. Details can be found at Choose proper image abstractions or features for sign recognition.
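As a rough illustration of the idea behind the FCG (the exact definition is on the wiki page; the correlation measure and threshold below are assumptions for this sketch), pairs of signs whose abstracted feature vectors remain highly correlated are exactly the pairs a given feature may fail to separate:

```python
import math

def correlation(u, v):
    """Pearson correlation between two equal-length feature vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def feature_correlation_graph(features, threshold=0.9):
    """Build graph edges between sign pairs whose abstracted features
    are highly correlated, i.e. pairs the feature may not separate.
    `features` maps a sign name to its feature vector."""
    names = sorted(features)
    edges = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if correlation(features[a], features[b]) >= threshold:
                edges.append((a, b))
    return edges
```

A sparse graph then suggests the feature discriminates the sign set well, while dense clusters of edges reveal groups of signs that the feature confuses.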


This week’s work focused on finding a proper ANN module for traffic sign recognition. I reviewed several open ANN packages and finally compared the performance of two candidates: the AForge ANN Engine and FANN. FANN shows superior performance compared to AForge in terms of computing speed and even the quality of the trained results. I manually generated the line receptors, which were introduced in the Neural Network OCR project, for font recognition. Detailed results and analysis are described at Develop traffic sign recognition modules.


This week, I spent some time on traffic sign extraction from natural scene captures. The goal is to classify the foreground and background areas of a given traffic sign image; we can then apply the line receptors only to the foreground areas of the traffic sign for recognition and, before that, for training.

Based on Dr. Wang’s traffic sign detection demonstration last week, I expect the sign detection result I will get to be in a rectangular 2-D image format. However, actual traffic signs come in various shapes, including triangles, rectangles, pentagons, and octagons, so the result will likely include areas that are not part of the traffic sign. I therefore implemented closed convex polygon detection algorithms to extract the correct boundary of the traffic sign, if one exists. Intermediate experimental results applied to natural scene captures are available at Develop traffic sign recognition modules, along with brief explanations and source code.
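The core test inside a closed convex polygon detector can be sketched as follows (a minimal version, assuming the contour is already reduced to an ordered vertex list; the actual implementation is in the PolygonDetector source):

```python
def is_closed_convex(polygon):
    """Check that a closed polygon (list of (x, y) vertices in order)
    is convex: the cross products of all consecutive edge pairs must
    share the same sign."""
    n = len(polygon)
    if n < 3:
        return False
    sign = 0
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        cx, cy = polygon[(i + 2) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False  # turn direction flipped: concave
    return sign != 0
```

Vertex counts then map naturally onto the sign shapes mentioned above: 3 for triangles, 4 for rectangles, 5 for pentagons, 8 for octagons.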


This week, I am developing ANN algorithms using line receptors for the images extracted by the polygon detector that I presented last week. To see the progress, please check the TSPR Receptor Recognition modules. I expect to show the first preliminary results soon.

Besides, I am still wondering whether you are interested in using the Codejock Windows interface libraries. Since I did not receive any reply on this matter, I completely redesigned the Polygon detector user interface to remove the Codejock library dependency. The newly revised Polygon detector is available at Download.

My libraries are getting bigger and more diverse, so I started using Doxygen (thanks, Dr. Wang) to generate source documentation. Please check:
http://www.pilhokim.com/project/signpattern/signwiki/doxygen/tspr/hierarchy.html
http://www.pilhokim.com/project/signpattern/signwiki/doxygen/polygondetector/classCPolygonDetectorDlg.html

Currently, I am developing the libraries in two parts: computation-only DLLs and user interfaces. The TSPR DLL handles the computation, and all other user interface libraries, including PolygonDetector, provide the user interface on top of tspr.dll.

As I mentioned before, I am using the OpenCV library and the FANN neural computation modules, and you need both to compile my code. The total size, including my code, libraries, and the traffic sign database, is about 550 MB right now. I will remove some redundant data and intermediate object files so everything is ready for compilation, and submit it to Dr. Wang so that he can experiment with the code easily.


This week, I designed the preliminary algorithms for ANN training and recognition. I only uploaded pictures here: Traffic sign pattern ANN training algorithm and Traffic sign pattern recognition algorithm.

The first preliminary binarization method has been implemented. It treats the dominant color in the sign pattern as background (0) and all other colors as foreground (1). The result is shown at the following link, with a description of the limitations of this binarization method: Extension for the traffic sign recognition
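The dominant-color rule can be sketched in a few lines (a minimal version assuming the image is a 2-D list of RGB tuples; the actual implementation works on OpenCV images):

```python
from collections import Counter

def binarize_by_dominant_color(pixels):
    """Binarize a sign image: the dominant (most frequent) color
    becomes background (0) and every other color foreground (1).
    `pixels` is a 2-D list of (r, g, b) tuples."""
    counts = Counter(p for row in pixels for p in row)
    dominant, _ = counts.most_common(1)[0]
    return [[0 if p == dominant else 1 for p in row] for row in pixels]
```

For example, on a red sign with a black legend, the red pixels become 0 and the legend pixels become 1, which is exactly the input the line receptors expect.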

I will first produce ANN training and recognition results using this binarized data set, and will then extend the categorization method using more features such as boundary shape and additional color information.

In the following weeks, I will develop the program based on the above algorithms, and early results will be shown in Extension for the traffic sign recognition.

In addition, I summarized the PolygonDetector algorithm for future reference at: Traffic sign feature extraction


After many trials and errors, I implemented the line receptor algorithms, including line receptor creation, duplicate removal, and filtering using entropy maximization. There were several logical errors in the sample code I found on the web, which made the implementation take longer. My algorithms are faster and more accurate than the sample code. I then ran immediate experiments on a small set of traffic signs. The results and analysis are summarized here: Extension for the traffic sign recognition#Small set test and analysis
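For reference, the creation and duplicate-removal steps can be sketched as below (a minimal version under my own simplifying assumptions: a receptor is a random line segment over the image, and it fires when the segment crosses any foreground pixel; the entropy-maximization filter, which keeps the receptors whose states vary most across the training set, is omitted here):

```python
import random

def receptor_state(image, p0, p1, samples=16):
    """A line receptor fires (1) if the segment p0-p1, with points
    given as (x, y), crosses any foreground (1) pixel of a binary
    image, else 0."""
    h, w = len(image), len(image[0])
    for k in range(samples + 1):
        t = k / samples
        x = int(round(p0[0] + t * (p1[0] - p0[0])))
        y = int(round(p0[1] + t * (p1[1] - p0[1])))
        if 0 <= y < h and 0 <= x < w and image[y][x]:
            return 1
    return 0

def make_receptors(width, height, count, seed=0):
    """Create random line receptors, removing duplicates (the same
    pair of endpoints in either order counts as a duplicate)."""
    rng = random.Random(seed)
    seen, receptors = set(), []
    while len(receptors) < count:
        p0 = (rng.randrange(width), rng.randrange(height))
        p1 = (rng.randrange(width), rng.randrange(height))
        key = frozenset((p0, p1))
        if p0 != p1 and key not in seen:
            seen.add(key)
            receptors.append((p0, p1))
    return receptors
```

The vector of receptor states over a sign image is then what is fed to the ANN as its input pattern.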

With the small set of traffic signs, our algorithm works quite well, with a zero error rate and fast learning convergence.

However, in the following weeks we will see whether our algorithms work well for the full set (328 types of traffic signs) and, if necessary, further extend the functionality to employ more traffic sign features. This week, I had a conference call with Dr. Wang.


This week, I applied the line receptor algorithm to the full set of traffic sign patterns. The ANN training results are summarized at Extension for the traffic sign recognition. Although training takes much longer (571 seconds) than for the small set (0.28 seconds) on my laptop (P4 2.66 GHz, 512 MB), this is quite acceptable in my opinion.

One thing that deviated from my expectation is that the training result is better than I thought it would be. The ANN trains very well, still showing zero error across all 325 input patterns. However, the input patterns used in this experiment are computer-generated synthetic images with very clear colors and boundaries. So it is time to move to real traffic signs captured on the road for ANN testing and, if necessary, more training.


This week, I am handling several things at the same time, continuing to develop the algorithm while searching related work.

First, a traffic sign recognition user interface named Traffic Sign Recognizer (TSR) is under development, since we now have the trained ANN and need an interface to test its performance. The new GUI for traffic sign recognition is being built on top of the polygon detection algorithm that I introduced at Traffic sign feature extraction. Evidently, my polygon detection algorithm is not a perfect solution, so I am adding a manual region selection function. You can see a screenshot showing the manual region assignment procedure for extracting the ROI from the real scene image: Develop traffic sign recognition modules

In the following weeks, TSR will be extended with several additional features:

  1. Extract the ROI within the bounded rectangle
  2. Compute the line receptor states from the captured natural scene
  3. Run ANN recognition
  4. Display the recognized traffic sign with related information, e.g. sign ID, meaning, class, and type

Besides, last week I used FANN, the ANN engine we are currently using, as a stand-alone application, and so did not notice a big problem that I found this week: their main library is written in C, so we need a function-export definition header file to import their library into our application. However, there were several problems in their header definitions, and also in a number of source files, when compiling together with other MFC applications. It took some time to modify their source code, but it is finally working very well. We can now compile the FANN library with any program developed in Visual C++ 6.0. You can download the fixed FANN engine (~12 MB) here: Selecting a proper ANN module#FANN bug fix

I did a survey on the ANN training issue that Prof. Tsai has been concerned about. You may remember that I once introduced adaptive training, which adds nodes and layers incrementally during training. I found that this technique is now called cascade training. I have collected some references and will read and apply the technique in the following weeks. I put a short introduction to cascade training here: Extension for the traffic sign recognition#Adaptive training to determine the proper number of layers and nodes

Finally, I read Liu and Ran’s paper on stop sign recognition that Prof. Tsai sent me. In summary, they detect the stop sign in a given image using color segmentation in HSV color space, then check the object width, aspect ratio, and symmetry level to filter the stop sign region candidates. The extracted ROI image, resized to 30x30 pixels, is fed directly into an ANN configured with 30x30 (900) input nodes, one hidden layer with 6 hidden neurons, and 2 output neurons indicating “stop sign” or “not”. Their training curve shows the typical back-propagation convergence. Overall, their experiment is simple and not especially creative compared to other works.
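The first stage of their pipeline, red segmentation in HSV space, can be sketched as follows (the hue/saturation/value thresholds below are my own illustrative assumptions, not the values from their paper):

```python
import colorsys

def is_stop_sign_red(r, g, b):
    """Rough HSV test for stop-sign red on 0-255 RGB values.
    Thresholds are illustrative assumptions."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # Red hue wraps around 0, so accept hues near 0 or near 1;
    # also require reasonable saturation and brightness.
    return (h <= 0.05 or h >= 0.95) and s >= 0.5 and v >= 0.2

def red_mask(pixels):
    """Mark candidate stop-sign pixels in an RGB image (2-D list of
    (r, g, b) tuples) with 1, everything else with 0."""
    return [[1 if is_stop_sign_red(*p) else 0 for p in row]
            for row in pixels]
```

The connected red regions in such a mask are then what their width, aspect-ratio, and symmetry checks would filter before the ANN stage.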

So I am searching for and reading more references, and collecting them here: Traffic sign recognition papers

I will post reviews of each paper to Reviews on TSPR related projects.


This week, the traffic sign extraction algorithm was implemented to get a traffic sign from polygons detected automatically or manually. Please see Traffic sign extraction. In the following weeks, I will finally feed the above output to the neural network to get recognition results.

Besides, I did some review of neural networks and found several coursework materials. These are summarized with short reviews at Neural Network.


This week, I developed the traffic sign recognition (TSR) interface, which is finally working as shown in Traffic sign recognition test.

The TSR interface has a number of features:

  1. Automatic and manual polygon detection and extraction algorithms are embedded.
  2. Colors are decomposed into the traffic sign colors (8 colors).
  3. Binarization using the dominant color.
  4. Running the TSR neural network for recognition.
  5. Displaying the recognition result with its reliability and the database traffic sign image.
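The color decomposition step can be sketched as a nearest-color mapping (the RGB anchor values below are hypothetical placeholders for the eight sign colors, not the chart values actually used in the program):

```python
# Hypothetical RGB anchors for the eight traffic sign colors;
# the real color chart values differ.
SIGN_COLORS = {
    "red": (200, 16, 46), "white": (255, 255, 255), "black": (0, 0, 0),
    "yellow": (255, 205, 0), "orange": (255, 130, 0),
    "green": (0, 107, 84), "blue": (0, 67, 123), "brown": (97, 56, 47),
}

def nearest_sign_color(pixel):
    """Map an RGB pixel to the closest of the eight traffic sign
    colors by squared Euclidean distance in RGB space."""
    return min(SIGN_COLORS, key=lambda name: sum(
        (c - p) ** 2 for c, p in zip(SIGN_COLORS[name], pixel)))
```

After this decomposition, the dominant-color binarization and the neural network run on a clean 8-color image rather than on raw camera colors.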

The above features exactly implement the algorithm illustrated at Traffic sign pattern recognition algorithm.

This means we have now finished the first round of development and can see the preliminary performance of our algorithms. To further extend and enhance the algorithms, I got traffic sign sample images from Dr. Wang last week. From now on, I will work on enhancing recognition performance by analyzing real traffic scenes.

Besides, I read a number of papers 1 and added reviews at Reviews on TSPR related projects. Among those, I strongly recommend you read two papers. They will help us understand the exact problem sets in traffic sign recognition.

Reviews on the road sign recognition compiled up to 1999 works 2

Reviews on the road sign recognition compiled up to 1995 works 3


This week, I am continuing to test the recognition module to improve its recognition capability for real traffic scenes. The preliminary analysis and future analysis topics are summarized at Result analysis for the further improvement.

Besides, yesterday I picked up the GPS sensor and developed a library to read data from it over serial communication. I think this library may be helpful to the group, so I have opened it at GPS reader.
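The essence of such a reader is parsing the NMEA sentences the sensor emits over the serial line. A minimal sketch for the common GGA fix sentence (this is not my library's actual code, just an illustration of the parsing step; the serial I/O itself is omitted):

```python
def parse_gpgga(sentence):
    """Parse an NMEA $GPGGA sentence into (latitude, longitude) in
    decimal degrees, or None if the sentence carries no fix."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA") or fields[2] == "":
        return None

    def to_degrees(value, hemisphere):
        # NMEA encodes coordinates as (d)ddmm.mmmm: the two digits
        # before the decimal point are minutes, the rest are degrees.
        dot = value.index(".")
        degrees = float(value[:dot - 2])
        minutes = float(value[dot - 2:])
        deg = degrees + minutes / 60.0
        return -deg if hemisphere in ("S", "W") else deg

    lat = to_degrees(fields[2], fields[3])
    lon = to_degrees(fields[4], fields[5])
    return lat, lon
```

Each line read from the serial port would be passed through a parser like this to obtain the vehicle position for tagging captured scenes.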


This week, I reviewed and studied color spaces to find the proper one for color decomposition in traffic sign content analysis. This is based on last week’s analysis result (please read Result analysis for the further improvement) that entropy-maximized line receptors applied directly to the raw traffic signs are not good enough for classification.

In traffic signs, the colors used in both the background and the legend are very important for classifying traffic sign categories. However, existing approaches merely mention that they used a specific color space, without justifying the choice with test results on real traffic sign scenes. I think this experiment is important and should be done to find the proper color space specifically for traffic sign color decomposition and extraction from real scenes. I also found an error in my previous color decomposition approach: the color information supplied at 1 does not match the real colors used in printing the traffic signs. Hence, in the following weeks I will survey and analyze this topic; intermediate progress is summarized here: Choosing a proper color space


Using the CodeJock Xtreme library, I am developing the MUTCD traffic sign coder software. (A current screen copy is available here, but it is not fully implemented yet; specifically, the MUTCD code select window will be attached on the right side of the screen: Download.)

Regarding the MUTCD code select interface, there was a problem with the MUTCD code information I collected before. While summarizing the MUTCD codes to construct the code select interface, I found that many codes are missing because they do not exist at the web site from which I automatically collected the MUTCD sign sample images.

So I am reading through the whole MUTCD document again and have decided to manually extract images directly from the document. This new MUTCD code database will be enriched with more details on each traffic sign, such as its shape, background color, legend color, etc. It will take a few days to rebuild the database, after which I will link it to the software to display sign information.


It took a very long time to reconstruct the MUTCD code database. Please see Construct the traffic sign database. However, we now have a much more complete set of codes with very valuable context for each sign, such as its shape, colors, and even the text marked on the sign. I just finished implementing the code database user interface. To see the progress of the MUTCD Traffic Sign Coder and what I am working on or struggling to solve, please see MUTCD Traffic Sign Coder.

To start manual sign tagging, I still need to do several more things, as specified in the To-do list on the above page. It cannot be done today, and I am sorry for that, but I will try to make it happen this week. Again, please read MUTCD Traffic Sign Coder and send me any opinions you have.


I prepared a report on MUTCD code statistics based on their appearance features; please look at Construct the traffic sign database. I hope this analysis helps us reduce the problem space in recognizing traffic signs and find the exact target of the neural network training.

In addition, the screenshots of the MUTCD Traffic Sign Coder have been updated, along with its current status and future To-do list.


Thanks for your patience. Things that were messed up during the move are finally falling into place. I will have a full Internet connection at home this Saturday.

Let me catch up on the work. I downloaded Ibrahima’s most recent work today (thank you, Ibrahima!) and prepared its statistics. Please check Construct the traffic sign database.

During our last meeting, we discussed choosing a small set of signs for line receptor recognition based on the correlation graph. However, I found that the signs for which line receptors show the most distinctive features are mostly limited to special cases with text embedded inside. Since we will not tackle OCR in the first stage of recognition, limiting the target traffic signs to such categories does not make much sense right now (though it will be attempted later). So let us first collect data broadly, and afterwards do post-filtering based on the ANN performance for each sign type.

Besides, I updated the MUTCD Traffic Sign Coder program to:

  1. Add an Initialize button that resets the keywords and options in the Search MUTCD codebook window to their initial state.
  2. Allow the user to move between options in the Search MUTCD codebook window using the TAB key (and Shift-TAB to move back).
  3. Make it possible to embed the database within the program, so we would not need to install the MySQL database separately. I embedded the MySQL source code into the MUTCD Traffic Sign Coder program. However, I need to check its stability further, and the data query interface still needs to be developed, so it is not enabled yet.

The new screen shot is available at MUTCD Traffic Sign Coder.

Ibrahima, please find the attached zip file containing the new program. You can simply uncompress the files into the existing directory, but PLEASE back up all of the existing executable files (exe and dll) before you overwrite them. It is VERY IMPORTANT to back up before installing any updates, since there could be problems I did not anticipate. Back up the executable files first, then overwrite them with the zipped files. If the new program does not work or displays any error messages, please send me the details. If the zipped file is not encoded properly, the same version is available at Download. By the way, Ibrahima, please feel free to raise any complaints to improve this software.

From now on, I will start preparing the training data set for line receptor state vector computation for ANN training. It will focus on 4 traffic signs (R2-1, R1-1, W1-2, and W13-1), each of which has more than 100 samples.


This week I focused on analyzing the MUTCD color distribution for each MUTCD color. This is necessary to binarize the traffic sign before applying the line receptor algorithm. Since we are now handling real traffic signs, we have to know the color distribution of the background colors to binarize correctly and improve line receptor recognition. The method chosen to collect the color distribution (i.e., the actual RGB values of each specific color in the real traffic sign images) is to select the color region in the traffic sign image manually using a picture tool and use it as a mask to extract the actual RGB values from the input image. Current statistics up to 09/15/2006 follow. Please also see Choosing a proper color space.

  • Manually color-tagged files: 801
  • MUTCD codes: R1-1, R1-2, R2-1
  • Tagged colors (counts), (distinct RGB values):
    • White (598583), (87252)
    • Red (196528), (67804)
    • Black (47085), (16126)
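The per-color counting behind these statistics can be sketched as follows (a minimal version assuming the image is a 2-D list of RGB tuples and each manual selection is a binary mask of the same size):

```python
from collections import Counter

def tagged_color_stats(image, masks):
    """For each MUTCD color name, given a binary mask of manually
    selected pixels, return the total pixel count and the number of
    distinct RGB values observed under that mask."""
    stats = {}
    for name, mask in masks.items():
        counts = Counter(
            image[y][x]
            for y, row in enumerate(mask)
            for x, on in enumerate(row) if on)
        stats[name] = {"count": sum(counts.values()),
                       "distinct_rgb": len(counts)}
    return stats
```

Run over the 801 tagged files, this is what yields pairs like White (598583 pixels, 87252 distinct RGB values) in the table above.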

Besides, Ibrahima collected more traffic signs and we updated the database. Please see Construct the traffic sign database for the most recent database statistics.


My presentation today will be based on the attached PowerPoint file. It includes a rough summary of the current project progress, but it will be enough to convey the current status. You are highly encouraged to ask any questions during the meeting about future progress. Also, please keep the attached material confidential until formal publication.



This week, I extended the data set for traffic sign recognition. Our recognition network now recognizes 33 types of traffic signs with a 91.64% success ratio. At this time, only a brief summary is available in the attached file; the details will be written up next week.



I just finished ANN training and testing including the newly added data set. Our ANN can now recognize 62 different types of signs with an 82.98% success ratio. As expected, the recognition ratio dropped compared to last week’s result, which recognized 33 types of traffic codes with a 91.14% ratio.

As we extend the set of target traffic signs to recognize, the recognition ratio will drop further. For instance, some of the target sign sets newly added this week have very similar texture and features to existing ones, differing only in background color.

So it has become very clear to me that we need to utilize more image features to prevent the recognition ratio from dropping as the set of target signs grows. I am now modifying the MUTCD pseudo-color mapping algorithm to consider neighboring pixels, both to find the best matching color and to form closed regions. I hope this will help us locate the sign in the raw image and improve the color-coded line receptor algorithm.
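The neighborhood idea can be sketched as a simple majority vote over each pixel's 3x3 window (an illustrative sketch of the principle, not the actual modification I am implementing):

```python
from collections import Counter

def smooth_color_labels(labels):
    """Relabel each pixel to the majority label in its 3x3
    neighborhood (including itself), suppressing isolated
    mismatched pixels and encouraging closed single-color regions."""
    h, w = len(labels), len(labels[0])
    out = [row[:] for row in labels]
    for y in range(h):
        for x in range(w):
            votes = Counter(
                labels[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2)))
            out[y][x] = votes.most_common(1)[0][0]
    return out
```

A lone misclassified pixel inside a uniformly colored sign region is thus absorbed into the surrounding color, which is exactly the closed-region behavior described above.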

The attached Excel file includes two sheets with this week’s and last week’s results.

The output of the ANN is ordered by confidence level to recommend a number of candidates for the sign to be recognized. A summary of the hit ratio with multiple candidates, out of 1187 total samples, is:

  • 1 candidate: 985 (82.98%)
  • 5 candidates: 1066 (89.81%)
  • 10 candidates: 1089 (91.74%)
  • 20 candidates: 1120 (94.36%)
  • 57 candidates: 1187 (100.00%)
  • 62 candidates: 1187 (100.00%)
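The hit ratios above are computed as follows (a minimal sketch assuming each sample's ANN outputs are already sorted by descending confidence):

```python
def hit_ratio_at_k(ranked_outputs, true_labels, k):
    """Fraction of samples whose true sign appears among the top-k
    ANN output candidates (outputs ordered by confidence)."""
    hits = sum(1 for ranked, truth in zip(ranked_outputs, true_labels)
               if truth in ranked[:k])
    return hits / len(true_labels)
```

For example, k=1 gives the strict single-answer recognition ratio (82.98% above), while larger k measures how often the correct sign at least appears in the recommendation list.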

Please see the updated Excel file for details. It now has two more sheets, including a chart showing the recognition ratio with an increasing number of candidates, and the ordered ANN outputs. Regarding limitations, let us talk during the meeting.


I prepared a short report on ICIP 2006, held here in Atlanta over the last 4 days. I hope my report gives you a brief glimpse of the fields related to our research. Though I tried to make it compact, it turned into a somewhat lengthy report: TSPR:ICIP 2006 related works review.

P.S. If you want to print my report, please remember that my site always has a “Printable version” link in the left “toolbox” menu.