Professor:
Student:
  • Boxuan Wu
File:
Start:
  • 14.11.2018
End:
  • 12.12.2018
Convolutional Neural Networks (CNNs) are currently the most effective tool for classifying image content. Over the past few years, this area of research has brought great progress to image classification. Early breakthroughs included the reliable classification of handwritten postal ZIP codes and, later, the recognition of faces and license plates. Current state-of-the-art applications use powerful real-time-capable networks that detect multiple classes in images, for example pedestrians, vehicles, obstacles, and traffic signs. A CNN is rated by its overall ability to classify its input. Common CNNs use JPEG-compressed images with RGB values as input, but PNG and BMP are also widely used. However, different color representations emphasize different characteristics of the underlying image data, and a suitable color space might therefore increase the overall classification performance of convolutional neural networks.

Mr. Wu is presented with the task of acquiring and arranging datasets represented in different color spaces, such as RGB, YUV, YCbCr, HSV, and HSL. The datasets should also comprise training and test images in which the illumination of similarly colored objects differs. These datasets should then be used to train a state-of-the-art CNN, and its classification results should be compared. Depending on the results, several preprocessing methods, such as histogram equalization, should then be applied and tested (a sketch of such a preprocessing step is given below). The results should give a good understanding of which color space representations and preprocessing methods can help increase the performance of convolutional neural networks. The theory on why different color spaces yield different results should also be presented. The current state of the art shall be determined through a literature review. The thesis includes a well-documented presentation of the results; any source code created shall include sufficient annotation.
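As an illustration of the kind of preprocessing the task involves, the following is a minimal sketch using OpenCV, assuming input images are loaded as 8-bit BGR arrays; the helper name convert_and_equalize and the choice of library are assumptions for illustration only, not the toolchain prescribed by the task.

```python
import cv2
import numpy as np


def convert_and_equalize(image_bgr: np.ndarray) -> dict:
    """Sketch: convert an 8-bit BGR image into several color spaces
    and apply histogram equalization to one luminance channel.
    Not the final pipeline of the thesis, only an illustration."""
    spaces = {
        "RGB":   cv2.COLOR_BGR2RGB,
        "YUV":   cv2.COLOR_BGR2YUV,
        "YCbCr": cv2.COLOR_BGR2YCrCb,  # OpenCV uses the YCrCb channel order
        "HSV":   cv2.COLOR_BGR2HSV,
        "HSL":   cv2.COLOR_BGR2HLS,    # OpenCV names this space HLS
    }
    converted = {name: cv2.cvtColor(image_bgr, code) for name, code in spaces.items()}

    # Example preprocessing step: equalize the histogram of the
    # luminance (Y) channel in the YUV representation.
    yuv = converted["YUV"].copy()
    yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0])
    converted["YUV_equalized"] = yuv
    return converted
```

Converted arrays such as these could then be fed to the chosen CNN so that classification results for the individual color spaces, with and without equalization, can be compared.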