Area II - Machine Vision

Recognition technology based on camera images has a wide range of applications: a camera can be deployed almost anywhere, and images carry a great deal of information. Several algorithm families exist, such as classification, detection, and segmentation, and they serve various uses such as recognition and detection of faces, gestures, and objects. In the past, features were first extracted from the image and a separate algorithm was trained to classify or analyze those features to obtain the desired result. Deep learning, which has become popular in recent years, has shown strong performance by including the feature extraction process in the learning process itself. Our laboratory is developing image recognition algorithms optimized for road conditions and indoor robots.

Detection using Deep-Learning

Recently, deep learning has shown unmatched performance in many fields. Among deep learning algorithms, the Convolutional Neural Network (CNN) is optimized for images and can be used to perform various machine vision tasks. Detecting the type and location of an object in an image is called detection. A representative CNN-based detection algorithm is Faster R-CNN (Faster Regions with CNN). This algorithm performs detection by first finding candidate object positions using a CNN and then classifying the type of object at each proposed position. Our laboratory is optimizing a CNN-based detection system to fit the driving situation.
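One concrete piece of this two-stage pipeline is non-maximum suppression (NMS), the post-processing step that reduces many overlapping scored proposals to one box per object. The boxes and scores below are made-up values, and this plain NumPy sketch is an illustration of the standard technique, not the lab's implementation:

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union between one box and an array of boxes (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop proposals that overlap it too much, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(int(best))
        rest = order[1:]
        order = rest[iou(boxes[best], boxes[rest]) < iou_thresh]
    return keep

# Three proposals: two overlap the same object, one covers another object.
boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # → [0, 2] — the duplicate proposal is suppressed
```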

- Vehicle detection using Deep-Learning -

Segmentation using Deep-Learning

Segmentation, which labels objects on a pixel basis, is difficult to perform and has a correspondingly wide range of applications. Recently, with the introduction of deep learning algorithms, segmentation has also achieved high performance. A representative CNN-based segmentation algorithm is the Fully Convolutional Network (FCN). In our laboratory, we are developing a system that distinguishes objects on a pixel-by-pixel basis and identifies roads and obstacles by using deep learning segmentation together with a fusion of existing segmentation algorithms and CNNs.
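An FCN produces a score map with one channel per class at every pixel, and the segmentation is simply the per-pixel argmax over those channels. The scores below are random stand-ins for a network's output, and the three class names are made up for illustration:

```python
import numpy as np

# Hypothetical FCN output: class scores of shape (num_classes, H, W).
# Pretend channel 0 = ground, 1 = pedestrian, 2 = wall.
rng = np.random.default_rng(0)
scores = rng.random((3, 4, 4))        # stand-in for the network's final layer

# Per-pixel class decision: each pixel takes the class with the highest score.
label_map = scores.argmax(axis=0)

print(label_map.shape)  # (4, 4) — one class label per pixel
```

A real FCN also upsamples its coarse score map back to the input resolution before this argmax, but the per-pixel decision rule is the same.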

- Indoor segmentation using Deep-Learning -
(Blue: ground, Red: pedestrian, Green: wall)

Pedestrian Detection and Tracking

Detecting objects in images and videos is one of the fundamental tasks of pattern recognition and computer vision. Pedestrian detection is regarded as one of the most difficult problems in object detection due to the varied appearance and pose of the human body. Research on pedestrian detection has focused on extracting effective features and developing powerful learning classifiers. Haar-like wavelets and HOG (Histogram of Oriented Gradients) are effective features for pedestrian detection and are used with learning classifiers such as SVM and AdaBoost.
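The core of HOG is a magnitude-weighted histogram of gradient orientations computed per cell. The sketch below shows that single step for one cell; a full HOG descriptor additionally uses overlapping cells and block normalization, so this is only an illustration of the idea:

```python
import numpy as np

def cell_hog(patch, n_bins=9):
    """Orientation histogram of gradients for one cell — the core step of HOG.
    A real HOG adds block normalization and overlapping cells; this is a sketch."""
    gy, gx = np.gradient(patch.astype(float))          # row- and column-wise gradients
    mag = np.hypot(gx, gy)                             # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180         # unsigned orientation, 0..180°
    bins = np.minimum((ang / (180 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m                                   # magnitude-weighted vote
    return hist

# A vertical edge has a strong horizontal gradient, so the energy lands
# in the first (0-degree) orientation bin.
patch = np.tile([0, 0, 0, 0, 255, 255, 255, 255], (8, 1))
print(cell_hog(patch).argmax())  # → 0
```

These per-cell histograms, concatenated over the detection window, are the feature vector handed to the SVM or AdaBoost classifier.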

- Example of pedestrian detection -

Vehicle Detection and Tracking

Robust vehicle detection is the first step in developing automotive driver assistance systems that protect the driver from possible collisions. Vehicle detection is very challenging due to the variety in shape, size, and color of vehicles. Using Gabor filters to extract vehicle features is one of the effective methods for vehicle detection because they provide a mechanism for obtaining orientation- and scale-tunable edge responses. Classification is then performed using an SVM.
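A Gabor filter is a Gaussian envelope multiplied by a sinusoidal carrier; the orientation parameter theta and the wavelength lambda are what make it "orientation and scale tunable". The following NumPy sketch builds the real part of such a kernel from that standard formula (parameter values are arbitrary, not tuned for vehicles):

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma, gamma=0.5):
    """Real part of a Gabor filter: a Gaussian-windowed cosine.
    theta tunes the edge orientation, lam (wavelength) tunes the scale."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam)
    return envelope * carrier

# A 9x9 kernel sensitive to vertical structure (theta = 0).
k = gabor_kernel(size=9, theta=0.0, lam=4.0, sigma=2.0)
print(k.shape)      # (9, 9)
print(k[4, 4])      # 1.0 — peak response at the kernel center
```

In a detection pipeline, a bank of such kernels at several orientations and wavelengths is convolved with candidate windows, and the filter responses form the feature vector for the SVM.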

- Example of vehicle detection -

On Road Object Detection System using Sensor Fusion

Sensors such as cameras and radars are used to detect objects. Vision-based object detection requires heavy computation, but it can exploit features rich in information. Radar-based object detection takes less time but produces less accurate results than a camera. Much research therefore focuses on fusing the two sensors for object detection. There are three levels of sensor fusion. In high-level fusion, the final detection result is produced by combining the results from each sensor. In intermediate-level fusion, the radar provides regions of interest, and the image is then used to extract features and classify them. In low-level fusion, both camera and radar are used to generate the regions of interest and to classify them.
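High-level fusion can be sketched as nearest-neighbor association of each sensor's finished detections: detections that match within a distance gate are merged with a boosted confidence, the rest pass through unchanged. The detections, gate size, and confidence rule below are all made-up illustrative choices:

```python
import math

# Hypothetical detections: ((x, y) position in metres, confidence score).
camera = [((10.0, 2.0), 0.9), ((25.0, -1.0), 0.6)]
radar  = [((10.4, 2.1), 0.8), ((40.0, 0.0), 0.7)]

def fuse_high_level(cam, rad, gate=1.0):
    """High-level fusion sketch: associate detections across sensors within a
    distance gate; matched pairs are merged, unmatched ones pass through."""
    fused, used = [], set()
    for cp, cs in cam:
        best, best_d = None, gate
        for j, (rp, rs) in enumerate(rad):
            d = math.dist(cp, rp)
            if j not in used and d < best_d:
                best, best_d = j, d
        if best is not None:
            rp, rs = rad[best]
            used.add(best)
            mid = ((cp[0] + rp[0]) / 2, (cp[1] + rp[1]) / 2)
            fused.append((mid, 1 - (1 - cs) * (1 - rs)))  # confirmed by both sensors
        else:
            fused.append((cp, cs))                        # camera-only detection
    fused += [d for j, d in enumerate(rad) if j not in used]  # radar-only
    return fused

out = fuse_high_level(camera, radar)
print(len(out))  # → 3: one fused track, one camera-only, one radar-only
```

Intermediate-level fusion would instead pass the radar positions into the image pipeline as regions of interest before any classification happens.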

- Comparison by fusion level -

Gait Recognition for Sensor Fusion

Biometrics have come to occupy an important role in human identification due to their uniqueness. Face recognition systems perform well with frontal views at high resolution under good lighting conditions. Current iris recognition systems are designed to work when the subject is placed relatively close to the imaging system. While these systems perform well, they are restricted to controlled environments or require the cooperation of the subject.

A possible alternative that resolves this problem is gait, the walking style of an individual. Medical studies have shown that gait, with all its components considered, is a unique signature of a person. Gait is a non-intrusive biometric that can be captured by cameras placed at a distance, and it can even be attempted in night-time conditions using an infrared camera. In addition, there are many potential application areas, such as visual surveillance, access control, and human identification.

- Examples of the silhouette images in a gait cycle -

The aim of this work is to synthesize high-quality side views from other views (including the frontal view), to extract new features for the gait biometric, and to fuse gait with other biometrics such as face, sound, etc. We expect gait recognition to become a prerequisite step for a multimodal biometric system.
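A widely used silhouette-based gait feature (one standard option, not necessarily the feature developed here) is the Gait Energy Image: the mean of the aligned binary silhouettes over one gait cycle. The toy silhouettes below are made up for illustration:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Gait Energy Image (GEI): average the aligned binary silhouettes of one
    gait cycle into a single grayscale template used as a biometric feature."""
    stack = np.stack([s.astype(float) for s in silhouettes])
    return stack.mean(axis=0)

# Toy 'cycle': three 4x3 binary silhouettes (1 = person pixel).
cycle = [
    np.array([[0, 1, 0], [0, 1, 0], [1, 1, 1], [1, 0, 1]]),
    np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0], [1, 0, 1]]),
    np.array([[0, 1, 0], [0, 1, 0], [1, 1, 1], [0, 1, 0]]),
]
gei = gait_energy_image(cycle)
print(gei[0, 1])  # → 1.0 — the head/torso pixel is on in every frame
```

Pixels that stay bright across the cycle capture the static body shape, while intermediate gray values capture the swinging limbs, and this single template can then be matched or fused with other biometrics.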


Copyright (c) 2006 Computational Intelligence Lab. All rights reserved.
TEL: +82-2123-2863 / E-mail:
