Open Access System for Information Sharing


Thesis

Vision-based Extraction and Integration of Driver’s Face and Forward Vehicle Environment for Intelligent Driver Assistance System

Authors
최현철
Date Issued
2011
Publisher
Pohang University of Science and Technology (POSTECH)
Abstract
The purpose of a driver assistance system (DAS) is to enhance vehicle safety and driver convenience by providing useful information about the vehicle and the traffic environment, or by assisting with longitudinal and lateral control. Some DASs use vision techniques such as lane and vehicle detection and driver face detection and recognition, and are close to commercialization as automotive hardware and software have improved. However, problems remain in applying them to real driving situations, because the sensor information about the driver's state and the driving environment is occasionally inaccurate or inadequate. This limitation stems not only from the sensors themselves but also from the limited information used in the DAS. Another problem is that the large amount of information produced by a DAS can overburden the driver rather than provide a comfortable driving environment; an adaptive driver safety system is therefore needed to reduce bothersome warnings.

To address these two problems, this dissertation proposes an intelligent driver assistance system (IDAS) that monitors both the driver's state and the forward vehicle environment using computer vision techniques, and suppresses useless warnings when the driver is already paying attention. The proposed IDAS consists of a robust driver head monitoring system, a fast forward-lane monitoring system, a fast forward-vehicle monitoring system, and a simple warning scheme.

The first part is a robust driver head monitoring system based on the active appearance model (AAM). Conventional AAM tracking uses a first-order Taylor approximation of a nonlinear error metric to find the optimal solution. In real driving situations, however, the driver's head moves quickly and over a wide range, and the small convergence range of conventional AAM tracking cannot cope with this. To solve this problem, an efficient second-order Taylor approximation of the error metric is proposed.
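The abstract gives no code, but the underlying optimization idea can be illustrated on a toy problem. The sketch below is an assumption for illustration only, not the thesis's AAM update: it minimizes a scalar error metric E(p) = r(p)^2 with an artificial residual r(p) = p^3 - 8, comparing a first-order (Gauss-Newton style) step, which linearizes r, with a step that also keeps the second-order curvature term r * r'' of the error metric.

```python
import numpy as np

# toy scalar error metric E(p) = r(p)^2, standing in for the AAM
# appearance error; the minimum is at p = 2 (illustrative residual only)
def r(p):   return p**3 - 8.0
def dr(p):  return 3.0 * p**2
def d2r(p): return 6.0 * p

def step_first_order(p):
    # Gauss-Newton style update: first-order Taylor approximation of r
    return p - r(p) / dr(p)

def step_second_order(p):
    # also keep the r * r'' curvature term of the error metric
    return p - dr(p) * r(p) / (dr(p)**2 + r(p) * d2r(p))

p1 = p2 = 5.0   # start far from the optimum, like a large head motion
for _ in range(30):
    p1 = step_first_order(p1)
    p2 = step_second_order(p2)
print(round(p1, 6), round(p2, 6))  # both converge to 2.0
```

Both iterations reach the minimizer here; the thesis's point is that in the real AAM setting the second-order approximation enlarges the convergence range for large head motions.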
In addition, a simple planar model of the driver's forward view and a simple heading-direction geometry are used to recognize which region the driver is paying attention to. The improved convergence range of the AAM, its moderately fast computation, and the accurate recognition of the attention region were verified experimentally on real head image sequences.

Because varying illumination, vertical vibration, and other challenging conditions occur frequently while driving, it is not easy to estimate exact lane equations. Therefore, without loss of computational efficiency, a simple lane-candidate detection by template matching, a RANSAC plus Kalman filtering scheme for lane tracking followed by rule-based filtering of lane detections, and a state-transfer technique are introduced as a fast and robust algorithm for these challenging situations. The proposed lane detection and tracking algorithm was successfully tested on real driving image sequences captured at several times of day and under several road conditions.

The third part of the proposed IDAS is the vehicle detection and tracking module. The positions and scales of forward vehicles in image coordinates are detected by an Adaboost classifier and refined by edge and shadow filters. After the detection step, the detected vehicles are tracked to preserve their identities. Gradient-descent tracking based on the image gradient is a popular object tracking method, but it easily fails when the illumination changes. Although several illumination-invariant features have been proposed, applying an invariant feature to the gradient-descent method is not easy, because the invariant feature is a nonlinear function of image pixel values whose Jacobian cannot be calculated in closed form. To make this possible, a generalized hyperplane approximation technique is introduced and applied to the histogram of oriented gradients (HOG) feature, one of the well-known illumination-invariant features.
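The RANSAC-plus-Kalman lane pipeline described above can be sketched on synthetic data. This is a minimal illustration under assumed simplifications, not the thesis's algorithm: the lane is modeled as a straight line y = a*x + b, candidate points include clutter, RANSAC finds a consensus line per frame, and a static Kalman filter smooths the line parameters across frames.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic lane-candidate points: true lane y = 0.5*x + 10, plus clutter
x_in = rng.uniform(0, 100, 80)
y_in = 0.5 * x_in + 10 + rng.normal(0, 0.3, 80)
x_out = rng.uniform(0, 100, 20)          # outliers from shadows, markings, etc.
y_out = rng.uniform(0, 60, 20)
xs = np.concatenate([x_in, x_out]); ys = np.concatenate([y_in, y_out])

def ransac_line(xs, ys, iters=200, thresh=1.0):
    best = np.zeros(len(xs), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(xs), 2, replace=False)
        if xs[i] == xs[j]:
            continue
        a = (ys[j] - ys[i]) / (xs[j] - xs[i]); b = ys[i] - a * xs[i]
        inliers = np.abs(ys - (a * xs + b)) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    # least-squares refit on the consensus set
    A = np.stack([xs[best], np.ones(best.sum())], axis=1)
    a, b = np.linalg.lstsq(A, ys[best], rcond=None)[0]
    return a, b

# Kalman filtering of the line parameters (a, b) across frames, static model
state = np.zeros(2); P = np.eye(2) * 10.0
Q = np.eye(2) * 1e-4; R = np.eye(2) * 0.05
for _ in range(5):                        # pretend 5 frames observe this lane
    z = np.array(ransac_line(xs, ys))     # per-frame RANSAC measurement
    P = P + Q                             # predict (identity motion model)
    K = P @ np.linalg.inv(P + R)          # Kalman gain
    state = state + K @ (z - state)       # update
    P = (np.eye(2) - K) @ P
print(np.round(state, 2))                 # ≈ [0.5, 10]
```

The thesis fits lane equations rather than a single line and adds rule-based filtering and state transfer, but the outlier-rejection-then-smoothing structure is the same.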
In addition, invariance to partial occlusion is achieved by using image segments. The hyperplanes are calculated from training segment images obtained by perturbing the motion parameters around the target region, and are then used to map the difference in the nonlinear image feature onto an increment of the alignment parameters; this is mathematically equivalent to the gradient-descent method. The information from the segments is integrated by a simple weighted linear combination using segment confidence weights. Compared with previous tracking algorithms, the proposed method shows very fast and stable tracking results in experiments on several image sequences.

The last part is a warning scheme that reduces useless warnings invoked even when the driver is paying attention to the situation. A simple rule-based decision, taking the driver's state and the forward vehicle environment as its condition and the type of warning as its outcome, was experimentally verified to reduce useless warnings in real driving situations.
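The hyperplane approximation idea — learn a linear map from feature differences to parameter increments by perturbing the alignment parameter during training — can be shown in one dimension. This is a toy stand-in, not the thesis's HOG-based tracker: the "feature" is the raw template itself, the motion parameter is an integer shift, and the hyperplane is fit by least squares.

```python
import numpy as np

# toy template: a smooth bump standing in for a patch feature vector
x = np.arange(50)
template = np.exp(-0.5 * ((x - 25) / 3.0) ** 2)

def shifted(dp):
    # observation displaced by dp pixels (the motion parameter)
    return np.roll(template, dp)

# training: perturb the parameter and record the feature differences
shifts = np.arange(-3, 4)
D = np.stack([shifted(dp) - template for dp in shifts])   # (7, 50)
y = shifts.astype(float)

# hyperplane: linear map from feature difference to parameter increment
w, *_ = np.linalg.lstsq(D, y, rcond=None)

# at run time, one matrix-vector product recovers the displacement
obs = shifted(2)
pred = (obs - template) @ w
print(round(float(pred), 3))   # ≈ 2
```

This avoids differentiating the feature at run time, which is exactly why the technique applies to features like HOG whose Jacobian has no closed form.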
The purpose of a driver assistance system is to improve the driver's convenience and safety by providing useful information about the vehicle and traffic conditions or by assisting the driver's control of the vehicle. Driver assistance systems using vision techniques such as lane and vehicle detection or driver face detection and recognition are very close to commercialization as hardware and software have advanced. However, because the sensor information about the driver and the driving environment is inaccurate and insufficient, applying them to real driving environments remains problematic. In addition, the large amount of information provided by a driver assistance system can itself interfere with safe driving, so a driver-adaptive safety system is needed.

To address these problems, this dissertation proposes an intelligent driver assistance system that covers both driver monitoring and forward-environment monitoring. The proposed system consists of four parts: driver face monitoring; lane detection and tracking; vehicle detection and tracking; and a simple integrated warning method. Driver face monitoring combines Adaboost detection with tracking based on a statistical face model, the active appearance model (AAM). By replacing the first-order Taylor approximation of conventional AAM tracking with a second-order approximation, the accuracy of 3-D head pose estimation is improved while real-time performance is maintained. In addition, the area in front of the driver's seat is divided into four regions, and the intersection with the driver's gaze direction determines the driver's attention region. Lane detection and tracking combines very simple template matching with RANSAC and Kalman filtering, achieving robustness to various lighting conditions and driving vibration with very fast computation. Vehicle detection achieves fast detection with an Adaboost detector and edge and shadow refinement, and fast tracking robust to illumination change and partial occlusion is implemented by learning hyperplanes of the histogram of oriented gradients (HOG) feature. Finally, to reduce unnecessary warnings, an efficient rule-based warning method is proposed that considers both the driver's attention region and the state of the road ahead.
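The rule-based warning decision described in both abstracts can be sketched as a small lookup. The region and event names below are illustrative assumptions, not the thesis's vocabulary; the point is only the structure: a warning is suppressed when the driver's attention region already covers the hazard.

```python
# hypothetical mapping from hazard event to the gaze region that
# makes a warning redundant (names are illustrative only)
ATTENTION_FOR_EVENT = {
    "forward_vehicle_close": "front",
    "lane_departure_left":   "left",
    "lane_departure_right":  "right",
}

def decide_warning(event, gaze_region):
    """Suppress a warning when the driver already attends to the hazard region."""
    required = ATTENTION_FOR_EVENT.get(event)
    if required is None:
        return None                # unknown event: no rule fires
    if gaze_region == required:
        return None                # driver is attending: stay quiet
    return f"warn:{event}"

print(decide_warning("forward_vehicle_close", "front"))    # attending -> None
print(decide_warning("forward_vehicle_close", "console"))  # distracted -> warn
```

In the thesis this decision combines the AAM-based attention region with the lane and vehicle monitoring outputs; the sketch keeps only the suppression logic.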
URI
http://postech.dcollection.net/jsp/common/DcLoOrgPer.jsp?sItemId=000000897355
https://oasis.postech.ac.kr/handle/2014.oak/1017
Article Type
Thesis
Files in This Item:
There are no files associated with this item.


