Open Access System for Information Sharing

Article
Cited 110 times in Web of Science; cited 133 times in Scopus


Title
Robust human activity recognition from depth video using spatiotemporal multi-fused features
Authors
Jalal, A; Kim, YH; Kim, YJ; Kamal, S; Kim, D
Date Issued
Jan-2017
Publisher
ELSEVIER SCI LTD
Abstract
Recently developed depth imaging technologies provide new directions for human activity recognition (HAR) without attaching optical markers or other motion sensors to human body parts. In this paper, we propose novel multi-fused features for an online HAR system that recognizes human activities from continuous sequences of depth maps. The proposed online HAR system segments human depth silhouettes using temporal human motion information and obtains human skeleton joints using spatiotemporal human body information. It then extracts spatiotemporal multi-fused features that concatenate four skeleton joint features and one body shape feature. The skeleton joint features comprise the torso-based distance feature (DT), the key joint-based distance feature (DK), the spatiotemporal magnitude feature (M), and the spatiotemporal directional angle feature (theta). The body shape feature, called HOG-DDS, represents the projections of the depth differential silhouettes (DDS) between two consecutive frames onto three orthogonal planes in the histogram of oriented gradients (HOG) format. The dimensionality of the proposed spatiotemporal multi-fused feature is reduced by mapping it to a code vector in a codebook generated by vector quantization. The system then trains hidden Markov models (HMMs) on the code vectors of the multi-fused features and recognizes each segmented human activity by a forward spotting scheme using the trained HMM-based activity classifiers. Experimental results on three challenging depth video datasets, IM-Daily-DepthActivity, MSRAction3D, and MSRDailyActivity3D, demonstrate that the proposed online HAR method using the multi-fused features outperforms state-of-the-art HAR methods in recognition accuracy.
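The last two stages the abstract describes (vector-quantizing the fused feature vectors into codebook symbols, then classifying the symbol sequence with per-activity discrete HMMs) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the codebook, the HMM parameters, and the `classify` helper are assumed to come from a separate training step.

```python
import numpy as np

def quantize(features, codebook):
    """Map each row of a (T, D) feature array to the index of its nearest code vector."""
    # dists[t, k] = Euclidean distance from feature t to code vector k
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)  # (T,) sequence of codebook symbols

def forward_log_likelihood(symbols, pi, A, B):
    """Scaled forward algorithm for a discrete-observation HMM.
    pi: (N,) initial probabilities, A: (N, N) transitions, B: (N, K) emissions."""
    alpha = pi * B[:, symbols[0]]
    c = alpha.sum()
    log_lik = np.log(c)
    alpha = alpha / c  # rescale to avoid underflow on long sequences
    for s in symbols[1:]:
        alpha = (alpha @ A) * B[:, s]
        c = alpha.sum()
        log_lik += np.log(c)
        alpha = alpha / c
    return log_lik

def classify(features, codebook, activity_hmms):
    """Pick the activity whose trained HMM assigns the sequence the highest likelihood."""
    symbols = quantize(features, codebook)
    scores = {name: forward_log_likelihood(symbols, *hmm)
              for name, hmm in activity_hmms.items()}
    return max(scores, key=scores.get)
```

Recognition here simply takes the maximum-likelihood activity model; the forward spotting scheme in the paper additionally segments continuous video online, which this sketch omits.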
URI
http://oasis.postech.ac.kr/handle/2014.oak/37196
ISSN
0031-3203
Article Type
Article
Citation
PATTERN RECOGNITION, vol. 61, page. 295 - 308, 2017-01


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Dai Jin (김대진)
Dept. of Computer Science & Engineering
