Deep Learning Approach for Radar-based People Counting
Indexed in: SCIE, SCOPUS
- Title: Deep Learning Approach for Radar-based People Counting
- Authors: CHOI, JAE HO; KIM, JI EUN; KIM, KYUNG TAE
- Date Issued: 2022-05
- Publisher: Institute of Electrical and Electronics Engineers Inc.
- Abstract: With the development of deep learning (DL) frameworks in the field of pattern recognition, DL-based algorithms have outperformed handcrafted feature (HF)-based ones in various applications. However, several challenges remain in applying the DL framework to the radar-based people counting (RPC) task: the powerful representation capacity of a deep neural network (DNN) learns not only the desired human-induced components but also unwanted nuisance factors, and the data available for RPC is usually insufficient to train a large DNN, increasing the risk of overfitting. To tackle these problems, we propose novel solutions for the successful application of the DL framework to the RPC task from several perspectives. First, we formulate new preprocessing pipelines that transform the raw received radar echoes into a form better matched to a DNN. Second, we devise a novel backbone architecture that reflects the spatiotemporal characteristics of the radar signals while relieving the training burden through a parameter-efficient design. Finally, an unsupervised pre-training process and a newly defined loss function are proposed to further stabilize network convergence. Experimental results on real measured data show that the proposed scheme enables effective utilization of DL for RPC, achieving a significant performance improvement over conventional RPC methods.
- URI: https://oasis.postech.ac.kr/handle/2014.oak/110525
- DOI: 10.1109/JIOT.2021.3113671
- ISSN: 2327-4662
- Article Type: Article
- Citation: IEEE Internet of Things Journal, vol. 9, no. 10, pp. 7715-7730, May 2022