Open Access System for Information Sharing

Compressing DMA Engine: Leveraging Activation Sparsity for Training Deep Neural Networks

Title
Compressing DMA Engine: Leveraging Activation Sparsity for Training Deep Neural Networks
Authors
RHU, MINSOO; O'CONNOR, MIKE; CHATTERJEE, NILADRISH; POOL, JEFF; KWON, YOUNGEUN; KECKLER, STEPHEN
Date Issued
2018-02-26
Publisher
IEEE
Abstract
Popular deep learning frameworks require users to fine-tune their memory usage so that the training data of a deep neural network (DNN) fits within the GPU physical memory. Prior work tries to address this restriction by virtualizing the memory usage of DNNs, enabling both CPU and GPU memory to be utilized for memory allocations. Despite its merits, virtualizing memory can incur significant performance overheads when the time needed to copy data back and forth from CPU memory is higher than the latency to perform DNN computations. We introduce a high-performance virtualization strategy based on a "compressing DMA engine" (cDMA) that drastically reduces the size of the data structures that are targeted for CPU-side allocations. The cDMA engine offers an average 2.6x (maximum 13.8x) compression ratio by exploiting the sparsity inherent in offloaded data, improving the performance of virtualized DNNs by an average 53% (maximum 79%) when evaluated on an NVIDIA Titan Xp.
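The compression the abstract describes works because ReLU layers zero out a large fraction of activation values before they are offloaded to CPU memory. A minimal software sketch of one such scheme, zero-value compression (a bitmask of nonzero positions plus the packed nonzero values), is shown below; this illustrates the idea only, not the paper's hardware DMA engine, and all function names here are illustrative:

```python
import numpy as np

def zvc_compress(activations):
    """Zero-value compression sketch: store a presence bitmask
    plus only the nonzero values."""
    mask = activations != 0
    return np.packbits(mask), activations[mask], activations.shape

def zvc_decompress(packed_mask, values, shape):
    """Rebuild the dense tensor from the bitmask and nonzero values."""
    n = int(np.prod(shape))
    mask = np.unpackbits(packed_mask, count=n).astype(bool)
    out = np.zeros(n, dtype=values.dtype)
    out[mask] = values
    return out.reshape(shape)

# ReLU outputs are typically highly sparse: shift the distribution
# so most entries clamp to zero, as post-ReLU activations do.
rng = np.random.default_rng(0)
acts = np.maximum(rng.standard_normal((64, 64)).astype(np.float32) - 1.0, 0.0)

packed, vals, shape = zvc_compress(acts)
restored = zvc_decompress(packed, vals, shape)
assert np.array_equal(acts, restored)  # lossless round trip

orig_bytes = acts.nbytes
comp_bytes = packed.nbytes + vals.nbytes
print(f"compression ratio: {orig_bytes / comp_bytes:.2f}x")
```

The achievable ratio depends directly on how sparse the offloaded activations are, which is why the paper reports per-network averages rather than a single fixed figure.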
URI
https://oasis.postech.ac.kr/handle/2014.oak/42572
ISSN
2378-203X
Article Type
Conference
Citation
IEEE International Symposium on High-Performance Computer Architecture (HPCA), pp. 78-91, 2018-02-26
Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

RHU, MINSOO
Dept. of Computer Science & Engineering