Open Access System for Information Sharing

Title
A Split Device Driver Model for GPU Virtualization in the KVM/ARM Environment
Authors
박병수
Date Issued
2015
Publisher
Pohang University of Science and Technology (POSTECH)
Abstract
Mobile virtualization is currently a hot topic because of growing security concerns in Bring Your Own Device (BYOD) environments. Most workers use their mobile devices for work without any security policy enforced, so company secrets can be leaked by malicious code on a worker's device; companies therefore want to separate the business environment from the private-use environment. Virtualization is one of the promising ways to separate the two environments logically. Virtualizing the GPU on a mobile platform is quite challenging because of its proprietary device-driver architecture, yet GPU virtualization is essential in mobile virtualization because the GPU carries out core roles such as fast UI rendering and web Flash services.

There are two types of device virtualization techniques: full virtualization and para-virtualization. In this thesis, the GPU is virtualized by para-virtualization. The para-virtualized GPU driver consists of a frontend driver in the guest domain and a backend driver in the host domain; this kind of driver is called a split device driver. Depending on the communication method between the frontend and backend drivers, three device driver models are designed and implemented. Model 1 uses hypercall-based communication. Model 2 uses host-side polling to receive requests (e.g., file operations) from the guest domain and a virtual interrupt to send results back to the guest. Model 3 uses polling-based communication on both the guest and host sides.

Experimental evaluations show that Model 3 is the best choice when a guest VM runs only one GPU benchmark, or a lightweight GPU benchmark alongside CPU-intensive benchmarks. Model 2 is the best choice when a VM runs a heavy GPU benchmark alongside CPU-intensive benchmarks, or a GPU benchmark together with a latency-sensitive job; it is also the best choice from the standpoint of the CPU-intensive benchmarks themselves. Model 1 performs acceptably when the VM runs only one GPU benchmark, because the vCPU is stopped while a hypercall is being handled; for the same reason, Model 1 is a poor choice when the VM runs several benchmarks and one or more of them uses the GPU. In summary, Model 3 delivers the best GPU performance, but it yields the worst background-job performance because polling consumes many CPU cycles. In contrast, Model 2 gives the best results for background jobs because it requires no additional CPU cycles. To take advantage of both approaches, we plan to combine the polling approach with a hardware-based low-overhead communication mechanism such as IPI.
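
To make the split-driver data path concrete, the following is a minimal user-space C sketch of how a frontend/backend pair could exchange GPU requests over a shared single-producer/single-consumer ring, with comments marking where each of the three models would notify the other side (a hypercall kick for Model 1, host polling plus a virtual interrupt for Model 2, polling on both sides for Model 3). The structure names, opcodes, and ring layout are illustrative assumptions for exposition, not code from the thesis.

/*
 * Minimal user-space sketch of the split GPU driver data path, assuming a
 * single-producer/single-consumer request ring shared between the guest's
 * frontend driver and the host's backend driver.  All names are illustrative;
 * the thesis itself does not publish driver code on this page.
 */
#include <stdatomic.h>
#include <stdio.h>

#define RING_SLOTS 16

/* One para-virtualized GPU request, e.g. a forwarded file operation. */
struct gpu_request {
    unsigned int op;   /* opcode: open/ioctl/mmap on the real GPU device */
    unsigned int arg;  /* opcode-specific argument */
};

/* Shared ring: the frontend produces requests, the backend consumes them. */
struct shared_ring {
    struct gpu_request slots[RING_SLOTS];
    atomic_uint head;  /* next slot the frontend will fill  */
    atomic_uint tail;  /* next slot the backend will drain  */
};

/* Notification strategies corresponding to the three models in the thesis. */
enum notify_model {
    MODEL1_HYPERCALL,      /* guest traps to the host on every request        */
    MODEL2_HOST_POLL_IRQ,  /* host polls requests, injects a vIRQ for replies */
    MODEL3_BOTH_POLL       /* both sides spin on the shared ring              */
};

/* Frontend (guest side): publish a request and, for Model 1, kick the host. */
static void frontend_submit(struct shared_ring *ring, struct gpu_request req,
                            enum notify_model model)
{
    unsigned int head = atomic_load(&ring->head);

    ring->slots[head % RING_SLOTS] = req;
    atomic_store(&ring->head, head + 1);          /* make the request visible */

    if (model == MODEL1_HYPERCALL) {
        /* Real driver: issue a hypercall (HVC on ARM), which stops this vCPU
         * until the backend has handled the request. */
        printf("frontend: hypercall kick\n");
    }
    /* Models 2 and 3: no kick; the backend finds the request by polling. */
}

/* Backend (host side): drain requests and hand them to the real GPU driver. */
static void backend_poll(struct shared_ring *ring, enum notify_model model)
{
    while (atomic_load(&ring->tail) != atomic_load(&ring->head)) {
        unsigned int tail = atomic_load(&ring->tail);
        struct gpu_request req = ring->slots[tail % RING_SLOTS];

        printf("backend: op=%u arg=%u forwarded to the native GPU driver\n",
               req.op, req.arg);
        atomic_store(&ring->tail, tail + 1);

        if (model == MODEL2_HOST_POLL_IRQ) {
            /* Real driver: inject a virtual interrupt so the guest's frontend
             * wakes up and reads the result. */
            printf("backend: inject virtual interrupt into the guest\n");
        }
        /* Model 3: the guest polls for the result, so no interrupt is needed. */
    }
}

int main(void)
{
    struct shared_ring ring = { .head = 0, .tail = 0 };
    struct gpu_request req = { .op = 1, .arg = 42 };

    frontend_submit(&ring, req, MODEL2_HOST_POLL_IRQ);
    backend_poll(&ring, MODEL2_HOST_POLL_IRQ);
    return 0;
}

The trade-off discussed in the abstract shows up directly in this sketch: the Model 1 kick blocks the submitting vCPU, while the Model 2/3 paths keep the guest running but require someone to burn CPU cycles polling the ring.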
URI
http://postech.dcollection.net/jsp/common/DcLoOrgPer.jsp?sItemId=000001921544
https://oasis.postech.ac.kr/handle/2014.oak/93492
Article Type
Thesis
Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
