Open Access System for Information Sharing


Monte-Carlo-based Leakage Analysis on a GPGPU System

With the aggressive scaling of semiconductor process technology, the leakage power of VLSI designs keeps growing, so accurate leakage analysis is becoming increasingly important. Moreover, as process parameter variation increases, the need for variation-aware leakage analysis is being emphasized. The representative approach to variation-aware leakage analysis is the Monte-Carlo-based (MC-based) method, which provides accurate results. In this method, the process parameters are modeled as random variables, and an arbitrary parameter distribution is allowed. In typical cases, the parameter values follow a Gaussian distribution and are generated randomly from given mean and standard-deviation values. Although MC-based leakage estimation is accurate, its high computational complexity makes it almost impractical for state-of-the-art VLSI designs.

To overcome the high complexity of the MC-based approach, several analytic and statistical leakage estimation (ASLE) methods have been proposed. These methods approximate the distribution of the leakage current of a logic gate as lognormal and assume that the sum of two lognormal random variables is also lognormal. This assumption makes the estimation convenient but also makes the results inaccurate. In addition, only the first-order exponential-polynomial model is currently available to represent the leakage current; it is simple but tends to produce inaccurate results in many cases.

Some recent graphics processing units (GPUs) support parallel processing by launching a large number of threads that execute the same operation simultaneously, and all simulation runs of MC-based leakage analysis are independent operations.
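The basic MC flow described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the `gate_leakage` function and its coefficients are hypothetical stand-ins for a characterized leakage model, and only threshold voltage is varied.

```python
import math
import random
import statistics

# Illustrative subthreshold-leakage model: leakage grows roughly
# exponentially as threshold voltage (Vth) drops. The coefficients
# are hypothetical, not fitted to any real process.
def gate_leakage(vth):
    return 1e-9 * math.exp(-(vth - 0.3) / 0.04)  # amperes (illustrative)

def mc_leakage(n_gates, n_runs, vth_mean=0.3, vth_sigma=0.02, seed=0):
    """Estimate total circuit leakage by Monte-Carlo sampling: each run
    draws a Gaussian Vth for every gate and sums the gate leakages."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_runs):
        total = sum(gate_leakage(rng.gauss(vth_mean, vth_sigma))
                    for _ in range(n_gates))
        samples.append(total)
    return statistics.mean(samples), statistics.stdev(samples)

mean_leak, sd_leak = mc_leakage(n_gates=100, n_runs=1000)
```

Note that each of the `n_runs` iterations touches no state shared with any other iteration, which is what makes the method a candidate for parallel execution.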
The analysis can thus be sped up using the parallel processing capability of a GPU, and any leakage model can be used, preserving the benefits of the MC-based approach.

This thesis presents two leakage models based on piecewise polynomial interpolation: a piecewise linear (PWL) interpolation-based model and a cubic spline (CS) interpolation-based model. Both provide more accurate results than the existing first-order exponential-polynomial model.

This thesis also presents an implementation of the MC-based leakage estimation on a GPU. In the proposed method, each MC simulation run is assigned to a thread, and the leakage current of the system is estimated by summing the leakage values of all logic gates in the system. Maximizing simulation efficiency requires effective use of the GPU's global and shared memories during thread execution, so this thesis presents a method to use these memories effectively for storing and transferring the library data.

Two experiments were performed to examine the proposed methods. The first evaluates the accuracy of the proposed leakage-current models. The second demonstrates the efficiency and accuracy of the proposed leakage estimation method implemented on a GPU, using the ISCAS-85 benchmark circuits. The hardware environment was an Intel Core i7-870 CPU with 4 physical cores and an NVIDIA GeForce GTX 580 GPU with 512 cores. In the accuracy experiment, the PWL and CS interpolation-based leakage models exhibited average errors of less than 5% and 1%, respectively, compared to SPICE simulation, whereas the first-order exponential-polynomial model exhibited an average error of more than 70%.
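A table-lookup model of the kind the thesis proposes can be sketched as follows, here for the PWL case only (the CS model uses the same lookup structure with cubic segments). The characterization grid and the underlying exponential curve are hypothetical placeholders for SPICE-characterized data.

```python
import bisect
import math

def pwl_model(xs, ys):
    """Build a piecewise-linear interpolant over sorted sample points
    (xs, ys), e.g. a leakage-vs-Vth table characterized by SPICE."""
    def f(x):
        if x <= xs[0]:      # clamp below the characterized range
            return ys[0]
        if x >= xs[-1]:     # clamp above it
            return ys[-1]
        i = bisect.bisect_right(xs, x) - 1
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return ys[i] + t * (ys[i + 1] - ys[i])
    return f

# Characterize a hypothetical exponential leakage curve at 11 Vth
# points from 0.20 V to 0.40 V, then interpolate between them.
xs = [0.20 + 0.02 * k for k in range(11)]
ys = [1e-9 * math.exp(-(x - 0.3) / 0.04) for x in xs]
leak = pwl_model(xs, ys)
```

Between grid points the PWL segments overestimate a convex exponential slightly; a finer grid or the cubic-spline variant reduces this error, which is consistent with the CS model's lower average error reported above.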
Finally, in the performance experiment, the proposed MC-based leakage estimation method with the proposed leakage models, implemented on a GPU, reduced the simulation time to as little as 0.09% of that of MC-based leakage analysis with HSPICE simulation on a CPU, with a maximum average error of 1.96%.
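The thread-per-run mapping described above can be sketched with CPU threads as follows. This is only a structural analogue of the thesis's CUDA implementation: the leakage model and its coefficients are hypothetical, and a thread pool stands in for a GPU kernel launch.

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def one_run(seed, n_gates=100):
    # One Monte-Carlo run corresponds to one GPU thread in the
    # thesis's mapping: sample a (hypothetical) Gaussian Vth for each
    # gate and sum the per-gate leakages. Coefficients are illustrative.
    rng = random.Random(seed)
    return sum(1e-9 * math.exp(-(rng.gauss(0.3, 0.02) - 0.3) / 0.04)
               for _ in range(n_gates))

def parallel_mc(n_runs, workers=4):
    # Each run is independent, so the runs need no synchronization
    # until the final reduction -- the property that lets thousands of
    # GPU threads execute them simultaneously.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        samples = list(pool.map(one_run, range(n_runs)))
    return sum(samples) / n_runs

avg_leak = parallel_mc(200)
```

On a real GPU, the per-gate library data read by every run would be staged in shared memory to avoid repeated global-memory traffic, which is the memory-management aspect the thesis addresses.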

