Design of Normalized Least-Mean-Square Adaptive Filter Based on Stochastic Analysis

Adaptive filters that self-adjust their transfer functions according to an optimization algorithm are powerful adaptive systems with numerous applications in signal processing, communications, radar, sonar, seismology, navigation, and biomedical engineering. This dissertation presents a comprehensive study of the theory and design of adaptive filters for system identification. In particular, it addresses the design of normalized least-mean-square (NLMS) adaptive filters for various environments, based on stochastic analysis of their learning performance, together with methods for performance enhancement. The following issues are addressed in the design of the NLMS adaptive filter.

Noise Characteristics (Chapter 2 & Chapter 3)

Chapter 2 proposes a bias-compensated NLMS (BC-NLMS) algorithm for adaptive filtering with noisy inputs. To compensate for the bias caused by noisy inputs, an unbiasedness criterion is introduced, and solving it yields the BC-NLMS algorithm. Because the BC-NLMS algorithm requires the input noise variance to eliminate the bias, an estimate of the input noise variance is used. However, conventional methods for estimating the input noise variance can cause instability in certain situations. Furthermore, because of its fixed step size, the steady-state error and convergence rate of the BC-NLMS algorithm are not superior to those of competing algorithms; its performance is therefore limited. To improve the learning performance of the BC-NLMS algorithm, it is analyzed stochastically by investigating the dynamics of both its mean deviation (MD) and its mean-square deviation (MSD).
Based on this analysis, a stability-guaranteed estimation of the input noise variance and an adjustment of the step size are carried out, achieving both stabilization and improved steady-state error and convergence rate.

Chapter 3 proposes an error-modified NLMS (EM-NLMS) algorithm. A saturation-type nonlinear function is introduced into the NLMS adaptation to improve robustness against impulsive output noise. Furthermore, based on a mean-square error (MSE) analysis of the EM-NLMS algorithm, an adaptive threshold selection is proposed to adjust the saturation-type nonlinear function. Through the saturation-type error nonlinearity and the adaptive threshold selection, the EM-NLMS algorithm achieves robustness against impulsive noise.

Data Characteristics (Chapter 4 & Chapter 5)

Chapter 4 proposes an oblique-projection-based NLMS (OP-NLMS) algorithm, an innovative update form of the NLMS algorithm for the case in which the input signals are highly correlated. The OP-NLMS algorithm is highly robust to correlated inputs. Furthermore, its learning performance is analyzed stochastically in terms of the MSD, and from this analysis an optimal step size is derived to improve the learning performance.

Chapter 5 proposes a performance-guaranteed dimensionality-reduction method for online estimation with block NLMS (B-NLMS) on large-scale data. Large-scale data can be randomly sampled to reduce the computational complexity; however, reckless sampling may degrade performance, because the size of the sampled data is directly related to its information entropy. The size of the sampled data should therefore be adjusted carefully, balancing computational efficiency against learning performance.
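The error-modification idea can be illustrated with a simple hard clip on the error. This is a sketch under stated assumptions: a fixed hypothetical threshold tau stands in for the dissertation's saturation-type nonlinearity with its adaptively selected threshold.

```python
import numpy as np

def em_nlms_step(w, x, d, tau, mu=0.5, eps=1e-8):
    """One error-modified NLMS update (sketch). The raw error is
    passed through a saturation-type nonlinearity -- here a hard
    clip at threshold tau -- so a single impulsive output-noise
    sample cannot produce an arbitrarily large weight update.
    In the dissertation tau would be selected adaptively from the
    MSE analysis; here it is a fixed illustrative parameter."""
    e = d - x @ w
    e_sat = np.clip(e, -tau, tau)          # saturation-type error nonlinearity
    w = w + mu * e_sat * x / (x @ x + eps)
    return w, e
```

Because the clipped error bounds each update, occasional large impulses perturb the weights only by a bounded amount, and the clean samples between impulses pull the estimate back toward the true system.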
To adjust the size of the sampled data carefully, the MSD and MSE performance of the B-NLMS algorithm are analyzed in detail. Based on this stochastic performance analysis, a steady-state criterion is derived to determine whether the learning performance of the B-NLMS algorithm has reached its steady state. This criterion yields a performance-guaranteed adjustment method that halves the size of the sampled data whenever the learning performance approaches its steady-state value. Because the proposed method prevents redundant computation in the steady state, it provides not only high computational efficiency but also a fast convergence rate and a small steady-state error.

System Characteristics (Chapter 6 & Chapter 7)

Chapter 6 proposes a fast and precise adaptive filtering algorithm for online estimation under a non-negativity constraint. A variable-step-size non-negative NLMS (NN-NLMS) algorithm is derived from an MSD analysis under the non-negativity constraint, using gradient descent on the given cost function and the fixed-point iteration method. The variable step size, obtained by minimizing the MSD, improves the learning performance in terms of both convergence rate and steady-state estimation error.

Chapter 7 proposes a diffusion NLMS (D-NLMS) algorithm for distributed networks. In the adaptation step, a recursion for the MSD is derived in place of the exact MSD value, and a variable step size is obtained by minimizing it to achieve a fast convergence rate and a small steady-state error. In the diffusion step, the estimate at each node is constructed as a weighted sum of the intermediate estimates of its neighboring nodes, with weights designed by a combination method based on the MSD at each node.
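The sample-size halving can be sketched as follows. This is an illustrative stand-in, not the dissertation's method: a plateau check on a smoothed squared-error trajectory replaces the MSD/MSE-based steady-state criterion, and all parameter names (block, window, tol) are hypothetical.

```python
import numpy as np

def b_nlms_identify(X, D, mu=0.5, block=256, min_block=32,
                    window=20, tol=1e-3, eps=1e-8, seed=0):
    """Block-NLMS with a performance-guided sample-size schedule
    (sketch). Each iteration draws `block` random rows of the
    large-scale data, performs one normalized block update, and
    tracks the block mean-squared error; when that error stops
    decreasing (a crude stand-in for the steady-state criterion),
    the block size is halved, down to `min_block`, to avoid
    redundant computation in the steady state."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    w = np.zeros(m)
    hist = []
    for _ in range(400):
        idx = rng.choice(n, size=block, replace=False)
        Xb, db = X[idx], D[idx]
        e = db - Xb @ w
        # normalized block update: step scaled by the block energy
        w = w + mu * Xb.T @ e / ((Xb ** 2).sum() + eps)
        hist.append(np.mean(e ** 2))
        # plateau check on the error trajectory
        if len(hist) >= 2 * window and block > min_block:
            recent = np.mean(hist[-window:])
            earlier = np.mean(hist[-2 * window:-window])
            if earlier - recent < tol * earlier:
                block //= 2          # halve the sampled-data size
                hist.clear()
    return w
```

The halving schedule leaves the convergence rate per iteration roughly unchanged for white inputs (both the update energy and the normalization shrink together) while cutting the per-iteration cost.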
This MSD-based combination method provides effective weights by using the MSD at each node as a reliability indicator. Furthermore, to improve computational efficiency, two complementary methods are proposed. In the adaptation step, an intermittent adaptation method that dynamically adjusts the update interval is proposed to reduce redundant updates. In the diffusion step, a selection method is proposed that, for the estimate at each node, selects the intermediate estimate of the most reliable neighboring node.
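The adapt-then-combine structure of diffusion NLMS can be sketched as below. This is a sketch under stated assumptions: uniform neighborhood weights stand in for the dissertation's MSD-based combination weights, and the fixed step size mu stands in for its variable step size; the array shapes are illustrative conventions.

```python
import numpy as np

def diffusion_nlms(neighbors, X_stream, D_stream, mu=0.5, eps=1e-8):
    """Adapt-then-combine diffusion NLMS over a network (sketch).
    neighbors[i] lists the node indices (including i) whose
    intermediate estimates node i combines. X_stream has shape
    (T, N, m) and D_stream shape (T, N): per time step, one input
    regressor and desired sample per node. Each node first runs a
    local NLMS adaptation, then forms its estimate as a weighted
    sum of its neighbors' intermediate estimates; here the weights
    are uniform, whereas the dissertation designs them from each
    node's MSD, weighting reliable (low-MSD) nodes more heavily."""
    N = len(neighbors)
    m = X_stream.shape[2]
    W = np.zeros((N, m))
    for k in range(X_stream.shape[0]):
        # adaptation step: local NLMS update at every node
        psi = np.empty_like(W)
        for i in range(N):
            x, d = X_stream[k, i], D_stream[k, i]
            e = d - x @ W[i]
            psi[i] = W[i] + mu * e * x / (x @ x + eps)
        # diffusion step: combine neighbors' intermediate estimates
        for i in range(N):
            W[i] = psi[neighbors[i]].mean(axis=0)
    return W
```

Replacing the uniform mean with MSD-derived weights (or with selection of the single most reliable neighbor, as in the computational-efficiency variant) changes only the diffusion step.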