Open Access System for Information Sharing


 

Thesis

Distributed Learning over Adaptive Networks with Techniques for Reducing Communication Overhead

Title
Distributed Learning over Adaptive Networks with Techniques for Reducing Communication Overhead
Authors
이재우
Date Issued
2016
Publisher
Pohang University of Science and Technology (POSTECH)
Abstract
We study diffusion strategies over adaptive networks for distributed estimation, in which every node aims to estimate a common parameter vector. Diffusion strategies enable all nodes in a network to converge close to the global optimum w^o through only local interactions; by cooperating continuously with its neighbor nodes, each node achieves the global performance level despite the localized nature of its processing. However, the convergence performance of diffusion algorithms is still inferior to that of global (centralized) algorithms. To improve the convergence of diffusion algorithms, many existing works have focused on optimizing or adaptively adjusting the combination weights so that the network exploits spatial diversity more efficiently. An objective of this thesis, in contrast, is to use temporal information in addition to spatial information to improve convergence further. For this purpose, we present novel diffusion algorithms that endow all nodes with both spatial cooperation and temporal processing abilities: each node shares information locally with its neighbors and also uses past data to improve estimation accuracy. The resulting spatio-temporal diffusion LMS algorithms consist of three stages: adaptation, spatial combination, and temporal processing.

Another focus of this thesis is communication overhead. In diffusion strategies, every node must share processed data with its predefined neighbors. This internode communication contributes significantly to convergence performance, but it entails substantial power consumption for data transmission. Hence, in developing low-power diffusion strategies, it is important to reduce the amount of communication without significantly degrading convergence performance.
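The adaptation and spatial combination stages mentioned above follow the standard adapt-then-combine (ATC) diffusion LMS recursion. The following is a minimal NumPy sketch of that baseline only; the ring topology, step size, and noise level are illustrative choices, and the thesis's third stage (temporal processing) is not reproduced here.

```python
import numpy as np

# Sketch of adapt-then-combine (ATC) diffusion LMS on a small ring network.
# Topology, step size mu, and noise level are illustrative assumptions.
rng = np.random.default_rng(0)
N, M, T = 5, 4, 2000            # nodes, parameter length, iterations
w_o = rng.standard_normal(M)    # common parameter vector to estimate
mu = 0.01                       # LMS step size

# Ring topology: each node combines with itself and its two ring neighbors.
A = np.zeros((N, N))            # A[l, k]: weight node k gives node l's estimate
for k in range(N):
    for l in (k - 1, k, (k + 1) % N):
        A[l % N, k] = 1.0
A /= A.sum(axis=0)              # columns sum to one (combination weights)

W = np.zeros((N, M))            # current estimates, one row per node
for _ in range(T):
    psi = np.empty_like(W)
    for k in range(N):          # adaptation: local LMS update at each node
        u = rng.standard_normal(M)              # regressor
        d = u @ w_o + 0.01 * rng.standard_normal()  # noisy measurement
        psi[k] = W[k] + mu * (d - u @ W[k]) * u
    W = A.T @ psi               # spatial combination with neighbor estimates

# After convergence, every node's estimate is close to w_o.
print(np.max(np.linalg.norm(W - w_o, axis=1)))
```

Each iteration exchanges one intermediate estimate per link, which is exactly the communication cost the thesis's two proposed algorithms aim to reduce.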
To this end, we provide two novel diffusion algorithms. First, we present a dynamic diffusion LMS algorithm that shares only reliable information with neighbors. Each node evaluates the contribution of its new measurements to the current mean-square deviation (MSD); only when the MSD would decrease is the node allowed to update and transmit its estimate to its neighbors. Accordingly, the proposed algorithm reduces the amount of communication while preserving performance as much as possible. Second, we present a data-reserved periodic diffusion LMS algorithm in which every node updates and transmits its estimate periodically while reserving the measurement data collected at non-update times. By using the reserved data in the adaptation step at each update time, the proposed algorithm minimizes the decline in convergence speed that is a major drawback of the original periodic schemes. Given a period p, the total amount of communication is reduced to 1/p of that of the conventional ATC diffusion LMS. This naturally leads to a slight increase in the steady-state error as the period p increases, owing to the loss of combination steps, which we ascertain theoretically through mathematical analysis. Nevertheless, the proposed algorithm outperforms the other related algorithms.
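Under one reading of the abstract, the data-reserved periodic scheme can be sketched as follows: each node stores its (d, u) pairs at non-update times and applies them all in the adaptation step at each update time, so combination (and hence transmission) occurs only once per period p. This is a hedged illustration of the idea, not the thesis's exact algorithm; the topology, step size, and noise level are assumed for the example.

```python
import numpy as np

# Sketch of a data-reserved periodic diffusion LMS scheme (our reading of
# the abstract): adapt on all reserved data every p-th iteration, then
# combine, so transmissions are reduced to 1/p of the ATC baseline.
rng = np.random.default_rng(1)
N, M, T, p = 5, 4, 2000, 4      # nodes, parameter length, iterations, period
w_o = rng.standard_normal(M)
mu = 0.01

A = np.zeros((N, N))            # ring-topology combination weights, as before
for k in range(N):
    for l in (k - 1, k, (k + 1) % N):
        A[l % N, k] = 1.0
A /= A.sum(axis=0)

W = np.zeros((N, M))
reserve = [[] for _ in range(N)]    # reserved (d, u) pairs per node
for t in range(1, T + 1):
    for k in range(N):              # measurements arrive every iteration
        u = rng.standard_normal(M)
        d = u @ w_o + 0.01 * rng.standard_normal()
        reserve[k].append((d, u))
    if t % p == 0:                  # update time: use all reserved data
        psi = W.copy()
        for k in range(N):
            for d, u in reserve[k]:
                psi[k] += mu * (d - u @ psi[k]) * u
            reserve[k].clear()
        W = A.T @ psi               # only 1/p as many combination rounds

print(np.max(np.linalg.norm(W - w_o, axis=1)))
```

Because no measurement is discarded, the adaptation steps remain as informative as in the non-periodic baseline; what is lost is only the per-iteration combination, which matches the abstract's account of a slight steady-state penalty that grows with p.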
URI
http://postech.dcollection.net/jsp/common/DcLoOrgPer.jsp?sItemId=000002227901
https://oasis.postech.ac.kr/handle/2014.oak/93238
Article Type
Thesis
Files in This Item:
There are no files associated with this item.



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

