
Thesis

On the Neural Network Solutions of PDEs

Title
On the Neural Network Solutions of PDEs
Authors
손휘재
Date Issued
2021
Publisher
포항공과대학교 (Pohang University of Science and Technology)
Abstract
This dissertation concerns neural network solutions of partial differential equations (PDEs). First, we study the forward-inverse problems of parametric PDEs via neural networks. We construct approximate solutions of PDEs using deep neural networks (DNNs) and propose an architecture that includes the process of finding model parameters from a supervised dataset, i.e., the inverse problem. That is, we provide a unified framework for training a DNN that simultaneously approximates an analytic solution and its model parameters. The architecture consists of a feed-forward DNN with nonlinear activation functions, automatic differentiation, reduction of order, and a gradient-based optimization method. We also prove theoretically that the proposed DNN solution converges to an analytic solution in a suitable function space for fundamental PDEs. Finally, we perform numerical experiments to validate the robustness of our simple DNN architecture on the 1-D transport equation, the 2-D heat equation, the 2-D wave equation, and the Lotka-Volterra system.

Second, we introduce Sobolev Training for the neural network solutions of PDEs: a novel loss function for training neural networks to find solutions of PDEs that makes the training substantially more efficient. Inspired by recent studies that incorporate derivative information into the training of neural networks, we develop a loss function that guides a neural network to reduce the error in the corresponding Sobolev space. Surprisingly, a simple modification of the loss function makes the training process similar to Sobolev Training, even though solving PDEs with neural networks is not a fully supervised learning task. We provide several theoretical justifications for this approach for the viscous Burgers equation and the kinetic Fokker-Planck equation. We also present simulation results showing that, compared with the traditional $L^2$ loss function, the proposed loss function guides the neural network to significantly faster convergence. Moreover, we provide empirical evidence that the proposed loss function, together with iterative sampling techniques, performs better in solving high-dimensional PDEs.
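The Sobolev-space idea in the abstract can be made concrete with a toy computation. This is a minimal sketch under illustrative assumptions (a 1-D target $u(x) = \sin x$, a hypothetical high-frequency perturbation, and made-up function names), not the thesis's implementation: a candidate whose values match the target closely can still have a large derivative error, which a plain $L^2$ loss barely registers but an $H^1$ (Sobolev) loss penalizes.

```python
import math

# Toy comparison of the L^2 loss and a discrete Sobolev (H^1) loss.
# Grid, perturbation, and function names are illustrative assumptions.

def l2_loss(pred, target):
    """Mean squared error between function values."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def sobolev_h1_loss(pred, target, dpred, dtarget):
    """Value mismatch plus first-derivative mismatch (discrete H^1 error)."""
    return l2_loss(pred, target) + l2_loss(dpred, dtarget)

xs = [0.1 * i for i in range(32)]
u  = [math.sin(x) for x in xs]   # target solution u(x) = sin(x)
du = [math.cos(x) for x in xs]   # its exact derivative

# A candidate with a small, highly oscillatory error: its values stay
# close to u, but its derivative deviates strongly from u'.
v  = [math.sin(x) + 0.01 * math.sin(50 * x) for x in xs]
dv = [math.cos(x) + 0.5 * math.cos(50 * x) for x in xs]

# The plain L^2 loss barely notices the oscillation ...
print(l2_loss(v, u))
# ... while the H^1 loss penalizes the large derivative error.
print(sobolev_h1_loss(v, u, dv, du))
```

In PDE-solving practice the derivative terms come from automatic differentiation of the network rather than from known data; the point of the sketch is only that the two norms rank the same candidate very differently, which is why training against the Sobolev loss can converge faster.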
URI
http://postech.dcollection.net/common/orgView/200000366998
https://oasis.postech.ac.kr/handle/2014.oak/111476
Article Type
Thesis
Files in This Item:
There are no files associated with this item.


