Learning latent dynamics for partially observed chaotic systems

Type Article
Date 2020-10
Language English
Author(s) Ouala S.1, Nguyen D.1, Drumetz Lucas1, Chapron Bertrand2, Pascual A.3, Collard Fabrice4, Gaultier Lucile4, Fablet Ronan1
Affiliation(s) 1 : IMT Atlantique, UMR CNRS Lab-STICC, 29280 Plouzané, France
2 : Ifremer, LOPS, 29280 Plouzané, France
3 : IMEDEA, UIB-CSIC, 07190 Esporles, Spain
4 : OceanDataLab, 29280 Locmaria-Plouzané, France
Source Chaos (1054-1500) (AIP Publishing), 2020-10 , Vol. 30 , N. 10 , P. 103121 (17p.)
DOI 10.1063/5.0019309
WOS© Times Cited 18
Abstract

This paper addresses the data-driven identification of latent representations of partially observed dynamical systems, i.e., dynamical systems for which some components are never observed, with an emphasis on forecasting applications and long-term asymptotic patterns. Whereas state-of-the-art data-driven approaches rely in general on delay embeddings and linear decompositions of the underlying operators, we introduce a framework based on the data-driven identification of an augmented state-space model using a neural-network-based representation. For a given training dataset, it amounts to jointly reconstructing the latent states and learning an ordinary differential equation representation in this space. Through numerical experiments, we demonstrate the relevance of the proposed framework with respect to state-of-the-art approaches in terms of short-term forecasting errors and long-term behavior. We further discuss how the proposed framework relates to the Koopman operator theory and Takens’ embedding theorem.

Delay embedding coordinates provide a simple tool for reconstructing a system's limit cycle from partial observations of its dynamics. However, finding a parametric approximation of the dynamics in the delay-embedded coordinates is far from straightforward, since no general relationship links the spanned limit cycle to a particular form of parametric model. In this work, we propose an alternative based on neural networks and automatic differentiation: we learn the dynamics and the latent states jointly as the solution of an optimization problem.
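The joint-optimization idea can be sketched in a few lines. The following is a hypothetical, minimal illustration (not the authors' implementation): augmented latent states Z and the parameters of a simple linear ODE dz/dt = Az are optimized together by gradient descent, so that one-step Euler forecasts are consistent with the latent trajectory while the first latent coordinate matches the partial observations. The oscillator data, the linear model class, and all variable names are assumptions chosen for brevity; the paper uses a neural-network ODE representation.

```python
import numpy as np

# Partial observations: only the first coordinate of a 2D harmonic
# oscillator (true state is (cos t, -sin t), but -sin t is never seen).
rng = np.random.default_rng(0)
dt = 0.1
T = 100
t = np.arange(T) * dt
x_obs = np.cos(t)

d = 2                              # augmented latent dimension
Z = np.zeros((T, d))
Z[:, 0] = x_obs                    # observed component initialized from data
Z[:, 1] = rng.normal(0.0, 0.1, T)  # unobserved component starts random
A = rng.normal(0.0, 0.1, (d, d))   # linear ODE parameters: dz/dt = A z

def loss(Z, A):
    """Dynamical consistency (Euler one-step) + observation fit."""
    pred = Z[:-1] + dt * Z[:-1] @ A.T
    return 0.5 * np.sum((pred - Z[1:]) ** 2) + 0.5 * np.sum((Z[:, 0] - x_obs) ** 2)

loss0 = loss(Z, A)
lr = 0.05
for _ in range(2000):
    pred = Z[:-1] + dt * Z[:-1] @ A.T
    res_dyn = pred - Z[1:]         # forecast vs. next latent state
    res_obs = Z[:, 0] - x_obs      # latent vs. observed component
    # Hand-derived gradients of the quadratic loss above.
    gA = dt * res_dyn.T @ Z[:-1]
    gZ = np.zeros_like(Z)
    gZ[1:] -= res_dyn
    gZ[:-1] += res_dyn + dt * res_dyn @ A
    gZ[:, 0] += res_obs
    A -= lr * gA / T               # update dynamics and latent states jointly
    Z -= lr * gZ
loss1 = loss(Z, A)
```

After training, `loss1` is below the initial `loss0`: the unobserved latent coordinate and the ODE parameters have been adjusted together to make the trajectory dynamically consistent. In the paper this idea is carried out with a neural-network ODE and automatic differentiation instead of hand-coded gradients and a linear model.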

Full Text
Publisher's official version — 18 pages, 4 MB, Open access