
Quasi-Optimal Sampling to Learn Basis Updates for Online Adaptive Model Reduction with Adaptive Empirical Interpolation

Cortinovis, A.; Massei, S.
2020

Abstract

Traditional model reduction derives reduced models from large-scale systems in a one-time, computationally expensive offline (training) phase and then evaluates the reduced models in an online phase to rapidly predict system outputs. However, this offline/online splitting means that reduced models can be expected to faithfully predict outputs only for system behavior that was incorporated into them during the offline phase. This work considers model reduction with the online adaptive empirical interpolation method (AADEIM), which adapts reduced models in the online phase to system behavior that was not anticipated in the offline phase by deriving updates from a few samples of the states of the large-scale system. The contribution of this work is an analysis of the AADEIM sampling strategy for deciding which parts of the large-scale states to sample to learn reduced-model updates. The analysis shows that the AADEIM sampling strategy is optimal up to a factor of 2. Numerical results illustrate the theory by comparing the quasi-optimal AADEIM sampling strategy to other sampling strategies on various examples.
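To make the setting concrete, the sketch below shows the classic (non-adaptive) DEIM construction that AADEIM builds on: a greedy selection of interpolation indices for a given basis, and the interpolatory approximation of a vector from only its entries at those indices. This is an illustrative sketch of standard DEIM, not the paper's AADEIM algorithm or its adaptive sampling strategy; all function names are our own.

```python
import numpy as np

def deim_points(U):
    """Greedily select m interpolation indices for an n x m basis U (standard DEIM)."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        Uj = U[:, :j]
        # coefficients matching the first j basis vectors at the selected rows
        c = np.linalg.solve(Uj[idx, :], U[idx, j])
        r = U[:, j] - Uj @ c            # residual of the next basis vector
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

def deim_approx(U, idx, f):
    """Approximate f in span(U) using only its entries f[idx] at the interpolation points."""
    c = np.linalg.solve(U[idx, :], f[idx])
    return U @ c

# small demo: if f lies in span(U), the interpolatory approximation is exact
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((50, 5)))
f = U @ rng.standard_normal(5)
p = deim_points(U)
err = np.linalg.norm(f - deim_approx(U, p, f))
```

In the online-adaptive setting analyzed in the paper, the basis `U` is additionally updated during the online phase from a few sampled entries of the large-scale state; the paper's contribution is showing which entries to sample so that the update is within a factor of 2 of optimal.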
ISBN: 978-1-5386-8266-1
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1136978
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: 1
  • Web of Science: 0