Stochastic Aspects of Dynamical Low-Rank Approximation in the Context of Machine Learning

The central challenges of today’s neural network architectures are the prohibitive memory footprint and the training costs associated with determining optimal weights and biases. A large portion of research in machine learning is therefore dedicated to constructing memory-efficient training methods. One promising approach is dynamical low-rank training (DLRT), which represents and trains parameters as a …
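As a rough illustration of the memory argument behind low-rank training, the following sketch (all names and sizes are illustrative, not taken from the paper) stores a weight matrix as a rank-r factorization W ≈ U S Vᵀ and applies it without ever forming the dense matrix:

```python
import numpy as np

# Illustrative sketch: a dense m x n weight matrix represented via a
# rank-r factorization W ~ U S V^T, as in dynamical low-rank training.
# U, S, V, r, m, n are assumed names/sizes for demonstration only.
m, n, r = 256, 128, 8
rng = np.random.default_rng(0)

U, _ = np.linalg.qr(rng.standard_normal((m, r)))  # orthonormal basis, m x r
V, _ = np.linalg.qr(rng.standard_normal((n, r)))  # orthonormal basis, n x r
S = rng.standard_normal((r, r))                   # small r x r core

def apply_weight(x):
    # Forward pass touches only the factors: O((m + n + r) * r) storage
    # instead of O(m * n) for the dense matrix.
    return U @ (S @ (V.T @ x))

x = rng.standard_normal(n)
dense_result = (U @ S @ V.T) @ x
assert np.allclose(apply_weight(x), dense_result)
print(U.size + S.size + V.size, "floats vs", m * n, "dense")
```

For these sizes the factors need 3,136 floats versus 32,768 for the dense matrix; DLRT additionally evolves the factors during training, which this static sketch does not show.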

A framework for simultaneous aerodynamic design optimization in the presence of chaos

Integrating existing solvers for unsteady partial differential equations (PDEs) into a simultaneous optimization method is challenging due to the forward-in-time information propagation of classical time-stepping methods. This paper applies the simultaneous single-step one-shot optimization method to a reformulated unsteady PDE constraint that allows for both forward- and backward-in-time information propagation. Especially in the presence …

On an Extension of One-Shot Methods to Incorporate Additional Constraints

For design optimization tasks, a so-called one-shot approach is often used. It augments the solution of the state equation with a suitable adjoint solver, yielding approximate reduced derivatives that can be used in an optimization iteration to change the design. The coordination of these three iterative processes is well established when only the state …
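The interleaving of the three iterative processes can be sketched on a toy problem (this is an illustrative model, not the paper's solver): a scalar state u satisfying the fixed-point state equation u = g(u, p) = 0.5·u + p, the objective J(u, p) = ½(u − 1)² + ½·alpha·p², and one state, adjoint, and design update per outer iteration:

```python
# Toy one-shot iteration: in each sweep we take one step of the state
# solver, one step of the adjoint solver, and one design update, rather
# than converging each subproblem fully. alpha, tau and g are assumed
# for illustration; the exact reduced minimizer is p* = 2 / (4 + alpha).
alpha, tau = 0.1, 0.1
u, lam, p = 0.0, 0.0, 0.0

for _ in range(500):
    u = 0.5 * u + p                    # one state-equation update
    lam = (u - 1.0) + 0.5 * lam        # one adjoint update (dg/du = 0.5)
    p = p - tau * (alpha * p + lam)    # one design update (dg/dp = 1)

print(p, 2.0 / (4.0 + alpha))  # both approximately 0.4878
```

At the coupled fixed point, lam = 2(u − 1) is the exact adjoint and alpha·p + lam = 0 reproduces the reduced-gradient optimality condition; incorporating additional constraints, as the paper proposes, would add further multiplier updates to this loop.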