A fundamental challenge in learning an unknown dynamical system is to reduce model uncertainty by making measurements while maintaining safety. In this work, we formulate a mathematical definition of what it means to safely learn a dynamical system by sequentially deciding where to initialize the next trajectory. In our framework, the state of the system is required to stay within a safety region for a horizon of \(T\) time steps under the action of all dynamical systems that (i) belong to a given initial uncertainty set, and (ii) are consistent with the information gathered so far.
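The requirement above can be stated compactly as follows (a sketch in our own notation, where \(S\) denotes the safety region and \(\mathcal{U}_t\) the set of dynamics \(f\) that belong to the initial uncertainty set and are consistent with the first \(t\) observed trajectories):
\[
x_0 \in S_{\mathrm{safe}}(t) \iff f^{(k)}(x_0) \in S \quad \text{for all } k \in \{0, 1, \dots, T\} \text{ and all } f \in \mathcal{U}_t,
\]
where \(f^{(k)}\) denotes the \(k\)-fold composition of \(f\) with itself. The learner must choose each new initial condition \(x_0\) from \(S_{\mathrm{safe}}(t)\).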
For our first set of results, we consider the setting of safely learning a linear dynamical system involving \(n\) states. For the case \(T=1\), we present a linear programming-based algorithm that either safely recovers the true dynamics from at most \(n\) trajectories, or certifies that safe learning is impossible. For \(T=2\), we give a semidefinite representation of the set of safe initial conditions and show that \(\lceil n/2 \rceil\) trajectories generically suffice for safe learning. Finally, for \(T = \infty\), we provide semidefinite representable inner approximations of the set of safe initial conditions and show that one trajectory generically suffices for safe learning.
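To make the \(T=1\) linear setting concrete, the following is a minimal illustrative sketch (not the algorithm from the paper): with a hypothetical box uncertainty set on the entries of \(A\) and a box safety region, one-step safety of a candidate initial condition reduces to maximizing linear functionals of the unknown matrix, i.e., to small linear programs.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy setup: x_{t+1} = A x_t with elementwise bounds
# lo <= A <= hi (the uncertainty set), and safety region
# S = {x : |x_i| <= r for all i}.
n, r = 2, 1.0
lo = np.array([[0.4, -0.2], [-0.2, 0.4]])  # lower bounds on entries of A
hi = np.array([[0.6,  0.2], [ 0.2, 0.6]])  # upper bounds on entries of A

def is_safe_T1(x0):
    """Certify that A @ x0 lies in S for ALL A in the uncertainty set.

    For each coordinate i, the worst-case value of e_i^T A x0 is linear
    in row i of A, so it is found by an LP over the box of feasible rows.
    """
    for i in range(n):
        for sign in (+1.0, -1.0):
            # maximize sign * (row_i . x0)  <=>  minimize -sign * (row_i . x0)
            res = linprog(c=-sign * x0, bounds=list(zip(lo[i], hi[i])))
            if sign * (res.x @ x0) > r + 1e-9:
                return False  # some consistent A pushes x0 out of S
    return True
```

For example, `is_safe_T1(np.array([1.0, 0.0]))` succeeds under these bounds, while scaling the initial condition up by a factor of two makes the worst-case first coordinate exceed \(r\). The paper's actual algorithm additionally chooses informative trajectories and shrinks the uncertainty set as measurements arrive; this sketch only shows the safety-certification step for a fixed uncertainty set.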
Our second set of results concerns the problem of safely learning a general class of nonlinear dynamical systems. For the case \(T=1\), we give a second-order cone programming-based representation of the set of safe initial conditions. For \(T=\infty\), we provide semidefinite representable inner approximations to the set of safe initial conditions. We show how one can safely collect trajectories and fit a polynomial model of the nonlinear dynamics that is consistent with the initial uncertainty set and best agrees with the observations.
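As a toy illustration of the final step (fitting a polynomial model that best agrees with the observations), the sketch below does a least-squares polynomial fit to one-step transitions of a hypothetical scalar system; the paper's method additionally enforces consistency with the initial uncertainty set, which is omitted here.

```python
import numpy as np

# Hypothetical scalar nonlinear system x_{t+1} = 0.5 x_t - 0.3 x_t^3,
# observed through noiseless one-step transitions (x_t, x_{t+1}).
rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, size=30)        # sampled current states
true_step = lambda x: 0.5 * x - 0.3 * x**3  # unknown true dynamics
ys = true_step(xs)                          # observed next states

# Fit a degree-3 polynomial model by least squares.
deg = 3
V = np.vander(xs, deg + 1)                       # columns: x^3, x^2, x, 1
coeffs, *_ = np.linalg.lstsq(V, ys, rcond=None)  # best-fit coefficients

model = lambda x: np.polyval(coeffs, x)
max_err = np.max(np.abs(model(xs) - ys))  # residual on the observations
```

Since the data here are noiseless and exactly polynomial, the fit recovers the true coefficients up to numerical precision; with real measurements the residual quantifies how well the model "best agrees with the observations."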