An Inexact General Descent Method with Applications in Differential Equation-Constrained Optimization

In many applications, gradient evaluations are inherently approximate, motivating the development of optimization methods that remain reliable under inexact first-order information. A common strategy in this context is adaptive evaluation, whereby coarse gradients are used in early iterations and refined near a minimizer. This is particularly relevant in differential equation–constrained optimization (DECO), where discrete adjoint …
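The adaptive-evaluation idea can be sketched in a few lines. The following is a minimal illustration, not the paper's method: it assumes a hypothetical gradient oracle whose error is bounded by a user-supplied tolerance (standing in for a coarse adjoint solve), and it tightens that tolerance in proportion to the current gradient norm, so early iterations use cheap, inaccurate gradients and later iterations use refined ones.

```python
import numpy as np

def noisy_grad(x, tol, rng):
    # Hypothetical inexact oracle: exact gradient of f(x) = 0.5*||x||^2
    # perturbed by noise of norm at most `tol`, standing in for a
    # coarse adjoint-based gradient evaluation.
    noise = rng.standard_normal(x.shape)
    noise *= tol / (np.linalg.norm(noise) + 1e-16)
    return x + noise

def inexact_descent(x0, step=0.5, tol0=1.0, theta=0.5, iters=50, seed=0):
    # Adaptive-accuracy descent: the gradient tolerance is tied to the
    # current approximate gradient norm, so accuracy is coarse far from
    # the minimizer and is refined as the iterates approach it.
    rng = np.random.default_rng(seed)
    x, tol = np.asarray(x0, dtype=float), tol0
    for _ in range(iters):
        g = noisy_grad(x, tol, rng)
        x = x - step * g
        tol = min(tol, theta * np.linalg.norm(g))  # tighten accuracy
    return x

x_star = inexact_descent([4.0, -3.0])
```

Because the tolerance contracts together with the gradient norm, the iterates still converge to the minimizer (here the origin) despite every gradient being inexact; the choice of the contraction factor `theta` is an assumption of this sketch.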