Higher-Order Newton Methods with Polynomial Work per Iteration

We present generalizations of Newton’s method that incorporate derivatives of an arbitrary order \(d\) but maintain a polynomial dependence on dimension in their cost per iteration. At each step, our \(d^{\text{th}}\)-order method uses semidefinite programming to construct and minimize a sum-of-squares-convex approximation to the \(d^{\text{th}}\)-order Taylor expansion of the function we wish … Read more
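
For orientation, the \(d^{\text{th}}\)-order Taylor expansion referred to above can be written, in our notation with expansion point \(x_k\) and step \(s\), as
\[
T_d(x_k + s) \;=\; f(x_k) + \sum_{j=1}^{d} \frac{1}{j!}\,\nabla^j f(x_k)[s,\dots,s],
\]
and each iteration then sets \(x_{k+1} = x_k + s_k\), where \(s_k\) minimizes a sum-of-squares-convex approximation of \(T_d\) obtained via semidefinite programming; this surrogate notation is ours, not the paper's.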

A Reduced Jacobian Scheme with Full Convergence for Multicriteria Optimization

In this paper, we propose a variant of the reduced Jacobian method (RJM) introduced by El Maghri and Elboulqe in [JOTA, 179 (2018) 917–943] for multicriteria optimization under linear constraints. The motivation is that, in contrast to RJM, which has only global convergence to Pareto KKT-stationary points in the classical sense of accumulation points, this new variant … Read more
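
As a reminder of the stationarity notion mentioned above, for a linearly constrained problem \(\min_x (f_1(x),\dots,f_m(x))\) subject to \(Ax \le b\), Pareto KKT-stationarity of a point \(x^*\) can be stated as follows; this standard formulation is ours and is not quoted from the paper:
\[
\exists\, \lambda \in \mathbb{R}^m_+ \text{ with } \sum_{i=1}^m \lambda_i = 1,\ \ \exists\, \mu \ge 0:\qquad
\sum_{i=1}^m \lambda_i \nabla f_i(x^*) + A^{\top}\mu = 0,\qquad \mu^{\top}(Ax^* - b) = 0.
\]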

Convergence Results for Primal-Dual Algorithms in the Presence of Adjoint Mismatch

Most optimization problems arising in imaging science involve high-dimensional linear operators and their adjoints. In the implementations of these operators, approximations may be introduced for various practical considerations (e.g., memory limitation, computational cost, convergence speed), leading to an adjoint mismatch. This occurs for the X-ray tomographic inverse problems found in Computed Tomography (CT), where the … Read more
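
A minimal numerical illustration of what an adjoint mismatch is, using generic NumPy matrices in place of the tomographic projector and backprojector (the names A and B below are placeholders, not the paper's operators):

import numpy as np

rng = np.random.default_rng(0)
m, n = 80, 64
A = rng.standard_normal((m, n))                # forward operator (stand-in for a projector)
B = A.T + 0.05 * rng.standard_normal((n, m))   # approximate adjoint (stand-in for a mismatched backprojector)

x = rng.standard_normal(n)
y = rng.standard_normal(m)

# With the exact adjoint, <Ax, y> = <x, A^T y>; the second gap quantifies the mismatch.
exact_gap = (A @ x) @ y - x @ (A.T @ y)        # zero up to round-off
mismatch_gap = (A @ x) @ y - x @ (B @ y)       # nonzero because B != A^T
print(f"exact adjoint gap: {exact_gap:.2e}, mismatched gap: {mismatch_gap:.2e}")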

Unmatched Preconditioning of the Proximal Gradient Algorithm

This work addresses the resolution of penalized least-squares problems using the proximal gradient algorithm (PGA). It is known that PGA can be accelerated by preconditioning strategies. However, typical effective choices of preconditioners may correspond to intricate matrices that are not easily inverted, and lead to an increased complexity in the computation of the proximity step. … Read more
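
A minimal sketch of a preconditioned proximal gradient iteration for an \(\ell_1\)-penalized least-squares problem, assuming a diagonal preconditioner so that the proximity step stays cheap; the problem instance and parameter names are ours, not the paper's:

import numpy as np

def soft_threshold(v, t):
    # prox of t * ||.||_1, computed componentwise
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def preconditioned_pga(H, y, lam, p, n_iter=200):
    # Minimize 0.5*||Hx - y||^2 + lam*||x||_1 with a diagonal preconditioner
    # P = diag(p); p should majorize the curvature of H^T H, e.g. setting
    # p[i] = ||H||_2**2 for all i recovers the standard 1/L stepsize.
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y)
        # x_{k+1} = argmin_u lam*||u||_1 + 0.5*||u - (x_k - P^{-1} grad)||_P^2
        x = soft_threshold(x - grad / p, lam / p)
    return x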

Bolstering Stochastic Gradient Descent with Model Building

The stochastic gradient descent method and its variants constitute the core optimization algorithms that achieve good convergence rates for solving machine learning problems. These rates are obtained especially when the algorithms are fine-tuned for the application at hand. Although this tuning process can require large computational costs, recent work has shown that these costs can be … Read more
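
For context, a plain mini-batch SGD loop with a hand-tuned step size, the kind of hyperparameter whose tuning cost the abstract refers to; the paper's model-building rule is not reproduced here, and the function and parameter names are ours:

import numpy as np

def sgd(grad_fn, x0, data, lr=0.1, epochs=5, batch_size=32, seed=0):
    # grad_fn(x, batch) returns a stochastic gradient estimate on a mini-batch;
    # lr is the step size that usually has to be tuned by hand.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    n = len(data)
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = [data[i] for i in order[start:start + batch_size]]
            x = x - lr * grad_fn(x, batch)
    return x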

SABRINA: A Stochastic Subspace Majorization-Minimization Algorithm

A wide class of problems involves the minimization of a coercive and differentiable function $F$ on $\mathbb{R}^N$ whose gradient cannot be evaluated in an exact manner. In such a context, many existing convergence results from the standard gradient-based optimization literature cannot be directly applied and robustness to errors in the gradient is not necessarily guaranteed. This work … Read more
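
To fix ideas, a generic majorization-minimization step restricted to a subspace can be written as follows (our notation; in the stochastic setting of the abstract, $\nabla F(x_k)$ is replaced by an inexact estimate):
\[
Q(x, x_k) = F(x_k) + \nabla F(x_k)^{\top}(x - x_k) + \tfrac{1}{2}(x - x_k)^{\top} A(x_k)\,(x - x_k) \;\ge\; F(x),
\qquad
x_{k+1} \in \operatorname*{argmin}_{x \,\in\, x_k + \operatorname{range} D_k} Q(x, x_k),
\]
where $A(x_k)$ is a curvature matrix defining the quadratic majorant and the columns of $D_k$ span a low-dimensional search subspace.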

Time-Domain Decomposition for Mixed-Integer Optimal Control Problems

We consider mixed-integer optimal control problems, whose optimality conditions involve global combinatorial optimization aspects for the corresponding Hamiltonian pointwise in time. We propose a time-domain decomposition, which makes this problem class accessible for mixed-integer programming using parallel-in-time direct discretizations. The approach is based on a decomposition of the optimality system and the interpretation of the … Read more
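
The pointwise-in-time condition alluded to above is, in standard Pontryagin form (our notation), the requirement that the control minimize the Hamiltonian over the admissible set \(U\), which contains integer-valued controls in the mixed-integer case:
\[
u^*(t) \;\in\; \operatorname*{argmin}_{v \,\in\, U}\; H\big(x^*(t), v, \lambda^*(t)\big) \qquad \text{for a.e. } t \in [0, T],
\]
which is the global combinatorial subproblem that must be solved at each point in time.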

Time-Domain Decomposition for Optimal Control Problems Governed by Semilinear Hyperbolic Systems with Mixed Two-Point Boundary Conditions

In this article, we continue our work (Krug et al., 2021) on time-domain decomposition of optimal control problems for systems of semilinear hyperbolic equations in that we now consider mixed two-point boundary value problems and provide an in-depth well-posedness analysis. The more general boundary conditions significantly enlarge the scope of applications, e.g., to hyperbolic problems … Read more

On the Convergence Results of a Class of Nonmonotone Accelerated Proximal Gradient Methods for Nonsmooth and Nonconvex Minimization Problems

In this paper, we consider a class of nonsmooth problems that are the sum of a Lipschitz-differentiable function and a nonsmooth, proper, lower semicontinuous function. We discuss the convergence rate of the function values for a nonmonotone accelerated proximal gradient method, which was proposed in “Huan Li and Zhouchen Lin: Accelerated proximal gradient … Read more
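
For reference, the generic accelerated proximal gradient update for \(F = f + g\), with \(f\) Lipschitz-differentiable and \(g\) proper and lower semicontinuous, reads as follows (our notation; the nonmonotone safeguard analyzed in the paper is not shown here):
\[
y_k = x_k + \frac{t_{k-1} - 1}{t_k}\,(x_k - x_{k-1}),\qquad
x_{k+1} = \operatorname{prox}_{\alpha g}\!\big(y_k - \alpha \nabla f(y_k)\big),\qquad
t_{k+1} = \frac{1 + \sqrt{1 + 4 t_k^{2}}}{2}.
\]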

A Nonmonotone Accelerated Proximal Gradient Method with Variable Stepsize Strategy for Nonsmooth and Nonconvex Minimization Problems

We propose a new nonmonotone accelerated proximal gradient method with a variable stepsize strategy for minimizing the sum of a nonsmooth function and a smooth one in the nonconvex setting. In this algorithm, the objective function value is allowed to increase at some iterations, but it decreases from an overall point of view. The variable stepsize strategy does not … Read more
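
A minimal sketch of the two ingredients named above, a nonmonotone acceptance test and a variable (backtracking) stepsize, applied to a proximal gradient step with an \(\ell_1\) term; the specific rules and names are illustrative assumptions, not the algorithm of the paper:

import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def nonmonotone_prox_grad(f, grad_f, lam, x0, step0=1.0, memory=5,
                          shrink=0.5, max_iter=100):
    # Objective F(x) = f(x) + lam*||x||_1. A trial point is accepted when its
    # objective does not exceed the largest of the last `memory` values, so F
    # may increase occasionally while decreasing from an overall point of view.
    x = np.asarray(x0, dtype=float).copy()
    F = lambda z: f(z) + lam * np.abs(z).sum()
    history = [F(x)]
    for _ in range(max_iter):
        g = grad_f(x)
        step = step0
        while True:
            trial = soft_threshold(x - step * g, step * lam)
            if F(trial) <= max(history[-memory:]) or step < 1e-12:
                break
            step *= shrink  # variable stepsize: backtrack until the test passes
        x = trial
        history.append(F(x))
    return x, history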