Near-optimal closed-loop method via Lyapunov damping for convex optimization

We introduce an autonomous system with closed-loop damping for first-order convex optimization. To date, optimal rates of convergence have been achieved only by non-autonomous methods via open-loop damping (e.g., Nesterov's algorithm); we show that our system is the first with closed-loop damping to exhibit a rate arbitrarily close to the optimal one. … Read more
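For context, the distinction can be sketched in generic notation (not the paper's): inertial gradient dynamics take the form

$$\ddot{x}(t) + \gamma\,\dot{x}(t) + \nabla f(x(t)) = 0,$$

where the damping is open-loop if $\gamma = \gamma(t)$ is a prescribed function of time (e.g., $\gamma(t) = 3/t$ in the ODE associated with Nesterov's method), and closed-loop if $\gamma$ depends on the state, e.g., $\gamma = \gamma(x(t), \dot{x}(t))$, which makes the system autonomous.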

Distributionally robust optimization through the lens of submodularity

Distributionally robust optimization is used to solve decision-making problems under adversarial uncertainty, where the distribution of the uncertainty is itself ambiguous. In this paper, we identify a class of these instances that is solvable in polynomial time by viewing it through the lens of submodularity. We show that the sharpest upper bound on the … Read more
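In generic notation (ours, not necessarily the paper's exact setup), the quantity of interest is a worst-case expectation over an ambiguity set $\mathcal{P}$ of distributions,

$$\sup_{\mathbb{P} \in \mathcal{P}} \ \mathbb{E}_{\mathbb{P}}\!\left[ f(\boldsymbol{x}, \tilde{\boldsymbol{z}}) \right],$$

where $\boldsymbol{x}$ is the decision and $\tilde{\boldsymbol{z}}$ the uncertain parameters; the "sharpest upper bound" referred to in the abstract is such a worst-case value, and the paper's contribution concerns when it can be computed in polynomial time.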

Local Convergence Analysis of an Inexact Trust-Region Method for Nonsmooth Optimization

In [R. J. Baraldi and D. P. Kouri, Mathematical Programming, (2022), pp. 1–40], we introduced an inexact trust-region algorithm for minimizing the sum of a smooth nonconvex function and a nonsmooth convex function in Hilbert space—a class of problems that is ubiquitous in data science, learning, optimal control, and inverse problems. This algorithm has demonstrated … Read more
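Schematically, and in generic notation (ours), the problem class is

$$\min_{x \in \mathcal{H}} \ f(x) + \varphi(x),$$

with $f$ smooth and nonconvex, $\varphi$ nonsmooth and convex, and $\mathcal{H}$ a Hilbert space; each trust-region iteration approximately solves a subproblem of the form

$$\min_{\|s\| \le \Delta_k} \ m_k(s) + \varphi(x_k + s),$$

where $m_k$ is a (possibly inexact) model of $f$ near the iterate $x_k$ and $\Delta_k$ is the trust-region radius.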

Efficient Proximal Subproblem Solvers for a Nonsmooth Trust-Region Method

In [R. J. Baraldi and D. P. Kouri, Mathematical Programming, (2022), pp. 1–40], we introduced an inexact trust-region algorithm for minimizing the sum of a smooth nonconvex function and a nonsmooth convex function. The principal expense of this method is in computing a trial iterate that satisfies the so-called fraction of Cauchy decrease condition—a bound that ensures … Read more
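A common form of such a condition (our generic notation, reconstructed from standard trust-region theory rather than the paper itself) requires the trial step $s_k$ to achieve a fixed fraction of the model decrease attained by a Cauchy step $s_k^{\mathrm{C}}$:

$$(m_k + \varphi)(x_k) - (m_k + \varphi)(x_k + s_k) \;\ge\; \kappa \left[ (m_k + \varphi)(x_k) - (m_k + \varphi)(x_k + s_k^{\mathrm{C}}) \right], \qquad \kappa \in (0, 1],$$

where, in the nonsmooth setting, $s_k^{\mathrm{C}}$ is typically obtained from a proximal gradient step truncated to the trust region.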

From Optimization to Control: Quasi Policy Iteration

Recent control algorithms for Markov decision processes (MDPs) have been designed using an implicit analogy with well-established optimization algorithms. In this paper, we make this analogy explicit across four problem classes with a unified solution characterization. This novel framework, in turn, allows for a systematic transformation of algorithms from one domain to the other. In … Read more
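As a minimal illustration of the optimization–control analogy (a toy sketch with our own setup, not the paper's quasi policy iteration): value iteration is a fixed-point iteration on the Bellman optimality operator, much as gradient descent is a fixed-point iteration on the gradient map, with policy iteration playing the role of a Newton-type method.

```python
import numpy as np

def value_iteration(P, r, gamma, n_iters=500):
    """Value iteration as a fixed-point iteration V <- T(V) on the
    Bellman optimality operator T (toy setup, not the paper's method).

    P: (A, S, S) transition probabilities, r: (A, S) rewards,
    gamma: discount factor in (0, 1).
    """
    _, S = r.shape
    V = np.zeros(S)
    for _ in range(n_iters):
        Q = r + gamma * (P @ V)   # (A, S): one-step lookahead per action
        V = Q.max(axis=0)         # greedy maximization over actions
    return V

# Tiny 2-state, 2-action MDP
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
r = np.array([[1.0, 0.0], [0.0, 2.0]])
print(value_iteration(P, r, gamma=0.9))
```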

Exact Matrix Completion via High-Rank Matrices in Sum-of-Squares Relaxations

We study exact matrix completion from partially available data with hidden connectivity patterns. Cosse and Demanet recently (2021) showed that exact matrix completion is possible via Lasserre's relaxation, using the trace of the variable matrix as the objective function, for data structured in a chain format. In this study, we introduce … Read more
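To fix ideas, here is the simplest trace-objective completion SDP (a generic sketch of ours using cvxpy; the Cosse–Demanet construction and this paper work with Lasserre's relaxation, which is richer than this plain program):

```python
import cvxpy as cp
import numpy as np

# Generic trace-minimization completion: recover a low-rank PSD matrix
# from a subset of its entries (illustrative toy instance).
n = 5
rng = np.random.default_rng(0)
u = rng.standard_normal((n, 1))
M = u @ u.T                                   # rank-1 ground truth
observed = [(i, j) for i in range(n) for j in range(n) if (i + j) % 2 == 0]

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0] + [X[i, j] == M[i, j] for (i, j) in observed]
cp.Problem(cp.Minimize(cp.trace(X)), constraints).solve()
print("recovery error:", np.linalg.norm(X.value - M))
```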

Higher-Order Newton Methods with Polynomial Work per Iteration

We present generalizations of Newton’s method that incorporate derivatives of an arbitrary order \(d\) but maintain a polynomial dependence on dimension in their cost per iteration. At each step, our \(d^{\text{th}}\)-order method uses semidefinite programming to construct and minimize a sum-of-squares-convex approximation to the \(d^{\text{th}}\)-order Taylor expansion of the function we wish … Read more
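Schematically (our notation, reconstructed from the abstract): with \(T_d(x; x_k)\) the \(d^{\text{th}}\)-order Taylor expansion of \(f\) at the iterate \(x_k\), each step solves

$$x_{k+1} \in \arg\min_{x} \ \tilde{T}_d(x; x_k),$$

where \(\tilde{T}_d\) is a sum-of-squares-convex (sos-convex) polynomial approximating \(T_d\); sos-convexity is what allows the minimization to be carried out by semidefinite programming at polynomial cost in the dimension, and \(d = 2\) recovers a Newton-type step.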

Fixed point continuation algorithm with extrapolation for Schatten p-quasi-norm regularized matrix optimization problems

In this paper, we consider a general low-rank matrix optimization problem modeled as a Schatten p-quasi-norm ($0<p<1$) regularized matrix optimization. For this nonconvex, nonsmooth, and non-Lipschitz matrix optimization problem, we first propose a fixed point continuation algorithm with extrapolation (FPCAe), based on the matrix p-thresholding operator, for solving it. Secondly, we … Read more
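For intuition, a matrix p-thresholding operator acts on the singular values of its argument. The sketch below (our naming; a brute-force grid search standing in for the scalar operator, for which closed forms are known only for special values such as $p=1/2$ or $p=2/3$) illustrates the structure, not the paper's implementation:

```python
import numpy as np

def scalar_p_threshold(y, lam, p, num=1001):
    """Brute-force scalar p-thresholding:
    argmin_x 0.5*(x - y)**2 + lam*|x|**p for 0 < p < 1.
    Grid search for illustration only (hypothetical helper).
    """
    grid = np.linspace(0.0, abs(y), num)
    vals = 0.5 * (grid - abs(y)) ** 2 + lam * grid ** p
    return np.sign(y) * grid[np.argmin(vals)]

def matrix_p_threshold(Y, lam, p):
    """Matrix p-thresholding: apply the scalar operator to singular values."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_thr = np.array([scalar_p_threshold(si, lam, p) for si in s])
    return U @ np.diag(s_thr) @ Vt
```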

DC programming approach for solving a class of bilevel partial facility interdiction problems

We propose a new approach based on DC programming for finding a solution of the partial facility interdiction problem, which belongs to the class of bilevel programming. This model was first considered in the work of Aksen et al. [1] with a heuristic algorithm named multi-start simplex search (MSS). However, because of the large number of … Read more
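For readers unfamiliar with DC programming, the generic scheme (standard DCA, not necessarily the exact decomposition used in the paper) writes the objective as a difference of convex functions,

$$\min_x \ f(x) = g(x) - h(x), \qquad g,\, h \ \text{convex},$$

and iterates $y^k \in \partial h(x^k)$, $x^{k+1} \in \arg\min_x \{\, g(x) - \langle y^k, x \rangle \,\}$; that is, each step minimizes the convex majorant of $f$ obtained by linearizing the concave part $-h$ at the current iterate.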

Continuous exact relaxation and alternating proximal gradient algorithm for partial sparse and partial group sparse optimization problems

In this paper, we consider a partial sparse and partial group sparse optimization problem, where the loss function is continuously differentiable (possibly nonconvex) and the penalty term consists of two parts associated with sparsity and group sparsity: the first part is the $\ell_0$ norm of ${\bf x}$, and the second part is the $\ell_{2,0}$ … Read more
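One natural reading of such a model (our notation for the block partition; the full abstract pins down the exact form) is

$$\min_{{\bf x} = ({\bf y},\, {\bf z})} \ f({\bf x}) + \lambda_1 \|{\bf y}\|_0 + \lambda_2 \|{\bf z}\|_{2,0}, \qquad \|{\bf z}\|_{2,0} := \#\{\, i : \|{\bf z}_{G_i}\|_2 \ne 0 \,\},$$

where $f$ is the loss, $\lambda_1, \lambda_2 > 0$ are penalty parameters, and the groups $G_i$ partition the coordinates of the group-sparse block ${\bf z}$.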