Exact Matrix Completion via High-Rank Matrices in Sum-of-Squares Relaxations

We study exact matrix completion from partially available data with hidden connectivity patterns. Cosse and Demanet (2021) recently showed that exact matrix completion is possible with Lasserre’s relaxation, using the trace of the variable matrix as the objective function, when the given data are structured in a chain format. In this study, we introduce … Read more
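The paper's Lasserre/trace-minimization approach requires a semidefinite programming solver. As a lightweight stand-in for the underlying low-rank completion task, the following is a minimal sketch of the well-known singular-value thresholding (SVT) iteration of Cai, Candès, and Shen — a different, simpler method than the one studied in the paper, shown only to make the matrix completion setting concrete (the matrix `M`, mask, and parameter values below are illustrative):

```python
import numpy as np

def svt_complete(M_obs, mask, tau=5.0, step=1.0, iters=1000):
    """Singular-value thresholding for low-rank matrix completion.

    M_obs: observed matrix (zeros where unobserved); mask: boolean array
    of observed entries. Illustrative only -- not the paper's SDP method.
    """
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
        Y += step * mask * (M_obs - X)                  # dual step on observed entries
    return X
```

On a rank-1 matrix with a majority of entries observed, the iterates fit the observed entries while staying low rank.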

Higher-Order Newton Methods with Polynomial Work per Iteration

We present generalizations of Newton’s method that incorporate derivatives of an arbitrary order \(d\) but maintain a polynomial dependence on dimension in their cost per iteration. At each step, our \(d^{\text{th}}\)-order method uses semidefinite programming to construct and minimize a sum-of-squares-convex approximation to the \(d^{\text{th}}\)-order Taylor expansion of the function we wish … Read more

Fixed point continuation algorithm with extrapolation for Schatten p-quasi-norm regularized matrix optimization problems

In this paper, we consider a general low-rank matrix optimization problem modeled as a Schatten $p$-quasi-norm ($0<p<1$) regularized matrix optimization. For this nonconvex, nonsmooth, and non-Lipschitz matrix optimization problem, based on the matrix $p$-thresholding operator, we first propose a fixed point continuation algorithm with extrapolation (FPCAe) for solving it. Secondly, we … Read more
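The matrix $p$-thresholding operator applies the scalar proximal map of $\lambda |t|^p$ to each singular value. The paper works with an efficient form of this operator; the sketch below evaluates the scalar prox by brute-force grid search purely for illustration (the function names and grid resolution are my own choices, not the paper's):

```python
import numpy as np

def scalar_p_prox(s, lam, p, grid_size=10000):
    """Prox of lam*|t|^p at s >= 0, by dense grid search on [0, s].
    (The minimizer always lies in [0, s]; brute force is for clarity only.)"""
    t = np.linspace(0.0, max(s, 1e-12), grid_size)
    obj = 0.5 * (t - s) ** 2 + lam * t ** p
    return t[np.argmin(obj)]

def matrix_p_threshold(Y, lam, p):
    """Matrix p-thresholding: apply the scalar prox to each singular value."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_new = np.array([scalar_p_prox(si, lam, p) for si in s])
    return U @ np.diag(s_new) @ Vt
```

Small singular values are set exactly to zero while large ones are only mildly shrunk, which is what makes the quasi-norm ($p<1$) attractive for low-rank recovery.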

DC programming approach for solving a class of bilevel partial facility interdiction problems

We propose a new approach based on DC programming for finding a solution of the partial facility interdiction problem, which belongs to the class of bilevel programming. This model was first considered in the work of Aksen et al. [1] with a heuristic algorithm named multi-start simplex search (MSS). However, because of the large number of … Read more
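The generic DC algorithm (DCA) underlying such approaches writes the objective as a difference of convex functions, $f = g - h$, and repeatedly minimizes $g$ after linearizing $h$ at the current point. A minimal sketch on a toy one-dimensional DC function (not the paper's bilevel interdiction model, whose subproblems are far more structured):

```python
import numpy as np

def dca_minimize(x0, grad_h, argmin_g_linear, iters=100):
    """Generic DCA for f = g - h with g, h convex:
    x_{k+1} = argmin_x g(x) - <grad_h(x_k), x>."""
    x = x0
    for _ in range(iters):
        x = argmin_g_linear(grad_h(x))
    return x

# Toy DC function f(x) = x^4 - x^2, split as g(x) = x^4, h(x) = x^2.
# The subproblem argmin_x x^4 - c*x has the closed form x = sign(c)*(|c|/4)^(1/3).
grad_h = lambda x: 2.0 * x
argmin_g = lambda c: np.sign(c) * (abs(c) / 4.0) ** (1.0 / 3.0)
```

Starting from $x_0 = 1$, the iterates converge to $x^\* = 1/\sqrt{2}$, the global minimizer of $x^4 - x^2$ on the positive axis.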

Continuous exact relaxation and alternating proximal gradient algorithm for partial sparse and partial group sparse optimization problems

In this paper, we consider a partial sparse and partial group sparse optimization problem, where the loss function is a continuously differentiable function (possibly nonconvex), and the penalty term consists of two parts associated with sparsity and group sparsity. The first part is the $\ell_0$ norm of ${\bf x}$; the second part is the $\ell_{2,0}$ … Read more
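Proximal gradient steps for such penalties reduce to hard-thresholding rules: the prox of $\lambda\|x\|_0$ keeps an entry iff $|x_i| > \sqrt{2\lambda}$, and the prox of $\lambda\|x\|_{2,0}$ keeps a group iff its Euclidean norm exceeds $\sqrt{2\lambda}$. A minimal sketch of the two operators (the alternating scheme and the exact partial-variable split in the paper are not reproduced here):

```python
import numpy as np

def prox_l0(x, lam):
    """Prox of lam*||x||_0: keep entries with |x_i| > sqrt(2*lam)."""
    return np.where(np.abs(x) > np.sqrt(2.0 * lam), x, 0.0)

def prox_l20(x, groups, lam):
    """Prox of lam*||x||_{2,0}: keep a group iff its norm exceeds sqrt(2*lam)."""
    out = np.zeros_like(x)
    for g in groups:
        if np.linalg.norm(x[g]) > np.sqrt(2.0 * lam):
            out[g] = x[g]
    return out
```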

Efficient Computation of the Approximation Quality in Sandwiching Algorithms

Computing the approximation quality is a crucial step in every iteration of Sandwiching algorithms (also called Benson-type algorithms) used for the approximation of convex Pareto fronts, sets or functions. Two quality indicators often used in these algorithms are the polyhedral gauge and the epsilon indicator. In this article, we develop an algorithm to compute the polyhedral gauge … Read more

Analysis of a Class of Minimization Problems Lacking Lower Semicontinuity

The minimization of non-lower semicontinuous functions is a difficult topic that has been minimally studied. Among such functions is a Heaviside composite function that is the composition of a Heaviside function with a possibly nonsmooth multivariate function. Unifying a statistical estimation problem with hierarchical selection of variables and a sample average approximation of composite chance … Read more

Preconditioning for Generalized Jacobians with the ω-Condition Number

Preconditioning is essential in iterative methods for solving linear systems of equations. We study a nonclassic matrix condition number, the ω-condition number, in the context of optimal conditioning for low rank updating of positive definite matrices. For a positive definite matrix, this condition measure is the ratio of the arithmetic and geometric means of the … Read more
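The ω-condition number described above is straightforward to compute from the spectrum. A minimal sketch for a symmetric positive definite matrix (contrast it with the classical condition number, the ratio of extreme eigenvalues):

```python
import numpy as np

def omega_condition(A):
    """omega-condition number of a symmetric positive definite matrix:
    ratio of the arithmetic mean to the geometric mean of the eigenvalues."""
    lam = np.linalg.eigvalsh(A)
    return lam.mean() / np.exp(np.log(lam).mean())
```

By the AM-GM inequality the value is always at least 1, with equality exactly for multiples of the identity; e.g. `diag(1, 4)` gives $2.5/2 = 1.25$, while its classical condition number is 4.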

Range of the displacement operator of PDHG with applications to quadratic and conic programming

Primal-dual hybrid gradient (PDHG) is a first-order method for saddle-point problems and convex programming introduced by Chambolle and Pock. Recently, Applegate et al. analyzed the behavior of PDHG when applied to an infeasible or unbounded instance of linear programming, and in particular, showed that PDHG is able to diagnose these conditions. Their analysis hinges on … Read more
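The basic Chambolle-Pock PDHG iteration alternates a proximal primal step, an extrapolation, and a proximal dual step. A minimal sketch on a toy instance, $\min_x \tfrac12\|x\|^2 + \|Kx-b\|_1$, chosen because both proximal maps are closed-form (the paper studies PDHG's displacement operator on quadratic and conic programs, not this particular problem):

```python
import numpy as np

def pdhg(K, b, tau, sigma, iters=500):
    """Chambolle-Pock PDHG for min_x 0.5*||x||^2 + ||K x - b||_1.
    Here g(x) = 0.5*||x||^2, f(z) = ||z - b||_1, and
    f*(y) = <b, y> + indicator(||y||_inf <= 1), whose prox is a shifted clip.
    Requires tau*sigma*||K||^2 <= 1 for convergence."""
    m, n = K.shape
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        x_new = (x - tau * K.T @ y) / (1.0 + tau)  # prox of tau*g
        x_bar = 2.0 * x_new - x                    # extrapolation
        y = np.clip(y + sigma * (K @ x_bar) - sigma * b, -1.0, 1.0)  # prox of sigma*f*
        x = x_new
    return x
```

With `K` the identity and `b` the all-ones vector, the saddle point is $x^\* = b$, $y^\* = -b$, and the iterates reach it quickly.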

Error estimate for regularized optimal transport problems via Bregman divergence

Regularization by the Shannon entropy enables us to efficiently and approximately solve optimal transport problems on a finite set. This paper is concerned with regularized optimal transport problems via Bregman divergence. We introduce the required properties for Bregman divergences, provide a non-asymptotic error estimate for the regularized problem, and show that the error estimate becomes … Read more
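In the Shannon-entropy special case, the regularized discrete problem is solved by the classical Sinkhorn iterations: alternately rescale the rows and columns of a Gibbs kernel until both marginals match. A minimal sketch (the paper's Bregman-divergence framework generalizes this entropy case; cost matrix, marginals, and regularization strength below are illustrative):

```python
import numpy as np

def sinkhorn(C, a, b, eps, iters=500):
    """Entropy-regularized optimal transport between histograms a and b
    with cost matrix C, solved by Sinkhorn matrix scaling."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                # scale columns to match b
        u = a / (K @ v)                  # scale rows to match a
    return u[:, None] * K * v[None, :]   # transport plan
```

For a small regularization `eps` the plan concentrates on the cheapest assignment, illustrating how the regularized solution approximates the unregularized optimal transport plan.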