Accelerated first-order methods for a class of semidefinite programs

This paper introduces a new storage-optimal first-order method (FOM), CertSDP, for solving a special class of semidefinite programs (SDPs) to high accuracy. The class of SDPs that we consider, the exact QMP-like SDPs, is characterized by low-rank solutions, a priori knowledge of the restriction of the SDP solution to a small subspace, and standard … Read more
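For orientation only, the sketch below sets up a tiny SDP with a rank-one solution in cvxpy; it illustrates the kind of low-rank structure the abstract refers to and is not the CertSDP method (the cost matrix C and the trace normalization are illustrative assumptions).

    # Illustrative only: a small SDP whose maximizer is rank one; NOT the CertSDP algorithm.
    import numpy as np
    import cvxpy as cp

    n = 5
    rng = np.random.default_rng(0)
    c = rng.standard_normal(n)
    C = np.outer(c, c)                     # rank-one cost matrix (illustrative choice)

    X = cp.Variable((n, n), PSD=True)      # X is positive semidefinite
    prob = cp.Problem(cp.Maximize(cp.trace(C @ X)), [cp.trace(X) == 1])
    prob.solve()

    # The maximizer is X* = c c^T / ||c||^2, i.e. rank one -- the low-rank
    # solution structure mentioned in the abstract.
    print(np.round(np.linalg.eigvalsh(X.value)[-3:], 4))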

Metrizing Fairness

We study supervised learning problems for predicting properties of individuals who belong to one of two demographic groups, and we seek predictors that are fair according to statistical parity. This means that the distributions of the predictions within the two groups should be close with respect to the Kolmogorov distance, and fairness is achieved by … Read more
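A minimal sketch of the fairness metric named in the abstract, assuming only that the predictions for the two groups are given as real-valued samples (the data below is synthetic): the Kolmogorov distance is the largest gap between the two empirical CDFs.

    # Empirical Kolmogorov distance between two groups' prediction distributions.
    import numpy as np

    def kolmogorov_distance(pred_a, pred_b):
        """Sup-norm gap between the empirical CDFs of two samples."""
        grid = np.sort(np.concatenate([pred_a, pred_b]))
        cdf_a = np.searchsorted(np.sort(pred_a), grid, side="right") / len(pred_a)
        cdf_b = np.searchsorted(np.sort(pred_b), grid, side="right") / len(pred_b)
        return np.max(np.abs(cdf_a - cdf_b))

    rng = np.random.default_rng(1)
    group_a = rng.normal(0.0, 1.0, size=500)   # synthetic predictions, group A
    group_b = rng.normal(0.3, 1.0, size=400)   # synthetic predictions, group B
    print(kolmogorov_distance(group_a, group_b))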

Complexity-optimal and Parameter-free First-order Methods for Finding Stationary Points of Composite Optimization Problems

This paper develops and analyzes an accelerated proximal descent method for finding stationary points of nonconvex composite optimization problems. The objective function is of the form f+h where h is a proper closed convex function, f is a differentiable function on the domain of h, and ∇f is Lipschitz continuous on the domain of h. … Read more
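As a reference point for the composite model, here is a plain proximal-gradient step with the illustrative choices f(x) = 0.5*||Ax - b||^2 and h = lam*||x||_1; the paper's accelerated, parameter-free scheme is not reproduced here.

    # One proximal-gradient step for f + h (vanilla method, for notation only).
    import numpy as np

    def grad_f(x, A, b):
        return A.T @ (A @ x - b)            # gradient of f(x) = 0.5 * ||A x - b||^2

    def prox_h(v, step, lam):
        return np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)   # soft-thresholding

    def prox_grad_step(x, step, A, b, lam):
        return prox_h(x - step * grad_f(x, A, b), step, lam)

    rng = np.random.default_rng(5)
    A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of grad f
    x = np.zeros(10)
    for _ in range(200):
        x = prox_grad_step(x, step, A, b, lam=0.1)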

A Reduced Jacobian Scheme with Full Convergence for Multicriteria Optimization

In this paper, we propose a variant of the reduced Jacobian method (RJM) introduced by El Maghri and Elboulqe in [JOTA, 179 (2018) 917–943] for multicriteria optimization under linear constraints. The motivation is that, in contrast to RJM, which has only global convergence to Pareto KKT-stationary points in the classical sense of accumulation points, this new variant … Read more
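For readers outside multicriteria optimization, a textbook form of Pareto KKT stationarity for a linearly constrained problem $\min_x (f_1(x),\dots,f_m(x))$ subject to $Ax\leq b$ is sketched below; the precise conditions used in the paper may differ in detail.

\[
\exists\,\lambda\in\mathbb{R}^m_{+},\ \mu\in\mathbb{R}^p_{+},\quad \sum_{i=1}^m\lambda_i=1,\qquad
\sum_{i=1}^m\lambda_i\nabla f_i(x^*) + A^{\top}\mu = 0,\qquad \mu^{\top}(Ax^*-b)=0.
\]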

On complexity constants of linear and quadratic models for derivative-free trust-region algorithms

Complexity analysis has become an important tool in the convergence analysis of optimization algorithms, and derivative-free optimization is no exception. Interestingly, several constants that appear when developing complexity results hide the dimensions of the problem. This work organizes several results from the literature on bounds that appear in derivative-free trust-region algorithms based on linear … Read more
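To make the objects concrete, the sketch below fits a linear interpolation model m(x) = c + g^T x from n+1 sample points, the simplest of the models whose error constants such bounds describe; the sample set and test function are illustrative assumptions.

    # Linear interpolation model from n+1 poised sample points (illustration only).
    import numpy as np

    def linear_model(points, values):
        """Fit m(x) = c + g @ x interpolating the sample set."""
        P = np.hstack([np.ones((len(points), 1)), points])   # rows [1, x_i^T]
        coef = np.linalg.solve(P, values)                     # requires a poised sample set
        return coef[0], coef[1:]                              # intercept c, model gradient g

    def f(x):
        return np.sin(x).sum()

    n = 3
    pts = np.vstack([np.zeros(n), np.eye(n)])                 # sample set {0, e_1, ..., e_n}
    c, g = linear_model(pts, np.array([f(p) for p in pts]))
    print(g)                                                  # crude, derivative-free gradient estimate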

A Constraint Dissolving Approach for Nonsmooth Optimization over the Stiefel Manifold

This paper focuses on the minimization of a possibly nonsmooth objective function over the Stiefel manifold. The existing approaches either lack efficiency or can only tackle prox-friendly objective functions. We propose a constraint dissolving function named NCDF and show that it has the same first-order stationary points and local minimizers as the original problem in … Read more
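For context, the feasible set in question is the Stiefel manifold St(n, p) = {X : X^T X = I_p}; the sketch below only illustrates feasibility via a polar retraction and does not reproduce the NCDF function proposed in the paper.

    # Stiefel manifold feasibility via the polar retraction (not the paper's NCDF).
    import numpy as np

    def polar_retraction(Y):
        """Nearest matrix with orthonormal columns (polar factor of Y)."""
        U, _, Vt = np.linalg.svd(Y, full_matrices=False)
        return U @ Vt

    rng = np.random.default_rng(3)
    Y = rng.standard_normal((6, 2))
    X = polar_retraction(Y)
    print(np.allclose(X.T @ X, np.eye(2)))   # True: X lies on St(6, 2)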

First- and Second-Order High Probability Complexity Bounds for Trust-Region Methods with Noisy Oracles

In this paper, we present convergence guarantees for a modified trust-region method designed for minimizing objective functions whose values are computed with noise and for which gradient and Hessian estimates are inexact and possibly random. To account for the noise, the method utilizes a relaxed step acceptance criterion and a cautious trust-region radius … Read more
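A hedged sketch of what a relaxed, noise-aware acceptance test can look like; the threshold eta and the additive noise allowance are illustrative assumptions, and the paper's exact criterion may differ.

    # Relaxed actual-vs-predicted reduction test with a noise allowance (illustrative).
    def accept_step(f_old, f_new, model_decrease, noise_level, eta=0.1):
        """Accept the trial step if the noise-padded reduction ratio is large enough."""
        rho = (f_old - f_new + noise_level) / max(model_decrease, 1e-16)
        return rho >= eta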

Integral Global Optimality Conditions and an Algorithm for Multiobjective Problems

In this work, we propose integral global optimality conditions for multiobjective problems that are not necessarily differentiable. The integral characterization, already known for single-objective problems, is extended to multiobjective problems via weighted-sum and Chebyshev weighted scalarizations. Using the latter scalarization, we propose an algorithm for obtaining an approximation of the weak Pareto front whose effectiveness … Read more
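For reference, the weighted Chebyshev scalarization mentioned above collapses the objective vector to max_i w_i (f_i(x) - z_i) for weights w and an ideal point z; the snippet below only evaluates this scalarization, not the paper's integral conditions or algorithm.

    # Weighted Chebyshev scalarization of an objective vector (illustration only).
    import numpy as np

    def chebyshev_scalarization(fvals, weights, z_ideal):
        return np.max(np.asarray(weights) * (np.asarray(fvals) - np.asarray(z_ideal)))

    # two objectives evaluated at one candidate point
    print(chebyshev_scalarization([1.2, 0.7], weights=[0.5, 0.5], z_ideal=[0.0, 0.0]))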

Small polygons with large area

A polygon is {\em small} if it has unit diameter. The maximal area of a small polygon with a fixed number of sides $n$ is not known when $n$ is even and $n\geq14$. We determine an improved lower bound for the maximal area of a small $n$-gon for this case. The improvement affects the $1/n^3$ … Read more
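The two quantities being traded off are easy to compute for any candidate polygon; the check below uses the regular hexagon scaled to unit diameter (known not to be area-maximal for $n=6$) purely as an illustration, not the paper's construction.

    # Area (shoelace formula) and diameter of a polygon given its vertices.
    import numpy as np
    from itertools import combinations

    def polygon_area(V):
        x, y = V[:, 0], V[:, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

    def polygon_diameter(V):
        return max(np.linalg.norm(p - q) for p, q in combinations(V, 2))

    t = 2 * np.pi * np.arange(6) / 6
    V = 0.5 * np.column_stack([np.cos(t), np.sin(t)])   # regular hexagon, unit diameter
    print(polygon_diameter(V), polygon_area(V))         # ~1.0, ~0.6495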

Accelerating nuclear-norm regularized low-rank matrix optimization through Burer-Monteiro decomposition

This work proposes a rapid algorithm, BM-Global, for nuclear-norm-regularized convex and low-rank matrix optimization problems. BM-Global efficiently decreases the objective value via low-cost steps leveraging the nonconvex but smooth Burer-Monteiro (BM) decomposition, while effectively escaping the saddle points and spurious local minima ubiquitous in the BM form to obtain guarantees of fast convergence rates to the … Read more
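A minimal sketch of the Burer-Monteiro form only (not the BM-Global algorithm): parameterize X = U V^T and use the identity ||X||_* = min over factorizations of (||U||_F^2 + ||V||_F^2)/2 as a smooth surrogate for the nuclear norm; the dimensions below are arbitrary.

    # Nuclear norm vs. its Burer-Monteiro surrogate for a random factorization.
    import numpy as np

    rng = np.random.default_rng(4)
    m, n, r = 8, 6, 2
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    X = U @ V.T

    nuclear_norm = np.linalg.norm(X, ord="nuc")
    bm_surrogate = 0.5 * (np.linalg.norm(U, "fro") ** 2 + np.linalg.norm(V, "fro") ** 2)
    print(nuclear_norm <= bm_surrogate + 1e-9)   # True; equality at a balanced factorization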