A quadratically convergent Newton method for vector optimization

We propose a Newton method for solving smooth unconstrained vector optimization problems under partial orders induced by general closed convex pointed cones. The method extends the one proposed by Fliege, Grana Drummond and Svaiter for multicriteria, which in turn is an extension of the classical Newton method for scalar optimization. The steplength is chosen by …
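
For intuition, the sketch below solves the direction-finding subproblem of the multicriteria Newton method of Fliege, Grana Drummond and Svaiter (the partial order given by the nonnegative orthant, not the general cone setting of this paper): the Newton direction minimizes the worst of the objectives' second-order models. The two quadratic objectives, the evaluation point, and the use of SciPy's SLSQP solver are illustrative assumptions only.

# Direction-finding subproblem:  min_d  max_i  g_i^T d + 0.5 d^T H_i d,
# posed in epigraph form (min t subject to each quadratic model <= t).
import numpy as np
from scipy.optimize import minimize

def newton_direction(grads, hessians):
    n, m = grads[0].size, len(grads)

    def model(i, d):  # second-order model of objective i along direction d
        return grads[i] @ d + 0.5 * d @ hessians[i] @ d

    cons = [{"type": "ineq", "fun": (lambda z, i=i: z[-1] - model(i, z[:-1]))}
            for i in range(m)]
    res = minimize(lambda z: z[-1], np.zeros(n + 1), constraints=cons, method="SLSQP")
    return res.x[:-1], res.x[-1]          # direction d and optimal value theta(x)

# Toy bicriteria instance: f1(x) = ||x||^2, f2(x) = ||x - e1||^2, evaluated at x = (2, 1).
x = np.array([2.0, 1.0])
grads = [2.0 * x, 2.0 * (x - np.array([1.0, 0.0]))]
hessians = [2.0 * np.eye(2), 2.0 * np.eye(2)]
d, theta = newton_direction(grads, hessians)
print("direction:", d, "theta:", theta)   # theta < 0 whenever x is not Pareto critical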

Manifold Identification in Dual Averaging for Regularized Stochastic Online Learning

Iterative methods that calculate their steps from approximate subgradient directions have proved to be useful for stochastic learning problems over large and streaming data sets. When the objective consists of a loss function plus a nonsmooth regularization term, the solution often lies on a low-dimensional manifold of parameter space along which the regularizer is smooth. …
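
As a purely illustrative instance of this setting, the sketch below runs l1-regularized dual averaging, in the style of Xiao's RDA method, on a streaming least-squares problem; the point is that coordinates whose averaged gradient stays below the regularization level are set exactly to zero, so the iterates land on the sparsity manifold the abstract refers to. The problem data, step-size constant gamma, and sample size are assumptions made for the demo.

import numpy as np

rng = np.random.default_rng(0)
n, lam, gamma = 20, 0.1, 10.0
w_true = np.zeros(n)
w_true[:3] = [1.0, -2.0, 0.5]               # sparse ground truth

w = np.zeros(n)
gbar = np.zeros(n)                          # running average of loss gradients
for t in range(1, 3001):
    a = rng.normal(size=n)                  # one streaming sample (a, y)
    y = a @ w_true + 0.01 * rng.normal()
    g = (a @ w - y) * a                     # gradient of the loss 0.5*(a^T w - y)^2
    gbar += (g - gbar) / t
    beta = gamma * np.sqrt(t)
    # Dual-averaging step with l1 regularizer: closed-form soft threshold.
    w = -(t / beta) * np.sign(gbar) * np.maximum(np.abs(gbar) - lam, 0.0)

print("nonzero coordinates:", np.nonzero(w)[0])   # typically the true support {0, 1, 2}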

On the O(1/t) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators

Recently, Nemirovski’s analysis indicated that the extragradient method has an $O(1/t)$ convergence rate for variational inequalities with Lipschitz continuous monotone operators. For the same problems, over the last decades, we have developed a class of Fejér monotone projection and contraction methods. Until now, only convergence results have been available for these projection and contraction methods, though …
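
To fix ideas, here is a bare-bones projection and contraction iteration in the spirit of He's method, written for a monotone affine operator over the nonnegative orthant; the particular variant, step-size rule, and the $O(1/t)$ analysis discussed in the paper are not reproduced here, and the operator and parameters below are arbitrary choices for the demo.

import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n))
M = A - A.T + np.eye(n)                      # monotone: skew-symmetric plus positive definite part
q = rng.normal(size=n)
F = lambda u: M @ u + q
proj = lambda u: np.maximum(u, 0.0)          # projection onto the nonnegative orthant

u = np.ones(n)
beta, gamma = 0.1, 1.8
for k in range(500):
    u_tilde = proj(u - beta * F(u))          # prediction (projection step)
    if np.allclose(u, u_tilde):
        break                                # u (approximately) solves the VI
    d = (u - u_tilde) - beta * (F(u) - F(u_tilde))
    alpha = (u - u_tilde) @ d / (d @ d)      # step length that yields Fejer contraction
    u = u - gamma * alpha * d                # correction (contraction step)

print("residual:", np.linalg.norm(u - proj(u - F(u))))   # ~0 at a solution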

Iteration Complexity of Randomized Block-Coordinate Descent Methods for Minimizing a Composite Function

In this paper we develop a randomized block-coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth block-separable convex function and prove that it obtains an $\epsilon$-accurate solution with probability at least $1-\rho$ in at most $O(\tfrac{n}{\epsilon} \log \tfrac{1}{\rho})$ iterations, where $n$ is the number of blocks. For strongly convex functions …
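
A minimal sketch of the kind of method analyzed here, specialized (as an assumption for the demo) to the composite problem 0.5*||Ax - b||^2 + lam*||x||_1 with single-coordinate blocks: each iteration draws one block uniformly at random and applies a prox-gradient update with that block's Lipschitz constant.

import numpy as np

rng = np.random.default_rng(2)
m, n, lam = 40, 10, 0.1
A = rng.normal(size=(m, n))
b = rng.normal(size=m)
L = (A ** 2).sum(axis=0)              # per-coordinate Lipschitz constants L_i = ||A[:, i]||^2

def soft(z, tau):                     # prox operator of tau*|.|
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

x = np.zeros(n)
r = A @ x - b                         # maintained residual Ax - b
for k in range(5000):
    i = rng.integers(n)               # pick a block (coordinate) uniformly at random
    g_i = A[:, i] @ r                 # partial gradient of the smooth part
    x_new = soft(x[i] - g_i / L[i], lam / L[i])
    r += A[:, i] * (x_new - x[i])     # keep the residual consistent with the update
    x[i] = x_new

print("objective:", 0.5 * r @ r + lam * np.abs(x).sum())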

How to generate weakly infeasible semidefinite programs via Lasserre’s relaxations for polynomial optimization

Examples of weakly infeasible semidefinite programs are useful to test whether semidefinite solvers can detect infeasibility. However, nontrivial examples of this kind are notoriously difficult to find. This note shows how to use Lasserre’s semidefinite programming relaxations for polynomial optimization in order to generate examples of weakly infeasible semidefinite programs. Such examples could be used to test …
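
For readers unfamiliar with the phenomenon, the snippet below checks numerically a classical textbook instance of weak infeasibility (it is not produced by the Lasserre-based construction of the note): asking for a 2x2 PSD matrix X with X[0,0] = 0 and X[0,1] = 1 is infeasible, yet PSD matrices come arbitrarily close to satisfying both constraints.

import numpy as np

for eps in [1.0, 1e-2, 1e-4, 1e-6]:
    X = np.array([[eps, 1.0], [1.0, 1.0 / eps]])   # satisfies X[0,1] = 1 exactly
    lam_min = np.linalg.eigvalsh(X).min()          # ~0 up to rounding, so X is (nearly) PSD
    print(f"eps={eps:.0e}  min eigenvalue={lam_min: .1e}  violation |X[0,0]|={X[0,0]:.0e}")

# As eps -> 0, X stays on the boundary of the PSD cone while X[0,1] = 1 and
# X[0,0] -> 0: the affine constraints and the cone are at distance zero but have
# no common point, which is exactly weak infeasibility.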

Robust solutions of optimization problems affected by uncertain probabilities

In this paper we focus on robust linear optimization problems with uncertainty regions defined by phi-divergences (for example, chi-squared, Hellinger, Kullback-Leibler). We show how uncertainty regions based on phi-divergences arise in a natural way as confidence sets if the uncertain parameters contain elements of a probability vector. Such problems frequently occur in, for example, optimization …
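
To make the confidence-set interpretation concrete, the sketch below builds a Kullback-Leibler uncertainty region around an empirical probability vector using the usual asymptotic chi-square calibration of the radius; the nominal distribution, sample size, and constants are illustrative, and the paper's exact constructions may differ.

import numpy as np
from scipy.stats import chi2

def kl_divergence(p, q):                       # phi-divergence with phi(t) = t*log(t) - t + 1
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

rng = np.random.default_rng(3)
q = np.array([0.2, 0.5, 0.3])                  # nominal (true) distribution
N = 200
counts = rng.multinomial(N, q)                 # one sample of size N
p_hat = counts / N                             # empirical frequencies

m, alpha = len(q), 0.05
phi_dd = 1.0                                   # phi''(1) for the KL divergence
rho = phi_dd / (2 * N) * chi2.ppf(1 - alpha, df=m - 1)   # radius of the uncertainty region

d = kl_divergence(p_hat, q)
print(f"KL(p_hat, q) = {d:.4f}, radius rho = {rho:.4f}, inside: {d <= rho}")

Asymptotically, 2N/phi''(1) times the divergence between the empirical and the true probability vector follows a chi-square distribution with m-1 degrees of freedom, which is what justifies treating such a phi-divergence ball as a confidence region.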

A lower bound on the optimal self-concordance parameter of convex cones

Let $K \subset \mathbb R^n$ be a regular convex cone, let $e_1,\dots,e_n \in \partial K$ be linearly independent points on the boundary of a compact affine section of the cone, and let $x^* \in K^o$ be a point in the relative interior of this section. For $k = 1,\dots,n$, let $l_k$ be the line through …

An Optimal Algorithm for Constrained Differentiable Convex Optimization

We describe three algorithms for solving differentiable convex optimization problems constrained to simple sets in $\mathbb R^n$, i.e., sets on which it is easy to project an arbitrary point. The first two algorithms are optimal in the sense that they achieve an absolute precision of $\varepsilon$ in relation to the optimal value …
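
The following sketch shows a generic accelerated projected-gradient scheme of the Nesterov/FISTA type for a convex quadratic over a box, one of the "simple sets" the abstract has in mind, since projecting onto it is just componentwise clipping. It is meant only to convey the flavor of such methods, not to reproduce the three algorithms or the optimality analysis of the paper; the quadratic and the box are assumptions for the demo.

import numpy as np

rng = np.random.default_rng(4)
n = 30
B = rng.normal(size=(n, n))
Q = B.T @ B + np.eye(n)                  # positive definite Hessian
c = rng.normal(size=n)
f = lambda x: 0.5 * x @ Q @ x + c @ x
grad = lambda x: Q @ x + c
L = np.linalg.eigvalsh(Q).max()          # Lipschitz constant of the gradient
proj = lambda x: np.clip(x, -1.0, 1.0)   # projection onto the box [-1, 1]^n

x = x_prev = np.zeros(n)
t = t_prev = 1.0
for k in range(300):
    y = x + (t_prev - 1.0) / t * (x - x_prev)     # momentum/extrapolation step
    x_prev, x = x, proj(y - grad(y) / L)          # projected gradient step at y
    t_prev, t = t, 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))

print("f(x) =", f(x))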

Implementation of a block-decomposition algorithm for solving large-scale conic semidefinite programming problems

In this paper, we consider block-decomposition first-order methods for solving large-scale conic semidefinite programming problems. Several ingredients are introduced to speed up the method in its pure form, such as: an aggressive choice of stepsize for performing the extragradient step; use of scaled inner products in the primal and dual spaces; dynamic update of the scaled …
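
To illustrate one of the listed ingredients in isolation, the toy sketch below takes extragradient steps with respect to a scaled inner product <u, v>_D = u^T D v, with a diagonal D that weights the primal and dual spaces differently, on an unconstrained bilinear saddle point. The actual block-decomposition method and its dynamic scaling updates are considerably more elaborate; all data and parameters below are made up for the demo.

import numpy as np

rng = np.random.default_rng(5)
m, n = 3, 6
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

def F(z):                                    # monotone operator of min_x max_y 0.5||x||^2 + y^T(Ax - b)
    x, y = z[:n], z[n:]
    return np.concatenate([x + A.T @ y, b - A @ x])

d = np.concatenate([np.ones(n), 4.0 * np.ones(m)])   # diagonal of D: primal vs. dual weights
beta = 0.2
z = np.zeros(n + m)
for k in range(3000):
    z_half = z - beta * F(z) / d             # extragradient prediction (step in the D-metric)
    z = z - beta * F(z_half) / d             # correction using the operator at the predicted point

print("stationarity:", np.linalg.norm(F(z)))  # ~0 at the saddle point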

An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods

This paper presents an accelerated variant of the hybrid proximal extragradient (HPE) method for convex optimization, referred to as the accelerated HPE (A-HPE) method. Iteration-complexity results are established for the A-HPE method, as well as for a special version of it in which a large stepsize condition is imposed. Two specific implementations of the A-HPE method are …
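
As background, the sketch below performs plain (non-accelerated) hybrid proximal extragradient steps for a smooth convex quadratic: each prox subproblem is solved only until a relative-error test with tolerance sigma holds, after which an extragradient-style update is taken. The A-HPE method of the paper adds acceleration on top of this template; the quadratic, sigma, and the inner gradient solver are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(6)
n = 20
B = rng.normal(size=(n, n))
Q = B.T @ B / n + 0.1 * np.eye(n)
c = rng.normal(size=n)
grad_f = lambda w: Q @ w + c                # gradient of f(w) = 0.5 w^T Q w + c^T w

lam, sigma = 1.0, 0.5                       # prox stepsize and relative-error tolerance in (0, 1)
Lphi = np.linalg.eigvalsh(Q).max() + 1.0 / lam   # Lipschitz constant of the prox subproblem's gradient
x = np.zeros(n)
for k in range(50):
    # Inexactly solve the prox subproblem min_y f(y) + ||y - x||^2 / (2*lam)
    # by gradient steps, stopping once the HPE relative-error test holds:
    #   ||lam * grad_f(y) + y - x|| <= sigma * ||y - x||.
    y = x.copy()
    for _ in range(200):
        v = grad_f(y)
        if np.linalg.norm(lam * v + y - x) <= sigma * np.linalg.norm(y - x):
            break
        y = y - (v + (y - x) / lam) / Lphi   # gradient step on the prox subproblem
    x = x - lam * grad_f(y)                  # extragradient-style outer update

print("gradient norm:", np.linalg.norm(grad_f(x)))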