Improved algorithms for convex minimization in relative scale

In this paper we propose two modifications to Nesterov’s algorithms for minimizing convex functions in relative scale. The first is based on a bisection technique and leads to improved theoretical iteration complexity, and the second is a heuristic for avoiding restarting behavior. The fastest of our algorithms produces a solution within relative error O(1/k) of … Read more
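
The bisection idea can be sketched generically (this is an illustrative gloss, not the paper's algorithm): maintain an interval $[\ell, u]$ known to contain the optimal value $f^* > 0$ and shrink it until $u \le (1+\delta)\ell$, at which point any point achieving value $u$ has relative error at most $\delta$. Here `can_achieve` is a hypothetical oracle; in the paper's setting it would be an approximate first-order scheme run to fixed accuracy.

    def bisect_optimal_value(can_achieve, lo, up, rel_tol=1e-2, max_iter=100):
        """Shrink [lo, up] around f* until up <= (1 + rel_tol) * lo (assumes f* > 0)."""
        best_x = None
        for _ in range(max_iter):
            if up <= (1.0 + rel_tol) * lo:
                break
            t = 0.5 * (lo + up)
            x = can_achieve(t)          # hypothetical: returns x with f(x) <= t, or None
            if x is not None:
                up, best_x = t, x       # certificate found: tighten the upper bound
            else:
                lo = t                  # no point below t: raise the lower bound
        return best_x, lo, up

    # tiny demo: f(x) = x^2 + 1 on R, so f* = 1
    demo = lambda t: 0.0 if t >= 1.0 else None
    print(bisect_optimal_value(demo, lo=0.5, up=4.0))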

Equivalence of Convex Problem Geometry and Computational Complexity in the Separation Oracle Model

Consider the following supposedly-simple problem: “compute $x \in S$” where S is a convex set conveyed by a separation oracle, with no further information (e.g., no bounding ball containing or intersecting S, etc.). Our interest in this problem stems from fundamental issues involving the interplay of computational complexity, the geometry of S, and the stability … Read more
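
To make the oracle model concrete, here is a minimal separation oracle for a Euclidean ball (an illustrative stand-in for S; the point of the paper is that no such geometric data is given up front). Queried at x, it either certifies membership or returns a hyperplane separating x from S.

    import numpy as np

    def ball_separation_oracle(x, center=np.zeros(2), radius=1.0):
        """Return None if x lies in S (a ball), else (a, b) with a @ x > b >= a @ y for all y in S."""
        g = x - center
        dist = np.linalg.norm(g)
        if dist <= radius:
            return None                    # membership certificate: x in S
        a = g / dist                       # unit normal pointing away from S
        b = a @ center + radius            # support value of S in direction a
        return a, b                        # a @ x = a @ center + dist > b

    print(ball_separation_oracle(np.array([2.0, 0.0])))  # separating hyperplane
    print(ball_separation_oracle(np.array([0.5, 0.0])))  # None: inside S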

Risk averse feasible policies for large-scale multistage stochastic linear programs

We consider risk-averse formulations of stochastic linear programs having a structure that is common in real-life applications. Specifically, the optimization problem corresponds to controlling, over a certain horizon, a system whose dynamics are given by a transition equation depending affinely on an interstage-dependent stochastic process. We put in place a rolling-horizon, time-consistent policy. … Read more
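
A rolling-horizon policy in its simplest form looks as follows (a toy inventory example of my own, not the paper's risk-averse model): at each stage, re-solve a deterministic LP over the remaining horizon using forecast data, commit only to the first-stage decision, and roll forward once the uncertainty is realized.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    T, c_order, c_hold = 8, 1.0, 0.2
    forecast = np.full(T, 5.0)                  # expected demand per stage (made up)
    state = 0.0                                 # initial stock

    for t in range(T):
        H = T - t                               # remaining horizon
        # variables: orders x[0..H-1] then stocks s[0..H-1], all >= 0
        c = np.concatenate([np.full(H, c_order), np.full(H, c_hold)])
        A_eq, b_eq = np.zeros((H, 2 * H)), np.zeros(H)
        for k in range(H):                      # stock balance: s_k = s_{k-1} + x_k - d_k
            A_eq[k, k] = -1.0                   # -x_k
            A_eq[k, H + k] = 1.0                # +s_k
            if k > 0:
                A_eq[k, H + k - 1] = -1.0       # -s_{k-1}
            b_eq[k] = (state if k == 0 else 0.0) - forecast[t + k]
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (2 * H))
        order = res.x[0]                        # apply only the first-stage decision
        demand = rng.poisson(5.0)               # realized demand
        state = max(state + order - demand, 0.0)  # unmet demand is lost in this toy
        print(f"stage {t}: order {order:.2f}, demand {demand}, stock {state:.2f}")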

Nuclear norm minimization for the planted clique and biclique problems

We consider the problems of finding a maximum clique in a graph and finding a maximum-edge biclique in a bipartite graph. Both problems are NP-hard. We write both problems as matrix-rank minimization and then relax them using the nuclear norm. This technique, which may be regarded as a generalization of compressive sensing, has recently been … Read more
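
The relaxation step has the generic shape below: $\mathrm{rank}(X)$, which makes the problem NP-hard, is replaced by the nuclear norm $\|X\|_* = \sum_i \sigma_i(X)$ (the sum of singular values), its standard convex surrogate:

$$ \min_X\ \mathrm{rank}(X)\ \ \text{s.t.}\ X \in \mathcal{C} \qquad \leadsto \qquad \min_X\ \|X\|_*\ \ \text{s.t.}\ X \in \mathcal{C}, $$

where $\mathcal{C}$ stands for the constraints tying $X$ to the given graph (generic notation, not the paper's). The intuition: the all-ones block indexed by a clique or biclique is a rank-one matrix, so large cliques correspond to low-rank structure.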

Generic identifiability and second-order sufficiency in tame convex optimization

We consider linear optimization over a fixed compact convex feasible region that is semi-algebraic (or, more generally, “tame”). Generically, we prove that the optimal solution is unique and lies on a unique manifold, around which the feasible region is “partly smooth”, ensuring finite identification of the manifold by many optimization algorithms. Furthermore, second-order optimality conditions … Read more
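
For intuition (a gloss on what second-order sufficiency delivers in this setting, not a statement quoted from the paper), such conditions amount to quadratic growth of the linear objective around the unique optimal solution $x^*$:

$$ \langle c, x \rangle \ \ge\ \langle c, x^* \rangle + \kappa\,\|x - x^*\|^2 \quad \text{for all feasible } x \text{ near } x^*, \text{ for some } \kappa > 0. $$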

Approximate Level Method

In this paper we propose and analyze a variant of the level method [4], which is an algorithm for minimizing nonsmooth convex functions. The main work per iteration is spent on 1) minimizing a piecewise-linear model of the objective function and on 2) projecting onto the intersection of the feasible region and a polyhedron arising … Read more
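
For reference, in the exact level method [4] these two subproblems read as follows (the variant proposed here solves them approximately). Given the bundle $(x_i, f(x_i), g_i)$, $i \le k$, form the model $f_k(x) = \max_{i \le k} \{ f(x_i) + \langle g_i, x - x_i \rangle \}$, the bounds $f_k^- = \min_{x \in Q} f_k(x)$ and $f_k^+ = \min_{i \le k} f(x_i)$, and the level $\ell_k = f_k^- + \lambda\,(f_k^+ - f_k^-)$ for a fixed $\lambda \in (0,1)$; the next iterate is then

$$ x_{k+1} = P_{\{x \in Q \,:\, f_k(x) \le \ell_k\}}(x_k), $$

the projection of $x_k$ onto the intersection of the feasible region $Q$ with the level-set polyhedron.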

A Randomized Cutting Plane Method with Probabilistic Geometric Convergence

We propose a randomized method for general convex optimization problems, namely the minimization of a linear function over a convex body. The idea is to generate N random points inside the body, choose the best one, and cut off the part of the body excluded by the resulting linear constraint. We first analyze the convergence properties of … Read more
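
A bare-bones version of the scheme (all specifics here, the unit-ball body and rejection sampling, are my own for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    c = np.array([1.0, 0.5])                  # minimize c @ x over the unit ball
    cuts = []                                 # accumulated half-spaces a @ x <= b
    N, best = 100, np.inf

    for it in range(20):
        pts = rng.uniform(-1.0, 1.0, size=(N, 2))
        inside = pts[np.linalg.norm(pts, axis=1) <= 1.0]   # points in the body
        for a, b in cuts:                                  # ... that satisfy all cuts
            inside = inside[inside @ a <= b]
        if len(inside) == 0:
            break                             # remaining region too small to hit
        best = min(best, (inside @ c).min())
        cuts.append((c, best))                # cut away { x : c @ x > best }

    print("best objective value found:", best)   # exact optimum is -||c|| = -1.118...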

Fixed point and Bregman iterative methods for matrix rank minimization

The linearly constrained matrix rank minimization problem arises in many fields, such as control, signal processing, and system identification. Linearly constrained nuclear norm minimization is a convex relaxation of this problem. Although it can be cast as a semidefinite programming problem, the nuclear norm minimization problem is expensive to solve when the … Read more
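
One of the simplest fixed-point iterations of this flavor alternates a gradient step with singular value soft-thresholding, shown here for matrix completion (a special case of the linearly constrained problem; the step size and shrinkage parameter are made up for the sketch):

    import numpy as np

    def svt(Y, tau):
        """Singular value soft-thresholding: the prox operator of tau * ||.||_*."""
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    rng = np.random.default_rng(2)
    M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank-2 target
    mask = rng.random(M.shape) < 0.5                                 # observed entries

    X = np.zeros_like(M)
    for _ in range(500):
        grad = mask * (X - M)                 # gradient of 0.5 * ||P_Omega(X - M)||^2
        X = svt(X - grad, tau=0.1)            # gradient step, then shrink
    print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))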

Proximal-like contraction methods for monotone variational inequalities in a unified framework

Approximate proximal point algorithms (abbreviated as APPAs) are classical approaches for convex optimization problems and monotone variational inequalities. To solve the subproblems of these algorithms, the projection method takes the iteration in form of $u^{k+1} = P_{\Omega}[u^k-\alpha_k d^k]$. Interestingly, many of them can be paired such that $%\exists \tilde{u}^k, \tilde{u}^k = P_{\Omega}[u^k – \beta_kF(v^k)] = … Read more

A full-Newton step infeasible interior-point algorithm for linear programming based on a kernel function

This paper proposes an infeasible interior-point algorithm with full Newton steps for linear programming, which is an extension of the work of Roos (SIAM J. Optim., 16(4):1110–1136, 2006). We introduce a kernel function in the algorithm. For $p\in[0,1)$, polynomial complexity can be proved, and the result coincides with the best known result for infeasible interior-point methods, … Read more
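
For orientation, a full Newton step in this family of methods solves the linearization of the perturbed optimality system for the primal-dual pair $\min\{c^T x : Ax = b,\ x \ge 0\}$, in standard notation with barrier parameter $\mu > 0$:

$$ A\,\Delta x = b - Ax, \qquad A^T \Delta y + \Delta s = c - A^T y - s, \qquad s \circ \Delta x + x \circ \Delta s = \mu e - x \circ s, $$

with $\circ$ the componentwise product and $e$ the all-ones vector; the kernel function enters by reshaping the centering term in the third equation (the precise form is in the paper).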