Finding the largest low-rank clusters with Ky Fan 2-k-norm and l1-norm

We propose a convex optimization formulation with the Ky Fan 2-k-norm and l1-norm to find the k largest approximately rank-one submatrix blocks of a given nonnegative matrix that has low-rank block-diagonal structure with noise. We analyze the low-rank and sparsity structures of the optimal solutions using properties of these two matrix norms. We show that, under … Read more

About the Convexity of a Special Function on Hadamard Manifolds

In this article we provide an erratum to Proposition 3.4 of E.A. Papa Quiroz and P.R. Oliveira. Proximal Point Methods for Quasiconvex and Convex Functions with Bregman Distances on Hadamard Manifolds, Journal of Convex Analysis 16 (2009), 49-69. More specifically, we prove that the function defined by the product of a fixed vector in the … Read more

Inverse optimal control with polynomial optimization

In the context of optimal control, we consider the inverse problem of Lagrangian identification given system dynamics and optimal trajectories. Many of its theoretical and practical aspects are still open. Potential applications are very broad as a reliable solution to the problem would provide a powerful modeling tool in many areas of experimental science. We … Read more

A Proximal Stochastic Gradient Method with Progressive Variance Reduction

We consider the problem of minimizing the sum of two convex functions: one is the average of a large number of smooth component functions, and the other is a general convex function that admits a simple proximal mapping. We assume the whole objective function is strongly convex. Such problems often arise in machine learning, known … Read more
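The setting above (a finite average of smooth components plus a simple proximable term) matches the proximal stochastic gradient template with variance reduction: an outer loop computes the full gradient at a snapshot point, and an inner loop takes proximal steps with variance-reduced stochastic gradients. A minimal sketch for a lasso-type instance follows; the problem data, step size, and loop lengths are illustrative assumptions, not the paper's exact parameter choices:

```python
import numpy as np

def soft_threshold(v, t):
    # proximal mapping of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_svrg(A, b, lam, eta, n_outer=30, n_inner=None, seed=0):
    """Proximal SVRG sketch for min_w (1/n) sum_i 0.5*(a_i^T w - b_i)^2 + lam*||w||_1."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    m = n_inner or 2 * n
    w_snap = np.zeros(d)
    for _ in range(n_outer):
        # full gradient at the snapshot point
        full_grad = A.T @ (A @ w_snap - b) / n
        w = w_snap.copy()
        for _ in range(m):
            i = rng.integers(n)
            # variance-reduced stochastic gradient: unbiased, and its
            # variance vanishes as w and w_snap approach the optimum
            g = A[i] * (A[i] @ w - b[i]) - A[i] * (A[i] @ w_snap - b[i]) + full_grad
            w = soft_threshold(w - eta * g, eta * lam)
        w_snap = w
    return w_snap
```

The inner-loop correction term is what distinguishes this scheme from plain proximal SGD: it keeps the stochastic gradient unbiased while progressively shrinking its variance, which is what permits a constant step size.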

Direct search based on probabilistic descent

Direct-search methods are a class of popular derivative-free algorithms characterized by evaluating the objective function using a step size and a number of (polling) directions. When applied to the minimization of smooth functions, the polling directions are typically taken from positive spanning sets which in turn must have at least n+1 vectors in an n-dimensional … Read more
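As a concrete baseline for the classical scheme the abstract refers to, here is a minimal directional direct search that polls the maximal positive spanning set {±e_1, …, ±e_n} (2n directions) with a sufficient-decrease test; the probabilistic-descent idea would replace this fixed set with a few randomly drawn directions per iteration. The objective, forcing constant, and step-size rules below are illustrative assumptions:

```python
import numpy as np

def direct_search(f, x0, alpha0=1.0, alpha_min=1e-8, c=1e-4, max_iter=1000):
    """Directional direct search polling the positive spanning set {+/- e_i}."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    # 2n polling directions: the coordinate vectors and their negatives
    D = np.vstack([np.eye(n), -np.eye(n)])
    alpha = alpha0
    fx = f(x)
    for _ in range(max_iter):
        if alpha < alpha_min:
            break
        success = False
        for d in D:
            trial = x + alpha * d
            ft = f(trial)
            if ft < fx - c * alpha ** 2:   # sufficient decrease
                x, fx = trial, ft
                success = True
                break
        # expand the step on success, contract it on a failed poll
        alpha = 2.0 * alpha if success else 0.5 * alpha
    return x, fx
```

Because every accepted step must satisfy the sufficient-decrease condition, the objective value is monotonically nonincreasing; the cost per iteration is up to 2n function evaluations, which is precisely what random polling directions aim to reduce.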

On the update of constraint preconditioners for regularized KKT systems

We address the problem of preconditioning sequences of regularized KKT systems, such as those arising in Interior Point methods for convex quadratic programming. In this case, Constraint Preconditioners (CPs) are very effective and widely used; however, when solving large-scale problems, the computational cost for their factorization may be high, and techniques for approximating them appear … Read more

Intermediate gradient methods for smooth convex problems with inexact oracle

Between the robust but slow (primal or dual) gradient methods and the fast but error-sensitive fast gradient methods, our goal in this paper is to develop first-order methods for smooth convex problems with intermediate speed and intermediate sensitivity to errors. We develop a general family of first-order methods, the Intermediate Gradient Method (IGM), … Read more

Equivariant Perturbation in Gomory and Johnson’s Infinite Group Problem. III. Foundations for the k-Dimensional Case with Applications to k=2

We develop foundational tools for classifying the extreme valid functions for the k-dimensional infinite group problem. In particular, (1) we present the general regular solution to Cauchy’s additive functional equation on bounded convex domains. This provides a k-dimensional generalization of the so-called interval lemma, allowing us to deduce affine properties of the function from certain … Read more

Parallel Multi-Block ADMM with o(1/k) Convergence

This paper introduces a parallel and distributed extension to the alternating direction method of multipliers (ADMM). The algorithm decomposes the original problem into N smaller subproblems and solves them in parallel at each iteration. This Jacobian-type algorithm is well suited for distributed computing and is particularly attractive for solving certain large-scale problems. This paper introduces … Read more
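The parallel, Jacobi-type structure can be illustrated on the closely related global-consensus form, where all block updates within one iteration are mutually independent and hence trivially parallel. This is only a hedged sketch of that structure, not the paper's exact N-block splitting or its o(1/k) analysis; the problem data are invented:

```python
import numpy as np

def consensus_admm(c, rho=1.0, n_iter=100):
    """Consensus ADMM sketch for min sum_i 0.5*(x_i - c_i)^2  s.t.  x_i = z.

    The N x-updates are elementwise and independent, so they could run in
    parallel on separate workers; the optimum is z* = mean(c).
    """
    c = np.asarray(c, dtype=float)
    x = np.zeros(c.size)   # local copies
    u = np.zeros(c.size)   # scaled dual variables
    z = 0.0                # consensus variable
    for _ in range(n_iter):
        # x-update: closed form of argmin 0.5*(x_i - c_i)^2 + (rho/2)*(x_i - z + u_i)^2
        x = (c + rho * (z - u)) / (1.0 + rho)
        z = np.mean(x + u)     # gather/average step
        u = u + x - z          # dual update
    return z, x
```

The x-update is a single vectorized expression here, but in a distributed setting each coordinate would be computed by its own worker, with only the averaging step requiring communication.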

First-order methods with inexact oracle: the strongly convex case

The goal of this paper is to study the effect of inexact first-order information on first-order methods designed for smooth strongly convex optimization problems. We introduce the notion of a (delta,L,mu)-oracle, which can be seen as an extension of the previously introduced inexact (delta,L)-oracle, taking strong convexity into account. We consider different examples of (delta,L,mu)-oracle: … Read more
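For intuition about what an inexact oracle does in the strongly convex case, here is a toy experiment: gradient descent on a strongly convex quadratic where the oracle returns the gradient corrupted by bounded noise. One observes linear convergence down to an error floor governed by the noise level. All constants and the noise model are illustrative assumptions, not the oracle construction from the paper:

```python
import numpy as np

def inexact_gradient_descent(H, x0, delta, n_iter=200, seed=0):
    """Gradient descent on f(x) = 0.5 * x^T H x with a noisy first-order oracle.

    The oracle returns grad f(x) plus a perturbation bounded by delta,
    mimicking inexact first-order information; the step size is 1/L.
    """
    rng = np.random.default_rng(seed)
    L = np.max(np.linalg.eigvalsh(H))   # Lipschitz constant of the gradient
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        noise = rng.uniform(-delta, delta, size=x.size)
        g = H @ x + noise               # inexact gradient
        x = x - g / L
    return x
```

With delta = 0 the iterates contract linearly toward the minimizer at the origin; with delta > 0 they stall at a distance proportional to delta, which is the error-accumulation phenomenon such oracles are designed to quantify.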