A Riemannian rank-adaptive method for low-rank optimization

This paper presents an algorithm that solves optimization problems on a matrix manifold $\mathcal{M} \subseteq \mathbb{R}^{m \times n}$ with an additional rank inequality constraint. The algorithm builds on well-known Riemannian optimization schemes on fixed-rank manifolds, combined with new mechanisms to increase or decrease the rank. The convergence of the algorithm is analyzed and a weighted …
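Since the abstract is truncated above, what follows is only a toy numpy sketch of the rank-adaptive pattern it describes, not the paper's algorithm: fixed-rank steps (here plain projected gradient with SVD truncation on $f(X) = \frac{1}{2}\|X - A\|_F^2$), a rank decrease when trailing singular values are negligible, and a rank increase when progress stalls short of convergence. All names, tolerances, and the test objective are assumptions of this sketch.

```python
import numpy as np

# Toy sketch (not the paper's algorithm): rank-adaptive projected gradient
# descent on f(X) = 0.5*||X - A||_F^2 subject to rank(X) <= r, where r is
# decreased when trailing singular values are negligible and increased
# when progress stalls short of convergence.

def truncated_svd(X, r):
    """Project X onto matrices of rank <= r; also return singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r], s

def rank_adaptive_descent(A, r=1, r_max=10, step=0.5, iters=200,
                          tol_drop=1e-8, tol_stall=1e-9):
    X, f_prev = np.zeros_like(A), np.inf
    for _ in range(iters):
        G = X - A                                   # gradient of f
        X, s = truncated_svd(X - step * G, r)       # fixed-rank descent step
        r = max(1, int(np.sum(s[:r] > tol_drop)))   # decrease: drop tiny modes
        f = 0.5 * np.linalg.norm(X - A) ** 2
        if f_prev - f < tol_stall and f > 1e-8 and r < r_max:
            r += 1                                  # increase: stalled early
        f_prev = f
    return X, r

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 20))  # rank-3 target
X, r = rank_adaptive_descent(A)
print(r, np.linalg.norm(X - A))   # expect r == 3 and a near-zero residual
```

Starting from rank 1 and letting the stall test drive increases tends to recover the target rank without ever optimizing over a larger fixed-rank manifold than needed.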

Constrained Optimization with Low-Rank Tensors and Applications to Parametric Problems with PDEs

Low-rank tensor methods provide efficient representations and computations for high-dimensional problems and can break the curse of dimensionality when dealing with systems involving multiple parameters. We present algorithms for constrained nonlinear optimization problems that use low-rank tensors and apply them to optimal control of PDEs with uncertain parameters and to parametrized variational inequalities. …
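The constrained algorithms themselves sit behind the truncation, so here is only a minimal numpy sketch of the compression such methods build on: a tensor-train (TT) representation stores a $d$-way tensor as a chain of small 3-way cores, so storage grows linearly in $d$ instead of exponentially. The `tt_entry` helper, shapes, and ranks are illustrative assumptions, not the paper's.

```python
import numpy as np

# Illustrative sketch (not from the paper): evaluate one entry of a tensor
# stored in tensor-train format from its cores G_k of shape
# (r_{k-1}, n_k, r_k), with boundary ranks r_0 = r_d = 1.
def tt_entry(cores, idx):
    v = np.ones((1, 1))
    for G, i in zip(cores, idx):
        v = v @ G[:, i, :]          # contract one core at a time
    return v.item()

d, n, r = 10, 4, 3                  # 10 dimensions, mode size 4, TT-rank 3
ranks = [1] + [r] * (d - 1) + [1]
cores = [np.random.rand(ranks[k], n, ranks[k + 1]) for k in range(d)]
print(tt_entry(cores, [0] * d))     # one of 4**10 ~ 10^6 implicit entries
print(sum(G.size for G in cores))   # stored numbers: O(d * n * r**2), here 312
```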

A multiplier method with a class of penalty functions for convex programming

We consider a class of augmented Lagrangian methods for solving convex programming problems with inequality constraints. This class involves a family of penalty functions and specific values of parameters $p, q, \tilde y \in \mathbb{R}$ and $c > 0$. The penalty family includes the classical modified barrier and the exponential function. The associated proximal method for solving the dual …
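The family's exact parameterization is cut off above, so as a hypothetical illustration here is one classical member the abstract names: the exponential multiplier method for $\min f(x)$ s.t. $g_i(x) \le 0$, whose update is $y_i \leftarrow y_i \exp(c\, g_i(x_k))$. The toy objective, constraint, and the value of $c$ below are assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical illustration: exponential multiplier method, one classical
# member of the penalty family cited above, on
#   min (x0 - 2)^2 + (x1 - 1)^2   s.t.   x0 + x1 <= 2.
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
g = lambda x: np.array([x[0] + x[1] - 2])

y, c = np.ones(1), 2.0
x = np.zeros(2)
for _ in range(20):
    L = lambda x: f(x) + (y / c) @ np.exp(c * g(x))  # augmented Lagrangian
    x = minimize(L, x).x                             # inner minimization
    y = y * np.exp(c * g(x))                         # multiplier update
print(x, y)  # converges to (1.5, 0.5) with the constraint active
```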

Multistep stochastic mirror descent for risk-averse convex stochastic programs based on extended polyhedral risk measures

We consider risk-averse convex stochastic programs expressed in terms of extended polyhedral risk measures. We derive computable confidence intervals on the optimal value of such stochastic programs using the Robust Stochastic Approximation and the Stochastic Mirror Descent (SMD) algorithms. When the objective functions are uniformly convex, we also propose a multistep extension of the Stochastic …
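The multistep scheme itself is truncated away, so here is only a minimal sketch of the SMD building block the abstract relies on: entropic stochastic mirror descent on the probability simplex for $\min_x \mathbb{E}[\langle \xi, x \rangle]$, reporting the averaged iterate as in robust stochastic approximation. The cost vector, noise level, and step size are assumptions, and the extended polyhedral risk measures are not modeled.

```python
import numpy as np

# Sketch of plain entropic SMD on the simplex (risk-neutral toy problem);
# the paper's risk-averse, multistep machinery is not reproduced here.
rng = np.random.default_rng(0)
mu = np.array([0.9, 1.0, 1.1, 1.2, 1.3])    # unknown expected costs
n, T, eta = mu.size, 20000, 0.05
x, x_avg = np.full(n, 1.0 / n), np.zeros(n)
for t in range(T):
    g = mu + 0.1 * rng.standard_normal(n)   # stochastic subgradient xi_t
    x = x * np.exp(-eta * g)                # mirror step, neg-entropy map
    x /= x.sum()                            # stays on the simplex
    x_avg += x / T                          # averaged iterate
print(np.round(x_avg, 3))                   # mass concentrates on index 0
```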

Improved pointwise iteration-complexity of a regularized ADMM and of a regularized non-Euclidean HPE framework

This paper describes a regularized variant of the alternating direction method of multipliers (ADMM) for solving linearly constrained convex programs. It is shown that the pointwise iteration-complexity of the new method is better than the corresponding one for the standard ADMM and that, up to a logarithmic term, it is identical to the ergodic iteration-complexity …
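As the complexity statement is truncated, the following is just a toy numpy sketch of the kind of regularization involved: an ADMM for the lasso split $\min \frac{1}{2}\|Ax - b\|^2 + \lambda\|z\|_1$ s.t. $x = z$, whose $x$-update carries an extra proximal term $\frac{\mu}{2}\|x - x_k\|^2$. The problem data, $\rho$, and $\mu$ are assumptions; the paper's HPE framework and analysis are not reproduced.

```python
import numpy as np

# Toy proximally regularized ADMM (illustrative, not the paper's method).
def soft(v, t):                                  # prox of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
A, b = rng.standard_normal((40, 20)), rng.standard_normal(40)
lam, rho, mu = 0.1, 1.0, 0.5
x, z, u = np.zeros(20), np.zeros(20), np.zeros(20)
M = np.linalg.inv(A.T @ A + (rho + mu) * np.eye(20))
for _ in range(300):
    x = M @ (A.T @ b + rho * (z - u) + mu * x)   # x-step + (mu/2)||x - x_k||^2
    z = soft(x + u, lam / rho)                   # z-step: soft-thresholding
    u = u + x - z                                # scaled dual update
print(np.linalg.norm(x - z))                     # primal residual -> 0
```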

A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation

Stochastic approximation techniques play an important role in solving many problems encountered in machine learning or adaptive signal processing. In these contexts, the statistics of the data are often unknown a priori, or their direct computation is too intensive, and they thus have to be estimated online from the observed signals. For batch optimization of …
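Because the abstract stops mid-sentence, here is only a sketch of the majorize-minimize (MM) subspace step such methods build on, for the online penalized least squares objective $f(x) = \frac{1}{2}\|Ax - b\|^2 + \lambda \sum_i \sqrt{x_i^2 + \delta}$ (a smooth $\ell_1$ surrogate): each iteration minimizes a quadratic majorant of $f$ over the two-dimensional subspace spanned by the current gradient and the previous step (memory gradient), while mini-batches stream in through running sufficient statistics. All data, shapes, and constants are assumptions; the paper's stochastic analysis is not reproduced.

```python
import numpy as np

# Illustrative MM-subspace sketch for online penalized least squares.
rng = np.random.default_rng(0)
n, lam, delta = 10, 0.1, 1e-6
x_true = np.zeros(n); x_true[:3] = 1.0
AtA, Atb = np.zeros((n, n)), np.zeros(n)
x, d_prev = np.zeros(n), np.zeros(n)
for _ in range(200):                          # streaming mini-batches
    A = rng.standard_normal((5, n))
    b = A @ x_true + 0.01 * rng.standard_normal(5)
    AtA += A.T @ A; Atb += A.T @ b            # online sufficient statistics
    w = lam / np.sqrt(x ** 2 + delta)         # half-quadratic curvature
    g = AtA @ x - Atb + lam * x / np.sqrt(x ** 2 + delta)  # gradient of f
    H = AtA + np.diag(w)                      # quadratic majorant Hessian
    D = np.stack([g, d_prev], axis=1)         # memory-gradient subspace
    s = -np.linalg.pinv(D.T @ H @ D) @ (D.T @ g)  # exact subspace minimizer
    d_prev = D @ s
    x = x + d_prev
print(np.round(x, 2))                         # approaches x_true's support
```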

Regularized Interior Proximal Alternating Direction Method for Separable Convex Optimization Problems

In this article, we present a version of the proximal alternating direction method for convex problems with linear constraints and a separable objective function; the standard quadratic regularizing term is replaced with an interior proximal metric for those variables that must satisfy additional convex constraints. Moreover, the proposed method …
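The rest of the method is truncated, so the sketch below isolates only the interior proximal idea on a toy problem, $\min \frac{1}{2}\|x - a\|^2$ s.t. $x = z$, $z \ge 0$: the quadratic prox on $z$ is replaced by a Bregman term $D(z, z_k) = \sum_i \big(z_i/z_{k,i} - \log(z_i/z_{k,i}) - 1\big)$ (Burg entropy), whose first-order condition is a per-component quadratic with a closed-form positive root, so the $z$-iterates stay strictly positive with no projection. The parameters $\rho$, $\mu$, and the data are assumptions, not the paper's.

```python
import numpy as np

# Illustrative interior proximal ADMM for projecting a onto {z >= 0}.
a = np.array([1.0, -2.0, 0.5, -0.3])
rho, mu = 1.0, 0.1
x, u, z = np.zeros(4), np.zeros(4), np.ones(4)   # z starts strictly positive
for _ in range(200):
    x = (a + rho * (z - u)) / (1 + rho)          # quadratic x-step
    p = mu / z - rho * (x + u)                   # FOC: rho*z^2 + p*z - mu = 0
    z = (-p + np.sqrt(p ** 2 + 4 * rho * mu)) / (2 * rho)  # positive root
    u = u + x - z                                # dual update
print(np.round(z, 3))   # ~ [1, ~0, 0.5, ~0]; clipped entries stay > 0
```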

Conditional gradient type methods for composite nonlinear and stochastic optimization

In this paper, we present a conditional gradient type (CGT) method for solving a class of composite optimization problems where the objective function consists of a (weakly) smooth term and a (strongly) convex regularization term. While including a strongly convex term in the subproblems of the classical conditional gradient (CG) method improves its rate of …
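Since the comparison is truncated, here is only the classical conditional gradient (Frank-Wolfe) baseline the abstract starts from, on $\min \frac{1}{2}\|Ax - b\|^2$ over the $\ell_1$ ball of radius $\tau$; each step calls nothing but a linear minimization oracle. The CGT variant described above layers a strongly convex term onto these subproblems, which is not shown. Data and $\tau$ are assumptions.

```python
import numpy as np

# Classical Frank-Wolfe on the l1 ball (baseline sketch, not the CGT method).
rng = np.random.default_rng(0)
A, b, tau = rng.standard_normal((30, 50)), rng.standard_normal(30), 2.0
x = np.zeros(50)
for t in range(1, 201):
    g = A.T @ (A @ x - b)                    # gradient of the smooth term
    i = np.argmax(np.abs(g))                 # linear minimization oracle:
    v = np.zeros(50); v[i] = -tau * np.sign(g[i])  # a signed l1-ball vertex
    x += (2.0 / (t + 1)) * (v - x)           # classic step size 2/(t+1)
print(0.5 * np.linalg.norm(A @ x - b) ** 2)  # objective after 200 steps
```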

Distributed Stochastic Variance Reduced Gradient Methods and a Lower Bound for Communication Complexity

We study distributed optimization algorithms for minimizing the average of convex functions. The applications include empirical risk minimization problems in statistical machine learning where the datasets are large and have to be stored on different machines. We design a distributed stochastic variance reduced gradient algorithm that, under certain conditions on the condition number, simultaneously achieves …
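The distributed design and its communication bound are behind the truncation; the sketch below shows only the single-machine SVRG core that such algorithms distribute: the variance-reduced gradient $\nabla f_i(x) - \nabla f_i(\tilde{x}) + \nabla f(\tilde{x})$ for ridge-regularized least squares. Data, step size, and epoch lengths are assumptions.

```python
import numpy as np

# Single-machine SVRG sketch (the distributed variant is not reproduced).
rng = np.random.default_rng(0)
N, d, lam, eta = 200, 10, 0.1, 0.01
A = rng.standard_normal((N, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(N)

def grad_i(x, i):                    # per-sample gradient of the ridge loss
    return A[i] * (A[i] @ x - b[i]) + lam * x

x = np.zeros(d)
for epoch in range(30):
    x_ref = x.copy()                                   # snapshot point
    full = A.T @ (A @ x_ref - b) / N + lam * x_ref     # full gradient
    for _ in range(2 * N):                             # inner stochastic pass
        i = rng.integers(N)
        x -= eta * (grad_i(x, i) - grad_i(x_ref, i) + full)
print(np.linalg.norm(A.T @ (A @ x - b) / N + lam * x))  # ~ 0 at the optimum
```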