Moreau envelope of supremum functions with applications to infinite and stochastic programming

In this paper, we investigate the Moreau envelope of the supremum of a family of convex, proper, and lower semicontinuous functions. Under mild assumptions, we prove that the Moreau envelope of a supremum is the supremum of Moreau envelopes, which allows us to approximate possibly nonsmooth supremum functions by smooth functions that are also the … Read more
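For reference, with the Moreau envelope defined as \(e_\lambda f(x) = \inf_u \{ f(u) + \tfrac{1}{2\lambda}\|u - x\|^2 \}\), the interchange result described above (valid under the paper's mild assumptions) reads
\[
e_\lambda \Big( \sup_{t \in T} f_t \Big)(x) \;=\; \sup_{t \in T} \, e_\lambda f_t(x),
\]
so the possibly nonsmooth supremum on the left is approximated through the smooth envelope, expressed as a supremum of the individual envelopes.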

A Primal-Dual Algorithm for Risk Minimization

In this paper, we develop an algorithm to efficiently solve risk-averse optimization problems posed in a reflexive Banach space. Such problems arise in many practical applications, e.g., optimization problems constrained by partial differential equations with uncertain inputs. Unfortunately, for many popular risk models, including coherent risk measures, the resulting risk-averse objective function is … Read more
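As a concrete illustration (not specific to this paper), the conditional value-at-risk at level \(\beta \in (0,1)\), a widely used coherent risk measure, is
\[
\mathrm{CVaR}_\beta[X] \;=\; \inf_{t \in \mathbb{R}} \Big\{ t + \tfrac{1}{1-\beta}\, \mathbb{E}\big[(X - t)_+\big] \Big\},
\]
and the positive-part function \((\cdot)_+\) already makes the resulting risk-averse objective nonsmooth.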

Exterior-point Optimization for Nonconvex Learning

In this paper, we present the nonconvex exterior-point optimization solver (NExOS), a novel first-order algorithm tailored to constrained nonconvex learning problems. We consider the problem of minimizing a convex function over nonconvex constraints, where the projection onto the constraint set is single-valued around local minima. A wide range of nonconvex learning problems have this structure, including … Read more
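A hypothetical instance of this problem class (not taken from the paper) is least squares under a cardinality constraint: the projection onto \(\{x : \|x\|_0 \le k\}\) is hard thresholding, which is single-valued whenever the \(k\)-th and \((k+1)\)-th largest magnitudes of the point differ. The sketch below only illustrates the problem structure, not NExOS itself.

```python
import numpy as np

def project_cardinality(x, k):
    """Euclidean projection onto {x : ||x||_0 <= k} (keep the k largest |x_i|).

    The projection is single-valued as long as the k-th and (k+1)-th largest
    magnitudes of x are distinct, which is the kind of local single-valuedness
    assumed around local minima in the problem class above.
    """
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    z[keep] = x[keep]
    return z

# Illustrative data for a convex loss over the nonconvex sparsity constraint.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
x_dense = np.linalg.lstsq(A, b, rcond=None)[0]
x_sparse = project_cardinality(x_dense, k=5)   # a feasible point, not a NExOS iterate
```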

Faster Lagrangian-Based Methods in Convex Optimization

In this paper, we aim to unify, simplify, and improve the convergence rate analysis of Lagrangian-based methods for convex optimization problems. We first introduce the notion of a nice primal algorithmic map, which plays a central role in unifying and simplifying the analysis of all Lagrangian-based methods. Equipped with a nice primal … Read more
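For context, a generic template for this class of methods (not necessarily the exact setting of the paper) is the augmented Lagrangian of a linearly constrained convex problem \(\min\{ f(x) : Ax = b\}\),
\[
L_\rho(x, y) \;=\; f(x) + \langle y, Ax - b\rangle + \tfrac{\rho}{2}\|Ax - b\|^2,
\]
where each iteration applies some primal update to \(x\) followed by a multiplier update on \(y\).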

Convergence of Proximal Gradient Algorithm in the Presence of Adjoint Mismatch

We consider the proximal gradient algorithm for solving penalized least-squares minimization problems arising in data science. This first-order algorithm is attractive due to its flexibility and minimal memory requirements, which allow it to tackle large-scale minimization problems involving non-smooth penalties. However, for problems such as X-ray computed tomography, the applicability of the algorithm is dominated by the … Read more
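A minimal sketch of the setting, assuming an \(\ell_1\) penalty as the non-smooth term and a surrogate operator B standing in for the exact adjoint \(A^\top\) (the adjoint mismatch); the operators and step size are illustrative only.

```python
import numpy as np

def prox_l1(v, t):
    """Proximity operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def mismatched_proximal_gradient(A, B, b, lam, gamma, n_iter=200):
    """Proximal gradient for min_x 0.5*||A x - b||^2 + lam*||x||_1, except that
    B (an approximation of A.T, e.g. an inexact back-projector in X-ray CT)
    replaces the exact adjoint in the gradient step."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = B @ (A @ x - b)                 # mismatched gradient surrogate
        x = prox_l1(x - gamma * grad, gamma * lam)
    return x
```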

New efficient approach in finding a zero of a maximal monotone operator

In this paper, we provide a new, efficient approach to finding a zero of a maximal monotone operator under very mild assumptions. Using a regularization technique and the proximal point algorithm, we construct a sequence that converges strongly to a solution with an at least linear convergence rate.
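For reference, the classical proximal point algorithm for a maximal monotone operator \(T\) iterates the resolvent,
\[
x_{k+1} \;=\; (I + c_k T)^{-1}(x_k), \qquad c_k > 0,
\]
and in general converges only weakly to a zero of \(T\); the approach described above combines such resolvent steps with a regularization technique to obtain strong convergence at an at least linear rate.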

Finding the strongest stable massless column with a follower load and relocatable concentrated masses

We consider the problem of optimal placement of concentrated masses along a massless elastic column that is clamped at one end and loaded by a nonconservative follower force at the free end. The goal is to find the largest possible interval such that the variation in the loading parameter within this interval preserves stability of … Read more

A FISTA-type first order algorithm on composite optimization problems that is adaptable to the convex situation

In this note, we propose a FISTA-type first order algorithm, VAR-FISTA, to solve a composite optimization problem. A distinctive feature of VAR-FISTA is its ability to exploit the convexity of the function in the problem, resulting in an improved iteration complexity when the function is convex compared to when it is nonconvex. The iteration complexity … Read more
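For orientation, the classical FISTA recursion for \(\min_x f(x) + g(x)\) with \(L\)-smooth \(f\) and proximable \(g\) (the base scheme, not VAR-FISTA itself) is
\[
x_k = \mathrm{prox}_{\frac{1}{L} g}\Big(y_k - \tfrac{1}{L}\nabla f(y_k)\Big), \qquad
t_{k+1} = \frac{1 + \sqrt{1 + 4 t_k^2}}{2}, \qquad
y_{k+1} = x_k + \frac{t_k - 1}{t_{k+1}}\,(x_k - x_{k-1}),
\]
with \(t_1 = 1\) and \(y_1 = x_0\); FISTA-type variants such as VAR-FISTA build on this template, and the specific modifications are given in the paper.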

Iteration-complexity of an inner accelerated inexact proximal augmented Lagrangian method based on the classical Lagrangian function and a full Lagrange multiplier update

This paper establishes the iteration-complexity of an inner accelerated inexact proximal augmented Lagrangian (IAPIAL) method for solving linearly constrained smooth nonconvex composite optimization problems. The method is based on the classical Lagrangian function and, most importantly, performs a full Lagrange multiplier update, i.e., no shrinking factor is incorporated into it. More specifically, each IAPIAL iteration consists … Read more
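Concretely, for a constraint \(Ax = b\) with penalty parameter \(c > 0\), a full Lagrange multiplier update means
\[
p_{k+1} \;=\; p_k + c\,(A x_{k+1} - b),
\]
whereas damped variants multiply the correction term by a factor \(\theta \in (0, 1)\) (one common reading of the shrinking factor mentioned above).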

A Subspace Acceleration Method for Minimization Involving a Group Sparsity-Inducing Regularizer

We consider the problem of minimizing an objective function that is the sum of a convex function and a group sparsity-inducing regularizer. Problems that integrate such regularizers arise in modern machine learning applications, often for the purpose of obtaining models that are easier to interpret and that have higher predictive accuracy. We present a new … Read more
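A standard example of such a regularizer (used here only for illustration) is the group-\(\ell_2\) penalty
\[
r(x) \;=\; \lambda \sum_{g \in \mathcal{G}} w_g \,\|x_g\|_2,
\]
which is nonsmooth exactly where a whole block \(x_g\) vanishes; this is what drives group-level sparsity, and hence interpretability, in the fitted model.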