Convergence of Proximal Gradient Algorithm in the Presence of Adjoint Mismatch

We consider the proximal gradient algorithm for solving penalized least-squares minimization problems arising in data science. This first-order algorithm is attractive due to its flexibility and minimal memory requirements, which allow it to tackle large-scale minimization problems involving non-smooth penalties. However, for problems such as X-ray computed tomography, the applicability of the algorithm is dominated by the …
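
As a minimal illustrative sketch (not the paper's method, which additionally handles adjoint mismatch), the proximal gradient iteration for an ℓ1-penalized least-squares problem alternates a gradient step on the smooth term with a proximal (soft-thresholding) step on the penalty:

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient_lasso(A, b, lam, n_iter=500):
    # Proximal gradient for min_x 0.5*||A x - b||^2 + lam*||x||_1.
    # Step size 1/L with L = ||A||_2^2, the Lipschitz constant of the gradient.
    t = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # gradient of the smooth term
        x = soft_threshold(x - t * grad, t * lam)
    return x
```

Note that only `A`, `b`, and the current iterate are stored, which is the low-memory property the abstract refers to.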

New efficient approach in finding a zero of a maximal monotone operator

In this paper, we provide a new efficient approach to finding a zero of a maximal monotone operator under very mild assumptions. Using a regularization technique and the proximal point algorithm, we can construct a sequence that converges strongly to a solution with at least a linear convergence rate.
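
For context, a bare proximal point iteration (without the paper's regularization technique) applies the resolvent of the operator repeatedly; the sketch below assumes, for illustration only, a monotone linear operator T(x) = Mx, for which the resolvent is a linear solve:

```python
import numpy as np

def resolvent_linear(M, lam, x):
    # Resolvent (I + lam*M)^{-1} x of the monotone operator T(x) = M x.
    n = M.shape[0]
    return np.linalg.solve(np.eye(n) + lam * M, x)

def proximal_point(M, x0, lam=1.0, n_iter=200):
    # Proximal point iteration x_{k+1} = (I + lam*T)^{-1} x_k for the
    # maximal monotone operator T(x) = M x (monotone when M + M^T >= 0).
    x = x0.copy()
    for _ in range(n_iter):
        x = resolvent_linear(M, lam, x)
    return x
```

For an invertible M the unique zero of T is the origin, and the iterates contract toward it.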

Finding the strongest stable massless column with a follower load and relocatable concentrated masses

We consider the problem of optimal placement of concentrated masses along a massless elastic column that is clamped at one end and loaded by a nonconservative follower force at the free end. The goal is to find the largest possible interval such that the variation in the loading parameter within this interval preserves stability of …

A FISTA-type first order algorithm on composite optimization problems that is adaptable to the convex situation

In this note, we propose a FISTA-type first order algorithm, VAR-FISTA, to solve a composite optimization problem. A distinctive feature of VAR-FISTA is its ability to exploit the convexity of the function in the problem, resulting in an improved iteration complexity when the function is convex compared to when it is nonconvex. The iteration complexity …
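
As a reference point (this is classical FISTA, not the VAR-FISTA variant proposed in the note), a FISTA-type method augments the proximal gradient step with Nesterov momentum:

```python
import numpy as np

def fista_lasso(A, b, lam, n_iter=300):
    # FISTA: proximal gradient with Nesterov momentum for
    # min_x 0.5*||A x - b||^2 + lam*||x||_1.
    t = 1.0 / np.linalg.norm(A, 2) ** 2      # step size 1/L
    x = y = np.zeros(A.shape[1])
    theta = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        z = y - t * grad
        x_next = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)
        theta_next = (1 + np.sqrt(1 + 4 * theta ** 2)) / 2
        # extrapolation (momentum) step that yields the accelerated rate
        y = x_next + ((theta - 1) / theta_next) * (x_next - x)
        x, theta = x_next, theta_next
    return x
```

The momentum step is what improves the O(1/k) rate of plain proximal gradient to O(1/k^2) in the convex case.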

Iteration-complexity of an inner accelerated inexact proximal augmented Lagrangian method based on the classical Lagrangian function and a full Lagrange multiplier update

This paper establishes the iteration-complexity of an inner accelerated inexact proximal augmented Lagrangian (IAPIAL) method for solving linearly constrained smooth nonconvex composite optimization problems. The method is based on the classical Lagrangian function and, most importantly, performs a full Lagrange multiplier update, i.e., no shrinking factor is incorporated in it. More specifically, each IAPIAL iteration consists …
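
To illustrate the "full multiplier update" the abstract emphasizes, here is a plain augmented Lagrangian sketch on an equality-constrained quadratic program (a much simpler setting than the paper's nonconvex composite one): the multiplier is updated by the full step lam + rho*(A x - b), with no shrinking factor:

```python
import numpy as np

def augmented_lagrangian_qp(Q, c, A, b, rho=10.0, n_iter=100):
    # Augmented Lagrangian method for min 0.5 x^T Q x + c^T x  s.t.  A x = b.
    # L_rho(x, lam) = 0.5 x^T Q x + c^T x + lam^T (A x - b) + 0.5*rho*||A x - b||^2.
    lam = np.zeros(A.shape[0])
    H = Q + rho * A.T @ A                      # Hessian of L_rho in x
    for _ in range(n_iter):
        # exact inner minimization of L_rho(., lam) in this quadratic case
        x = np.linalg.solve(H, rho * A.T @ b - A.T @ lam - c)
        lam = lam + rho * (A @ x - b)          # full Lagrange multiplier update
    return x, lam
```

In the paper's setting the inner subproblem is instead solved inexactly by an accelerated proximal method; the sketch only shows the outer multiplier dynamics.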

A Subspace Acceleration Method for Minimization Involving a Group Sparsity-Inducing Regularizer

We consider the problem of minimizing an objective function that is the sum of a convex function and a group sparsity-inducing regularizer. Problems that integrate such regularizers arise in modern machine learning applications, often for the purpose of obtaining models that are easier to interpret and that have higher predictive accuracy. We present a new …
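
For concreteness (this is the standard group-lasso proximal operator, not the subspace acceleration method of the paper), a group sparsity-inducing regularizer such as the sum of blockwise ℓ2 norms has a closed-form prox that zeroes out entire groups at once:

```python
import numpy as np

def group_soft_threshold(x, groups, tau):
    # Proximal operator of tau * sum_g ||x_g||_2 (group lasso penalty):
    # each block is shrunk toward zero, and zeroed entirely if ||x_g|| <= tau.
    out = np.zeros_like(x)
    for g in groups:
        ng = np.linalg.norm(x[g])
        if ng > tau:
            out[g] = (1.0 - tau / ng) * x[g]
    return out
```

Zeroing whole blocks is what makes the resulting models easier to interpret: entire groups of features are discarded together.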

On the abs-polynomial expansion of piecewise smooth functions

Tom Streubel has observed that for functions in abs-normal form, generalized Taylor expansions of arbitrary order $\bar{d}-1$ can be generated by algorithmic piecewise differentiation. Abs-normal form means that the real or vector valued function is defined by an evaluation procedure that involves the absolute value function $|\cdot|$ apart from arithmetic operations and $\bar{d}$ …

Analysis of Energy Markets Modeled as Equilibrium Problems with Equilibrium Constraints

Equilibrium problems with equilibrium constraints are challenging both theoretically and computationally. However, they are suitable modeling formulations in a number of important areas, such as energy markets, transportation planning, and logistics. Typically, these problems are characterized as bilevel Nash-Cournot games. For instance, determining the equilibrium price in an energy market involves top-level decisions of …

Characterization of an Anomalous Behavior of a Practical Smoothing Technique

A practical smoothing method was analyzed and tested against state-of-the-art solvers for some non-smooth optimization problems in [BSS20a; BSS20b]. This method can be used to smooth the value functions and solution mappings of fully parameterized convex problems under mild conditions. In general, the smoothing of the value function lies above the true value function …

The block mutual coherence property condition for signal recovery

Compressed sensing shows that a sparse signal can be stably recovered from incomplete linear measurements. In practical applications, however, some signals have additional structure, where the nonzero elements arise in blocks. Such signals are called block-sparse. In this paper, the $\ell_2/\ell_1-\alpha\ell_2$ minimization method for the stable recovery of block-sparse signals is investigated. …