A Comparison of Nonsmooth, Nonconvex, Constrained Optimization Solvers for the Design of Time-Delay Compensators

We present a detailed set of performance comparisons of two state-of-the-art solvers applied to the design of time-delay compensators, an important problem in the field of robust control. Formulating such robust control mechanisms as constrained optimization problems often involves objective and constraint functions that are both nonconvex and nonsmooth, properties that present significant challenges … Read more
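
For orientation, problems of this class can be written generically as below; this template is illustrative and not necessarily the exact benchmark formulation used in the comparison:

```latex
\min_{x \in \mathbb{R}^n} \; f(x)
\quad \text{subject to} \quad
c_i(x) \le 0, \quad i = 1, \dots, m,
```

where, in controller design, $f$ might be a spectral abscissa and the $c_i$ norm bounds on closed-loop transfer functions; such functions are typically both nonconvex and nonsmooth.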

Basis Pursuit Denoise with Nonsmooth Constraints

Level-set optimization formulations with data-driven constraints minimize a regularization functional subject to matching observations to a given error level. These formulations are widely used, particularly for matrix completion and for sparsity promotion in data interpolation and denoising. The misfit level is typically measured in the ℓ2 norm or another smooth metric. In this paper, we present … Read more
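
For reference, the classical basis pursuit denoise formulation underlying this line of work is the level-set problem

```latex
\min_{x} \; \|x\|_1
\quad \text{subject to} \quad
\|Ax - b\|_2 \le \sigma,
```

where $\sigma$ is the prescribed error level; the extension discussed here replaces the smooth $\ell_2$ misfit with nonsmooth alternatives.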

Parallelizing Subgradient Methods for the Lagrangian Dual in Stochastic Mixed-Integer Programming

The Lagrangian dual arising from the dual decomposition of stochastic mixed-integer programs can be solved by the projected subgradient algorithm. We show how to make this algorithm more amenable to parallelization in a master-worker model by describing two approaches, which can be combined in a natural way. The first approach partitions the scenarios into batches, and makes separate use of … Read more
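
A minimal sketch of the batched, projected-subgradient idea is given below; the helpers `solve_scenario` and `project` are hypothetical placeholders for the per-scenario MIP solve and the projection onto the dual feasible set, and the incremental batch updates are only one way such a scheme could be arranged:

```python
import numpy as np

def batched_projected_subgradient(batches, lam0, step, iters, solve_scenario, project):
    """Illustrative sketch: projected subgradient ascent on the Lagrangian dual,
    processing scenarios in batches (each batch could be assigned to a worker).

    solve_scenario(s, lam) -> (scenario objective, subgradient contribution)
    project(lam)           -> projection onto the dual feasible set
    """
    lam = np.asarray(lam0, dtype=float)
    for k in range(iters):
        for batch in batches:
            g = np.zeros_like(lam)
            for s in batch:
                _, g_s = solve_scenario(s, lam)   # one scenario subproblem (a MIP)
                g += g_s
            # diminishing step size, then project back onto the dual feasible set
            lam = project(lam + (step / (k + 1)) * g)
    return lam
```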

Acceleration of Primal-Dual Methods by Preconditioning and Simple Subproblem Procedures

Primal-Dual Hybrid Gradient (PDHG) and Alternating Direction Method of Multipliers (ADMM) are two widely used first-order optimization methods. They reduce a difficult problem to simple subproblems, so they are easy to implement and have many applications. As first-order methods, however, they are sensitive to problem conditioning and can struggle to reach the desired accuracy. To improve … Read more
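
As a reminder of the baseline being accelerated, a generic (unpreconditioned) PDHG iteration for min_x f(x) + g(Ax) is sketched below; preconditioning, in the sense studied here, would replace the scalar step sizes `tau` and `sigma` with suitable matrices. The proximal operators are supplied by the caller:

```python
import numpy as np

def pdhg(A, prox_f, prox_gstar, tau, sigma, x0, y0, iters):
    """Sketch of plain PDHG for min_x f(x) + g(Ax).

    prox_f(v, t)     -> prox of t*f at v
    prox_gstar(v, t) -> prox of t*g^* (convex conjugate of g) at v
    """
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(iters):
        x_new = prox_f(x - tau * (A.T @ y), tau)                    # primal step
        y = prox_gstar(y + sigma * (A @ (2.0 * x_new - x)), sigma)  # dual step with extrapolation
        x = x_new
    return x, y
```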

A New Sequential Optimality Condition for Constrained Nonsmooth Optimization

We introduce a sequential optimality condition for locally Lipschitz constrained nonsmooth optimization that is verifiable using derivative information alone and holds even in the absence of any constraint qualification. The proposed sequential optimality condition is not only novel for nonsmooth problems but also yields new insights for the smooth case. We present a practical algorithm … Read more
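
For context, the classical smooth-case prototype of such a condition is the approximate-KKT (AKKT) property: for min f(x) subject to g_i(x) ≤ 0, every local minimizer x* admits sequences x^k → x* and λ^k ≥ 0 with

```latex
\nabla f(x^k) + \sum_{i=1}^{m} \lambda_i^k \nabla g_i(x^k) \to 0,
\qquad
\min\{-g_i(x^k),\, \lambda_i^k\} \to 0,
```

and this holds without any constraint qualification. The condition proposed in the paper plays the analogous role in the locally Lipschitz setting; its precise statement is not reproduced here.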

A Unified Framework for Sparse Relaxed Regularized Regression: SR3

Regularized regression problems are ubiquitous in statistical modeling, signal processing, and machine learning. Sparse regression in particular has been instrumental in scientific model discovery, including compressed sensing applications, variable selection, and high-dimensional analysis. We propose a broad framework for sparse relaxed regularized regression, called SR3. The key idea is to solve a relaxation of … Read more
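
A minimal sketch of an SR3-style alternating scheme is given below, assuming the relaxation couples the fitting variable x to the regularized variable w through a quadratic penalty, i.e. minimizing 0.5*||Ax - b||^2 + lam*R(w) + 0.5*kappa*||x - w||^2 over (x, w); the prox_R argument is a user-supplied proximal operator:

```python
import numpy as np

def sr3(A, b, lam, kappa, prox_R, iters=100):
    """Sketch of SR3-style alternating minimization for
    0.5*||Ax - b||^2 + lam*R(w) + 0.5*kappa*||x - w||^2.

    prox_R(v, t) -> prox of t*R at v (soft-thresholding when R = ||.||_1).
    """
    n = A.shape[1]
    x, w = np.zeros(n), np.zeros(n)
    H = A.T @ A + kappa * np.eye(n)          # x-update is ridge-type least squares
    for _ in range(iters):
        x = np.linalg.solve(H, A.T @ b + kappa * w)
        w = prox_R(x, lam / kappa)           # w-update is a single prox step
    return x, w
```

For R = ||.||_1, one would pass prox_R = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0).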

Nonmonotonicity and Quasiconvexity on Equilibrium Problems

In this note, we introduce some results under the assumptions of quasiconvexity and nonmonotonicity; finally, an application and an approach to solving the quasiconvex equilibrium problem are presented in light of these new results.
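
For context, the equilibrium problem referred to is the standard one: given a nonempty closed convex set $K$ and a bifunction $f$ with $f(x, x) = 0$,

```latex
\text{find } x^* \in K \ \text{such that} \ f(x^*, y) \ge 0 \quad \text{for all } y \in K,
```

which covers optimization, variational inequality, and Nash equilibrium problems as special cases; quasiconvexity is typically imposed on the map $y \mapsto f(x, y)$.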

Inexact alternating projections on nonconvex sets

Given two arbitrary closed sets in Euclidean space, a simple transversality condition guarantees that the method of alternating projections converges locally, at a linear rate, to a point in the intersection. Exact projection onto nonconvex sets is typically intractable, but we show that computationally cheap inexact projections may suffice instead. In particular, if one set is defined … Read more
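
The basic iteration in question is simple enough to state in a few lines; the sketch below allows the projection operators themselves to be inexact (e.g. approximate nearest-point computations), which is the regime the paper analyzes:

```python
import numpy as np

def alternating_projections(proj_A, proj_B, x0, iters=100):
    """Method of alternating projections between two closed sets A and B.

    proj_A / proj_B map a point to a (possibly inexact) nearest point
    in the respective set.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = proj_B(proj_A(x))   # project onto A, then onto B
    return x
```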

Performance indicators in multiobjective optimization

In recent years, the development of new algorithms for multiobjective optimization has grown considerably. A large number of performance indicators have been introduced to measure the quality of Pareto front approximations produced by these algorithms. In this work, we propose a review of a total of 63 performance indicators, partitioned into four groups according to … Read more
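
As a concrete instance of such an indicator, the sketch below computes the hypervolume of a two-objective (minimization) nondominated set with respect to a reference point, one of the most widely used quality measures for Pareto front approximations; it assumes the input points are mutually nondominated:

```python
def hypervolume_2d(points, ref):
    """Hypervolume (dominated area) of a nondominated set of 2-objective
    minimization points, measured against a reference point ref."""
    pts = sorted(points)          # ascending f1 implies descending f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # one horizontal slab per point
        prev_f2 = f2
    return hv
```

For example, hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4)) returns 6.0.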

Discerning the linear convergence of ADMM for structured convex optimization through the lens of variational analysis

Despite the rich literature, the linear convergence of the alternating direction method of multipliers (ADMM) has not been fully understood, even in the convex case. For example, linear convergence of ADMM can be observed empirically in a wide range of applications, while existing theoretical results seem to be too stringent to be satisfied or too … Read more
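
For reference, the method under analysis is the standard scaled-form ADMM for $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$:

```latex
x^{k+1} = \arg\min_{x} \; f(x) + \tfrac{\rho}{2}\|Ax + Bz^{k} - c + u^{k}\|^2, \\
z^{k+1} = \arg\min_{z} \; g(z) + \tfrac{\rho}{2}\|Ax^{k+1} + Bz - c + u^{k}\|^2, \\
u^{k+1} = u^{k} + Ax^{k+1} + Bz^{k+1} - c,
```

whose linear convergence is observed empirically in many applications but is delicate to establish in general.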