On the Optimal Proximal Parameter of an ADMM-like Splitting Method for Separable Convex Programming

An ADMM-based splitting method is proposed in [11] for solving convex minimization problems with linear constraints and multi-block separable objective functions; however, a relatively large proximal parameter is required to theoretically ensure convergence. In this paper, we study this method further and find its optimal (smallest) proximal parameter. For succinctness, we focus on the …
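For orientation, a proximally regularized ADMM subproblem (written generically for two blocks; not necessarily the exact scheme of [11]) augments each block update with a quadratic proximal term whose weight is the parameter in question:
\[
x_1^{k+1} \in \arg\min_{x_1}\;\Big\{\theta_1(x_1) - (\lambda^k)^\top A_1 x_1 + \frac{\beta}{2}\,\|A_1 x_1 + A_2 x_2^k - b\|^2 + \frac{\tau}{2}\,\|x_1 - x_1^k\|^2\Big\}.
\]
The larger the proximal parameter $\tau$, the easier the convergence proof but the more conservative the step; the question addressed here is how small $\tau$ can be taken while convergence is still guaranteed.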

Manifold Sampling for Optimization of Nonconvex Functions that are Piecewise Linear Compositions of Smooth Components

We develop a manifold sampling algorithm for minimizing a nonsmooth composite function $f := \psi + h \circ F$, where $\psi$ is smooth with known derivatives, $h$ is a known, nonsmooth, piecewise linear function, and $F$ is smooth but expensive to evaluate. The trust-region algorithm classifies points in the domain of $h$ as …
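To illustrate the composite structure (a typical instance only; the specific $h$ and $F$ are problem-dependent), one may take $h$ to be a polyhedral function such as the $\ell_1$ norm:
\[
f(x) = \psi(x) + h\bigl(F(x)\bigr), \qquad h(z) = \|z\|_1 = \max_{\sigma \in \{-1,1\}^p} \sigma^\top z,
\]
so that $h$ is piecewise linear and its "manifolds" correspond to the affine pieces (here, the sign patterns of $F(x)$) that are active at a given point.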

Optimal Linearized Alternating Direction Method of Multipliers for Convex Programming

The alternating direction method of multipliers (ADMM) is widely used in a variety of areas, and its variants tailored to different application scenarios have been studied in depth in the literature. Among them, the linearized ADMM has received particularly wide attention because of its efficiency and ease of implementation. To theoretically guarantee …
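For reference, the linearization in question (stated generically for a two-block problem $\min\{\theta_1(x)+\theta_2(y) : Ax + By = b\}$; notation may differ from the paper) replaces the quadratic penalty in the $x$-subproblem of ADMM by its linearization at $x^k$ plus a proximal term:
\[
x^{k+1} \in \arg\min_x\;\Big\{\theta_1(x) - (\lambda^k)^\top A x + \beta\,(A x^k + B y^k - b)^\top A x + \frac{\tau}{2}\,\|x - x^k\|^2\Big\},
\]
which is often much easier to solve than the original subproblem. A commonly used sufficient condition for convergence is $\tau \ge \beta\,\|A^\top A\|$; how far this requirement can be relaxed is precisely the question behind an "optimal" linearized ADMM.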

Linearized version of the generalized alternating direction method of multipliers for three-block separable convex minimization problem

Recently, the generalized alternating direction method of multipliers (GADMM) proposed by Eckstein and Bertsekas has received wide attention, especially in numerous applications. In this paper, we develop a new linearized version of the generalized alternating direction method of multipliers (L-GADMM) for linearly constrained separable convex programming problems whose objective function is the sum of …
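For orientation, the generalization due to Eckstein and Bertsekas introduces a relaxation factor $\alpha \in (0,2)$: for a two-block problem $\min\{\theta_1(x)+\theta_2(y) : Ax + By = b\}$ (written generically here), after the $x$-update one forms
\[
\rho^k = \alpha\, A x^{k+1} - (1-\alpha)\,(B y^k - b)
\]
and uses $\rho^k$ in place of $A x^{k+1}$ in the $y$-subproblem and in the multiplier update; $\alpha = 1$ recovers the original ADMM.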

Constraints reduction programming by subset selection: a study from numerical aspect

We consider a novel method, called constraints reduction programming, which aims to reduce the number of constraints in an optimization model. The method is motivated by various applications in management and decision making, and has the potential to handle a wider range of applications. Due to the high combinatorial complexity of the underlying model, it is difficult to …

Inner Conditions for Error Bounds and Metric Subregularity of Multifunctions

We introduce a new class of sets, functions, and multifunctions which is shown to be large and to share some nice properties with the convex setting. Error bounds for objects attached to this class are characterized in terms of inner conditions of Abadie's type, that is, conditions bearing on normal cones and coderivatives at …
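For orientation, the two properties in the title are usually stated as follows (standard definitions; the paper characterizes them via inner conditions on normal cones and coderivatives). A multifunction $F$ is metrically subregular at $(\bar x, \bar y) \in \operatorname{gph} F$ if there exist $\kappa > 0$ and a neighborhood $U$ of $\bar x$ such that
\[
d\bigl(x, F^{-1}(\bar y)\bigr) \;\le\; \kappa\, d\bigl(\bar y, F(x)\bigr) \qquad \text{for all } x \in U,
\]
and a function $f$ with $S = \{x : f(x) \le 0\} \neq \emptyset$ admits a local error bound at $\bar x \in S$ if
\[
d(x, S) \;\le\; c\,[f(x)]_+ \qquad \text{for all } x \text{ near } \bar x.
\]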

An incremental mirror descent subgradient algorithm with random sweeping and proximal step

We investigate the convergence properties of incremental mirror descent subgradient algorithms for minimizing a sum of convex functions. In each step we evaluate the subgradient of only a single component function and mirror it back to the feasible domain, which makes the iterations very cheap to compute. The analysis is carried out for a randomized selection …
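As a rough sketch of one iteration (generic notation; the randomized sweeping and the proximal step of the paper may differ in detail): to minimize $\sum_i f_i$ over a convex set $C$ with distance-generating function $\omega$ and Bregman distance $D_\omega$, pick an index $i_k$, take $g_k \in \partial f_{i_k}(x^k)$, and set
\[
x^{k+1} \in \arg\min_{x \in C}\;\Big\{\langle g_k, x\rangle + \frac{1}{\alpha_k}\, D_\omega(x, x^k)\Big\},
\]
so each iteration touches only one component of the sum, which is what keeps the per-iteration cost low.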

Convergence Analysis of Processes with Valiant Projection Operators in Hilbert Space

Convex feasibility problems require finding a point in the intersection of a finite family of convex sets. We propose to solve such problems by performing set-enlargements and applying a new kind of projection operator, called the valiant projector. A valiant projector onto a convex set implements a special relaxation strategy, proposed by Goffin in 1971, …
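For background, the classical relaxed projection onto a convex set $C$ (the device on which such relaxation strategies are built; the valiant projector itself is defined in the paper) is
\[
P_{C,\lambda}(x) = x + \lambda\bigl(P_C(x) - x\bigr), \qquad \lambda \in (0, 2],
\]
where $P_C$ is the metric projection; the set-enlargement step presumably replaces each set by a slightly larger one before such an operator is applied.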

Dynamic Scaling and Submodel Selection in Bundle Methods for Convex Optimization

Bundle methods determine the next candidate point as the minimizer of a cutting model augmented with a proximal term. We propose a dynamic approach for choosing a quadratic proximal term based on subgradient information from past evaluations. For the special case of convex quadratic functions, conditions are studied under which this actually reproduces the Hessian. …
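For reference, with a cutting model built from past evaluation points $y^i$ and subgradients $g^i \in \partial f(y^i)$, the candidate point solves (generic notation)
\[
x^{k+1} \in \arg\min_x\;\Big\{\max_{i \in B_k}\bigl\{ f(y^i) + \langle g^i, x - y^i\rangle \bigr\} + \tfrac{1}{2}\,(x - \hat x^k)^\top H_k\, (x - \hat x^k)\Big\},
\]
where $\hat x^k$ is the current stability center and $H_k$ defines the quadratic proximal term; choosing $H_k$ dynamically from accumulated subgradient information is the scaling referred to in the title.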

Computing closest stable non-negative matrices

The problem of finding the closest stable matrix for a dynamical system has many applications. It is well studied for both continuous- and discrete-time systems, and the corresponding optimization problems are formulated for various matrix norms. As a rule, the non-convexity of these formulations precludes finding their global solutions. In this paper, we analyze positive …
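To fix a concrete instance (discrete-time case with the Frobenius norm; the continuous-time case and other norms are analogous), the underlying problem has the form
\[
\min_{X}\;\|A - X\|_F \qquad \text{s.t.}\qquad X \ge 0,\;\; \rho(X) \le 1,
\]
where $\rho$ denotes the spectral radius and $X \ge 0$ is understood entrywise. The non-negative case is special because, by Perron–Frobenius theory, $\rho(X)$ depends monotonically on the entries of $X$, a structure that such analyses can exploit.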