An Average Curvature Accelerated Composite Gradient Method for Nonconvex Smooth Composite Optimization Problems

This paper presents an accelerated composite gradient (ACG) variant, referred to as the AC-ACG method, for solving nonconvex smooth composite minimization problems. As opposed to well-known ACG variants that are based either on a known Lipschitz constant of the gradient or on a sequence of maximum observed curvatures, the current one is based on a sequence of average …
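
To make the average-curvature idea concrete, here is a minimal sketch, not the authors' AC-ACG method itself (which also involves acceleration and safeguards): a composite gradient step whose stepsize is the reciprocal of the average of the curvatures observed so far. The quadratic f, the l1 term h, and all parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # symmetric, possibly indefinite: f is nonconvex
b = rng.standard_normal(n)
lam = 0.1

f = lambda x: 0.5 * x @ A @ x - b @ x                                     # smooth nonconvex part
grad = lambda x: A @ x - b
prox_h = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)   # prox of t*lam*||.||_1

x = np.zeros(n)
M = 1.0                                # current curvature estimate
observed = []
for k in range(200):
    x_new = prox_h(x - grad(x) / M, 1.0 / M)   # composite gradient step with stepsize 1/M
    d = x_new - x
    # curvature observed along the step, floored away from zero for safety
    C = max(2.0 * (f(x_new) - f(x) - grad(x) @ d) / (d @ d + 1e-16), 1e-3)
    observed.append(C)
    M = np.mean(observed)              # average, rather than maximum, of observed curvatures
    x = x_new
```

Using the average rather than the maximum lets the stepsize recover after an occasional large observed curvature, which is the contrast with max-curvature ACG variants that the abstract draws.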

Derivative-Free Superiorization: Principle and Algorithm

The superiorization methodology is intended to work with the input data of constrained minimization problems, that is, a target function and a set of constraints. However, it is based on a way of thinking antipodal to the one that leads to constrained minimization methods. Instead of adapting unconstrained minimization algorithms to handle constraints, it adapts feasibility-seeking algorithms to …
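
As a rough illustration of the principle, the following toy sketch (under assumed data, not the paper's derivative-free superiorization algorithm) interlaces a feasibility-seeking sweep of projections onto half-spaces with derivative-free, nonascent perturbations of a target function phi, using only function values of phi.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((8, n))
b = A @ rng.standard_normal(n) + 0.5          # half-space system a_i^T x <= b_i with nonempty interior
phi = lambda x: np.sum(np.abs(x))             # target function; only evaluations are used

def feasibility_sweep(x):
    # One sweep of sequential orthogonal projections onto the half-spaces.
    for a, bi in zip(A, b):
        viol = a @ x - bi
        if viol > 0:
            x = x - viol * a / (a @ a)
    return x

x = rng.standard_normal(n)
beta = 1.0                                    # perturbation sizes shrink geometrically (summable)
for k in range(100):
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    if phi(x + beta * v) <= phi(x):           # derivative-free nonascent test
        x = x + beta * v
    beta *= 0.9
    x = feasibility_sweep(x)                  # the feasibility-seeking step remains in charge
```

The perturbations are bounded and vanishing, so the feasibility-seeking behavior is preserved while the target function is steered downward when possible.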

Domain-Driven Solver (DDS): a MATLAB-based Software Package for Convex Optimization Problems in Domain-Driven Form

Domain-Driven Solver (DDS) is a MATLAB-based software package for convex optimization problems in Domain-Driven form [11]. The current version of DDS accepts every combination of the following function/set constraints: (1) symmetric cones (LP, SOCP, and SDP); (2) quadratic constraints; (3) direct sums of an arbitrary collection of 2-dimensional convex sets defined as the epigraphs of …
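
For orientation, the Domain-Driven form of [11] can be written, up to notational choices (an assumption here), as

\[
\min_{x \in \mathbb{R}^n} \ \langle c, x \rangle \quad \text{subject to} \quad Ax \in D, \qquad D = \operatorname{cl}(\operatorname{dom}\,\Phi),
\]

where D is a closed convex set given as the closure of the domain of a suitable self-concordant barrier \Phi. Each of the constraint classes listed above corresponds to a choice of such a set D.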

On the intrinsic core of convex cones in real linear spaces

Convex cones play an important role in nonlinear analysis and optimization theory. In particular, specific normal cones and tangent cones are known to be convex cones, and it is a crucial fact that they are useful geometric objects for describing optimality conditions. As important applications (especially, in the fields of optimal control with PDE constraints, …
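
For reference, the intrinsic core (also called the relative algebraic interior) of a convex set A in a real linear space X is standardly defined as

\[
\operatorname{icr} A = \bigl\{\, x \in A : \forall\, v \in \operatorname{span}(A - A)\ \ \exists\, \varepsilon > 0 \ \text{such that}\ x + t v \in A \ \text{for all } t \in [0, \varepsilon] \,\bigr\},
\]

a purely algebraic notion that, unlike the topological relative interior, requires no topology on X; this is the object referred to in the title, here applied to convex cones.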

A New Sequential Updating Scheme of the Lagrange Multiplier for Multi-Block Linearly Constrained Separable Convex Optimization with Relaxed Step Sizes

In various applications such as signal/image processing, data mining, and statistical learning, the multi-block linearly constrained separable convex optimization problem arises frequently: the objective function is the sum of multiple individual convex functions, and the major constraints are linear. A classical method for solving this kind of problem is the alternating …
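
In commonly used notation (assumed here for illustration), the problem class is

\[
\min_{x_1,\dots,x_m} \ \sum_{i=1}^{m} f_i(x_i) \quad \text{subject to} \quad \sum_{i=1}^{m} A_i x_i = b,
\]

with each f_i convex. ADMM-type methods minimize the augmented Lagrangian block by block and then update the Lagrange multiplier, e.g. \(\lambda \leftarrow \lambda - \tau \beta \bigl(\sum_{i=1}^{m} A_i x_i - b\bigr)\), where \beta is the penalty parameter and \tau the step-size (relaxation) factor whose admissible range is the subject of schemes like the one proposed here.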

On the asymptotic convergence and acceleration of gradient methods

We consider the asymptotic behavior of a family of gradient methods, which include the steepest descent and minimal gradient methods as special instances. It is proved that each method in the family will asymptotically zigzag between two directions. Asymptotic convergence results of the objective value, gradient norm, and stepsize are presented as well. To accelerate …
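
The zigzag phenomenon is easy to observe numerically. The toy script below (an illustration, not the paper's analysis) runs steepest descent with the exact Cauchy stepsize on a diagonal convex quadratic and checks that gradient directions two iterations apart become parallel.

```python
import numpy as np

# f(x) = 0.5 x^T A x with well-separated eigenvalues
A = np.diag([1.0, 10.0, 100.0])
x = np.array([1.0, 1.0, 1.0])
dirs = []
for k in range(40):
    g = A @ x
    dirs.append(g / np.linalg.norm(g))
    alpha = (g @ g) / (g @ (A @ g))   # Cauchy (steepest descent) stepsize
    x -= alpha * g

# Asymptotic zigzag: gradients two iterations apart align with each other,
# so the iterates alternate between two fixed directions.
for k in range(35, 39):
    print(k, abs(dirs[k] @ dirs[k - 2]))   # -> 1.0
```

The minimal gradient method replaces the Cauchy stepsize by \((g^\top A g)/(g^\top A^2 g)\) and exhibits the same two-direction asymptotic behavior.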

Relations Between Abs-Normal NLPs and MPCCs Part 2: Weak Constraint Qualifications

This work continues an ongoing effort to compare non-smooth optimization problems in abs-normal form to Mathematical Programs with Complementarity Constraints (MPCCs). We study general Nonlinear Programs with equality and inequality constraints in abs-normal form, so-called Abs-Normal NLPs, and their relation to equivalent MPCC reformulations. We introduce the concepts of Abadie’s and Guignard’s kink qualification and …
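
As background (in notation assumed here for illustration), such MPCC reformulations rest on the standard device of splitting each switching variable and its absolute value:

\[
z = u - v, \qquad |z| = u + v, \qquad u \ge 0, \quad v \ge 0, \quad u^\top v = 0,
\]

so that the nonsmooth abs-normal NLP becomes a program with complementarity constraints, and kink qualifications for the former can be compared with MPCC constraint qualifications for the latter.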

Gaining traction – On the convergence of an inner approximation scheme for probability maximization

We analyze an inner approximation scheme for probability maximization. The approach was proposed in Fabian, Csizmas, Drenyovszki, Van Ackooij, Vajnai, Kovacs, and Szantai (2018), Probability maximization by inner approximation, Acta Polytechnica Hungarica 15:105-125, as an analogue of a classic dual approach in the handling of probabilistic constraints. Even a basic implementation of the maximization scheme proved …
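
In generic form (notation assumed here for illustration), the problem is to maximize a probability over a convex feasible set,

\[
\max_{x \in X} \ \mathbb{P}\bigl( T x \ge \xi \bigr),
\]

with \xi a random vector; roughly speaking, the cited scheme replaces the probability function by tractable inner (polyhedral) models built from function evaluations, in analogy with dual approaches to probabilistic constraints.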

Error Bounds and Singularity Degree in Semidefinite Programming

In semidefinite programming a proposed optimal solution may be quite poor in spite of having a sufficiently small residual in the optimality conditions. This issue may be framed in terms of the discrepancy between forward error (the unmeasurable 'true error') and backward error (the measurable violation of the optimality conditions). In his seminal work, Sturm provided an …
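
For context, Sturm's error bound ties the two quantities together through the singularity degree d of the problem. Roughly, with constants and the precise residual measure omitted,

\[
\operatorname{dist}\bigl(X, \mathcal{S}\bigr) \;\le\; c\, \varepsilon^{\,2^{-d}},
\]

where \varepsilon is the backward error of a candidate solution X and \mathcal{S} is the solution set; a larger singularity degree thus permits a much larger forward error for the same measured residual.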

A Data Efficient and Feasible Level Set Method for Stochastic Convex Optimization with Expectation Constraints

Stochastic convex optimization problems with expectation constraints (SOECs) are encountered in statistics and machine learning, business, and engineering. In data-rich environments, the SOEC objective and constraints contain expectations defined with respect to large datasets. Therefore, efficient algorithms for solving such SOECs need to limit the fraction of data points that they use, which we refer …
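
A generic SOEC can be written (notation assumed here for illustration) as

\[
\min_{x \in \mathcal{X}} \ \mathbb{E}\bigl[ f(x, \xi) \bigr] \quad \text{subject to} \quad \mathbb{E}\bigl[ g(x, \xi) \bigr] \le 0,
\]

where the expectations are taken over a distribution of \xi defined by a large dataset; an exact evaluation of the objective or constraint therefore touches every data point, which is precisely what a data-efficient method must avoid.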