Proximity measures based on KKT points for constrained multi-objective optimization

An important aspect of optimization algorithms, for instance evolutionary algorithms, is the termination criterion that measures the proximity of the found solution to the optimal solution set. A frequently used approach is the numerical verification of necessary optimality conditions such as the Karush-Kuhn-Tucker (KKT) conditions. In this paper, we present a proximity measure which characterizes the … Read more
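A toy illustration of verifying KKT conditions numerically: the residual function, the quadratic test problem, and the known multiplier below are all assumptions for the sketch, not the proximity measure proposed in the paper.

```python
import numpy as np

def kkt_residual(x, lam, grad_f, grad_g, g):
    """Sum of the stationarity, feasibility and complementarity residuals
    for a single smooth inequality constraint g(x) <= 0."""
    stationarity = grad_f(x) + lam * grad_g(x)
    feasibility = max(g(x), 0.0)
    complementarity = abs(lam * g(x))
    return np.linalg.norm(stationarity) + feasibility + complementarity

# Toy problem: min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 - 2 <= 0
grad_f = lambda x: 2.0 * (x - np.array([1.0, 2.0]))
grad_g = lambda x: np.array([1.0, 1.0])   # constant gradient of the constraint
g = lambda x: x[0] + x[1] - 2.0

x_star, lam_star = np.array([0.5, 1.5]), 1.0   # known optimum and multiplier
print(kkt_residual(x_star, lam_star, grad_f, grad_g, g))   # approximately 0
```

The residual vanishes only at KKT points, so its magnitude can serve as a crude proximity indicator along an algorithm's trajectory.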

On Constraint Qualifications for Second-Order Optimality Conditions Depending on a Single Lagrange Multiplier

Second-order optimality conditions play an important role in continuous optimization. In this paper, we present and discuss new constraint qualifications that ensure the validity of some well-known second-order optimality conditions. Our main interest is in second-order conditions that can be associated with numerical methods for solving constrained optimization problems. Such conditions depend on a single … Read more

Data-compatibility of algorithms

The data-compatibility approach to constrained optimization, proposed here, strives to reach a point that is “close enough” to the solution set and whose target function value is “close enough” to the constrained minimum value. These notions can replace the analysis of asymptotic convergence of infinite sequences, generated by specific algorithms, to a solution point. We consider a … Read more

Dynamic string-averaging CQ-methods for the split feasibility problem with percentage violation constraints arising in radiation therapy treatment planning

In this paper we study a feasibility-seeking problem with percentage violation constraints. These are additional constraints that are appended to an existing family of constraints; they single out certain subsets of the existing constraints and declare that up to a specified fraction of the number of constraints in each subset is allowed to be … Read more
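The CQ-type iteration underlying such methods can be sketched as follows; the instance, projections, step size, and iteration count are illustrative assumptions, not the paper's dynamic string-averaging scheme.

```python
import numpy as np

def cq_step(x, A, proj_C, proj_Q, gamma):
    """One iteration of the classical CQ method for the split feasibility
    problem: find x in C such that A @ x lies in Q."""
    Ax = A @ x
    return proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))

# Illustrative instance: C = nonnegative orthant, Q = the box [0, 1]^m.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
proj_C = lambda x: np.maximum(x, 0.0)
proj_Q = lambda y: np.clip(y, 0.0, 1.0)
gamma = 1.0 / np.linalg.norm(A, 2) ** 2    # step size in (0, 2 / ||A||^2)

x = rng.standard_normal(3)
for _ in range(2000):
    x = cq_step(x, A, proj_C, proj_Q, gamma)
# x now lies in C, with A @ x numerically close to Q
```

Each step is a projected-gradient move on the proximity function ½‖Ax − P_Q(Ax)‖², which is why the step size is tied to ‖A‖².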

An inexact augmented Lagrangian method for nonsmooth optimization on Riemannian manifold

We consider a nonsmooth optimization problem on a Riemannian manifold, whose objective function is the sum of a differentiable component and a nonsmooth convex function. We propose a manifold inexact augmented Lagrangian method (MIALM) for the considered problem. The problem is reformulated into a separable form. By utilizing the Moreau envelope, we obtain a smoothed subproblem … Read more
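The Moreau envelope used for this kind of smoothing has a simple closed form in the scalar L1 case; the sketch below is an illustration in Euclidean space rather than on a manifold, and the function names are assumptions.

```python
import numpy as np

def moreau_envelope_abs(x, mu):
    """Moreau envelope of |.|:  env_mu(x) = min_y ( |y| + (x - y)^2 / (2 mu) ).
    Its closed form is the Huber function."""
    return np.where(np.abs(x) <= mu, x ** 2 / (2.0 * mu), np.abs(x) - mu / 2.0)

def prox_abs(x, mu):
    """Proximal map of mu * |.| (soft-thresholding): the minimizer y above."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

# Consistency check: plugging the prox back into the objective
# reproduces the envelope value.
x, mu = np.array([3.0, 0.5, -2.0]), 1.0
y = prox_abs(x, mu)
env = np.abs(y) + (x - y) ** 2 / (2.0 * mu)
```

The envelope is differentiable even though |·| is not, which is what makes the smoothed subproblem amenable to smooth solvers.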

Superiorization vs. Accelerated Convex Optimization: The Superiorized/Regularized Least-Squares Case

In this paper we conduct a study of both superiorization and optimization approaches for the reconstruction problem of superiorized/regularized solutions to underdetermined systems of linear equations with nonnegativity bounds on the variables. Specifically, we study a (smoothed) total variation regularized least-squares problem with nonnegativity constraints. We consider two approaches: (a) a superiorization approach that, in contrast to … Read more
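A minimal Euclidean sketch of one ingredient, projected gradient for a nonnegativity-constrained least-squares problem; it omits the total variation term and is not one of the superiorized or accelerated schemes compared in the paper.

```python
import numpy as np

def projected_gradient_nnls(A, b, steps=200):
    """Projected gradient for min ||Ax - b||^2 subject to x >= 0;
    a bare-bones stand-in for the regularized formulations above."""
    step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = 2.0 * A.T @ (A @ x - b)
        x = np.maximum(x - step * grad, 0.0)         # gradient step, then project
    return x
```

A regularizer would simply be added to the gradient term; the projection onto the nonnegative orthant stays the same.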

Constraint-Preconditioned Krylov Solvers for Regularized Saddle-Point Systems

We consider the iterative solution of regularized saddle-point systems. When the leading block is symmetric and positive semi-definite on an appropriate subspace, Dollar, Gould, Schilders, and Wathen (SIAM J. Matrix Anal. Appl., 28(1), 2006) describe how to apply the conjugate gradient (CG) method coupled with a constraint preconditioner, a choice that has proved to be … Read more
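The block structure of a regularized saddle-point system can be sketched as below; the blocks are illustrative assumptions, and the system is solved directly here rather than by preconditioned CG.

```python
import numpy as np

# Regularized saddle-point system
#   [ H   A.T ] [x]   [b]
#   [ A  -C   ] [y] = [d]
# A constraint preconditioner keeps the A and C blocks exactly and
# replaces H by a cheaper approximation G.
n, m = 5, 2
rng = np.random.default_rng(1)
H = np.eye(n)                        # leading block (SPD here for simplicity)
A = rng.standard_normal((m, n))
C = 0.1 * np.eye(m)                  # regularization block
K = np.block([[H, A.T], [A, -C]])
rhs = rng.standard_normal(n + m)
sol = np.linalg.solve(K, rhs)        # direct solve; stand-in for iterative CG
```

With H and C positive definite the full matrix K is nonsingular even though it is indefinite, which is what the specialized Krylov machinery exploits.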

An analysis of the superiorization method via the principle of concentration of measure

The superiorization methodology is intended to work with the input data of constrained minimization problems, i.e., a target function and a constraints set. However, it is based on a way of thinking antipodal to the one that guides constrained minimization methods. Instead of adapting unconstrained minimization algorithms to handling constraints, it adapts feasibility-seeking algorithms to reduce … Read more
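The basic superiorization pattern, interlacing feasibility-seeking projections with summable objective-reducing perturbations, can be sketched as follows; the sets, objective, and perturbation schedule are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

def superiorized_projections(x, projections, grad_f, steps=100):
    """Feasibility-seeking sweeps interlaced with bounded, summable
    perturbations that nudge the iterate toward smaller f."""
    for k in range(steps):
        beta = 0.5 ** k                           # summable perturbation sizes
        g = grad_f(x)
        if np.linalg.norm(g) > 0:
            x = x - beta * g / np.linalg.norm(g)  # objective-reducing nudge
        for proj in projections:                  # feasibility-seeking sweep
            x = proj(x)
    return x

# Illustrative instance: f(x) = ||x||^2 over x >= 0 and x1 + x2 <= 1.
proj_nonneg = lambda x: np.maximum(x, 0.0)
a = np.array([1.0, 1.0])
proj_halfspace = lambda x: x - max(a @ x - 1.0, 0.0) / (a @ a) * a
x = superiorized_projections(np.array([2.0, 2.0]),
                             [proj_nonneg, proj_halfspace],
                             lambda x: 2.0 * x)
```

Because the perturbation sizes are summable, the perturbed sequence retains the feasibility-seeking behavior of the unperturbed projections while steering toward lower objective values.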

An Infeasible Interior-point Arc-search Algorithm for Nonlinear Constrained Optimization

In this paper, we propose an infeasible arc-search interior-point algorithm for solving nonlinear programming problems. Most interior-point algorithms are line-search methods in the sense that they compute the next iterate on a straight line determined by a search direction which approximates the central path. The proposed arc-search interior-point algorithm uses … Read more

On the Convergence to Stationary Points of Deterministic and Randomized Feasible Descent Directions Methods

This paper studies the class of nonsmooth nonconvex problems in which the difference between a continuously differentiable function and a convex nonsmooth function is minimized over linear constraints. Our goal is to attain a point satisfying the necessary optimality condition of stationarity, defined as the absence of feasible descent directions. Although elementary in smooth optimization, this … Read more