Necessary conditions for local optimality in d.c. programming

Using $\varepsilon$-subdifferential calculus for difference-of-convex (d.c.) programming, Dür proposed a condition sufficient for local optimality, and showed that this condition is not necessary in general. Here it is proved that whenever the convex part is strongly convex, this condition is also necessary. Strong convexity can always be ensured by changing the given d.c. decomposition slightly. …
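
To make the last claim concrete: given any d.c. decomposition $f = g - h$, a quadratic can be shifted between the two components. For any $\rho > 0$,
\[
f(x) \;=\; g(x) - h(x) \;=\; \Bigl(g(x) + \tfrac{\rho}{2}\|x\|^2\Bigr) \;-\; \Bigl(h(x) + \tfrac{\rho}{2}\|x\|^2\Bigr),
\]
where the new convex part $g + \tfrac{\rho}{2}\|\cdot\|^2$ is $\rho$-strongly convex and $h + \tfrac{\rho}{2}\|\cdot\|^2$ is still convex. This is the standard construction; the paper's precise modification may differ in detail.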

A globally convergent primal-dual interior-point filter method for nonlinear programming: new filter optimality measures and computational results

In this paper we modify the original primal-dual interior-point filter method proposed in [18] for the solution of nonlinear programming problems. We introduce two new optimality filter entries that are based on the objective function, and thus better suited to the purpose of minimization, and we propose conditions for using inexact Hessians. We show that the global convergence …
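
As background on the filter mechanism (a generic sketch, not the paper's specific objective-based entries): a filter stores pairs of infeasibility and optimality measures, and a trial point is accepted only if it is not dominated by any stored pair. The function names and the margin parameter gamma below are illustrative.

def filter_accepts(filter_pairs, theta_new, phi_new, gamma=1e-5):
    """Generic filter acceptance test (sketch, not the paper's method).

    filter_pairs is a list of pairs (theta, phi) of infeasibility and
    optimality measures.  A trial point is acceptable if it improves at
    least one measure, by a small margin, against every stored pair.
    """
    for theta, phi in filter_pairs:
        if theta_new >= (1 - gamma) * theta and phi_new >= phi - gamma * theta:
            return False  # dominated by an existing filter entry
    return True

def add_to_filter(filter_pairs, theta, phi):
    # Drop entries dominated by the new pair, then append it.
    kept = [(t, p) for (t, p) in filter_pairs if t < theta or p < phi]
    kept.append((theta, phi))
    return kept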

PSwarm: A Hybrid Solver for Linearly Constrained Global Derivative-Free Optimization

PSwarm was originally developed for the global optimization of functions without derivatives, where the variables lie within lower and upper bounds. The underlying algorithm is a pattern search method, more specifically a coordinate search method, which guarantees convergence to stationary points from arbitrary starting points. In the (optional) search step of coordinate search, …
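
For illustration, a minimal coordinate search poll step under bound constraints might look as follows. This is a sketch only: PSwarm couples such a poll with a particle-swarm search step, which is omitted here, and all names are illustrative.

import numpy as np

def coordinate_search(f, x0, lb, ub, delta=1.0, tol=1e-6, max_iter=1000):
    """Minimal coordinate (compass) search under bound constraints."""
    x = np.clip(np.asarray(x0, dtype=float), lb, ub)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):          # poll along +/- each coordinate
            for sign in (+1.0, -1.0):
                y = x.copy()
                y[i] = np.clip(y[i] + sign * delta, lb[i], ub[i])
                fy = f(y)
                if fy < fx:              # successful poll: move, keep delta
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            delta *= 0.5                 # unsuccessful poll: contract mesh
            if delta < tol:
                break
    return x, fx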

Group sparsity via linear-time projection

We present an efficient spectral projected-gradient algorithm for optimization subject to a group one-norm constraint. Our approach is based on a novel linear-time algorithm for Euclidean projection onto the one- and group one-norm constraints. Numerical experiments on large data sets suggest that the proposed method is substantially more efficient and scalable than existing methods. Citation: Technical …
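
The flavor of the projection can be seen in the simpler one-norm case. The sketch below is the classical sort-based $O(n \log n)$ variant; the paper's algorithm attains linear time (via median finding rather than a full sort), and the group case applies the same thresholding to the vector of group norms, rescaling each group accordingly.

import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto {x : ||x||_1 <= tau} (sort-based)."""
    v = np.asarray(v, dtype=float)
    u = np.abs(v)
    if u.sum() <= tau:
        return v.copy()                  # already inside the ball
    s = np.sort(u)[::-1]                 # magnitudes, descending
    cssv = np.cumsum(s) - tau
    rho = np.nonzero(s * np.arange(1, len(s) + 1) > cssv)[0][-1]
    theta = cssv[rho] / (rho + 1.0)      # soft-threshold level
    return np.sign(v) * np.maximum(u - theta, 0.0)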

A stochastic algorithm for function minimization

Focusing on the conditions an optimization problem may satisfy, so-called convergence conditions are proposed, and a stochastic optimization algorithm, named the DSZ algorithm, is then presented to deal with both unconstrained and constrained optimization problems. Its principle is discussed within the theoretical model of the DSZ algorithm, from which we present a practical …
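
The excerpt does not specify the mechanics of the DSZ algorithm, so the following is only a generic stochastic minimizer, with constraints handled by a quadratic penalty; it illustrates the problem class, not DSZ itself, and all names are illustrative.

import numpy as np

def random_search(f, x0, sigma=0.5, constraints=(), penalty=1e3,
                  max_iter=10000, seed=0):
    """Generic stochastic minimization by Gaussian random search.

    Constraints g_i(x) <= 0 are folded into a penalized merit function.
    """
    rng = np.random.default_rng(seed)
    def merit(x):
        viol = sum(max(g(x), 0.0) ** 2 for g in constraints)
        return f(x) + penalty * viol
    x = np.asarray(x0, dtype=float)
    mx = merit(x)
    for _ in range(max_iter):
        y = x + sigma * rng.standard_normal(x.shape)  # random trial point
        my = merit(y)
        if my < mx:                                   # keep improvements only
            x, mx = y, my
    return x, f(x)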

A Second Derivative SQP Method with Imposed Descent

Sequential quadratic programming (SQP) methods form a class of highly efficient algorithms for solving nonlinearly constrained optimization problems. Although second derivative information may often be calculated, there is little practical theory that justifies exact-Hessian SQP methods. In particular, the resulting quadratic programming (QP) subproblems are often nonconvex, and thus finding their global solutions may be …
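
For context, one classical SQP step for an equality-constrained problem $\min f(x)$ s.t. $c(x) = 0$ solves the QP subproblem through its KKT system, as sketched below. When the Hessian $H$ is indefinite the subproblem is nonconvex, which is precisely the difficulty the paper addresses; no convexification or descent safeguard is attempted in this sketch.

import numpy as np

def sqp_step(g, H, c, A):
    """One classical SQP step: min_p 0.5 p'Hp + g'p  s.t.  Ap + c = 0.

    Solves the QP subproblem via its KKT system; g is the objective
    gradient, H the Lagrangian Hessian, c the constraint values, and
    A the constraint Jacobian at the current iterate.
    """
    n, m = H.shape[0], A.shape[0]
    K = np.block([[H, A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g, -c])
    sol = np.linalg.solve(K, rhs)
    p, lam = sol[:n], sol[n:]            # step and multiplier estimate
    return p, lam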

A Matrix-free Algorithm for Equality Constrained Optimization Problems with Rank-deficient Jacobians

We present a line search algorithm for large-scale constrained optimization that is robust and efficient even for problems with (nearly) rank-deficient Jacobian matrices. The method is matrix-free (i.e., it does not require explicit storage or factorizations of derivative matrices), allows for inexact step computations, and is applicable for nonconvex problems. The main components of the …
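
One ingredient of such methods can be sketched in a matrix-free way: an inexact least-squares step for the linearized constraints $Jv = -c$, computed from Jacobian-vector products only. The helper below uses SciPy's LSQR, which tolerates (nearly) rank-deficient $J$; it illustrates the matrix-free idea only and is not the paper's algorithm.

import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

def normal_step_matrix_free(jvp, jtvp, c, n, atol=1e-8):
    """Inexact least-squares step for J v = -c, without forming J.

    The Jacobian is available only through the products jvp(v) = J v and
    jtvp(w) = J' w; LSQR returns a least-squares solution even when J is
    (nearly) rank-deficient, and looser tolerances give inexact steps.
    """
    m = c.shape[0]
    J = LinearOperator((m, n), matvec=jvp, rmatvec=jtvp)
    v = lsqr(J, -c, atol=atol, btol=atol)[0]
    return v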

A Primal-Dual Augmented Lagrangian

Nonlinearly constrained optimization problems can be solved by minimizing a sequence of simpler unconstrained or linearly constrained subproblems. In this paper, we discuss the formulation of subproblems in which the objective is a primal-dual generalization of the Hestenes-Powell augmented Lagrangian function. This generalization has the crucial feature that it is minimized with respect to both …
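
For equality constraints $c(x) = 0$, the classical Hestenes-Powell augmented Lagrangian with multiplier estimate $y_e$ and penalty parameter $\mu > 0$ is
\[
L_A(x;\, y_e, \mu) \;=\; f(x) - c(x)^T y_e + \frac{1}{2\mu}\,\|c(x)\|^2 .
\]
A primal-dual generalization in the spirit of the paper adds a term coupling the dual variables $y$, for example
\[
M_\nu(x, y;\, y_e, \mu) \;=\; f(x) - c(x)^T y_e + \frac{1}{2\mu}\,\|c(x)\|^2 + \frac{\nu}{2\mu}\,\|c(x) + \mu\,(y - y_e)\|^2 ,
\]
which is minimized jointly with respect to $(x, y)$; the exact form and notation used in the paper may differ.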

Concave programming for minimizing the zero-norm over polyhedral sets

Given a nonempty polyhedral set, we consider the problem of finding a vector belonging to it that has the minimum number of nonzero components, i.e., a feasible vector with minimum zero-norm. This nonsmooth combinatorial optimization problem is NP-hard and arises in various fields such as machine learning, pattern recognition, and signal processing. We propose two …
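
A classical route of this kind (possibly differing from the paper's two formulations) replaces $\|x\|_0$ by the concave surrogate $\sum_i \bigl(1 - e^{-\alpha |x_i|}\bigr)$ and minimizes it by successive linearization, i.e., one LP per iteration over the polyhedron. A sketch, with all names illustrative:

import numpy as np
from scipy.optimize import linprog

def min_zero_norm_concave(A, b, alpha=5.0, iters=20):
    """Successive linearization of a concave zero-norm surrogate.

    Minimizes sum_i (1 - exp(-alpha * y_i)) over {x : A x <= b} with
    y_i >= |x_i|, solving one LP per iteration (Frank-Wolfe style).
    """
    m, n = A.shape
    I = np.eye(n)
    # Variables z = (x, y); constraints A x <= b, x - y <= 0, -x - y <= 0.
    A_ub = np.vstack([np.hstack([A, np.zeros((m, n))]),
                      np.hstack([I, -I]),
                      np.hstack([-I, -I])])
    b_ub = np.concatenate([b, np.zeros(2 * n)])
    bounds = [(None, None)] * n + [(0, None)] * n
    x, y = np.zeros(n), np.ones(n)       # x returned unchanged if no LP succeeds
    for _ in range(iters):
        w = np.exp(-alpha * y)           # linearized weights of the concave term
        cvec = np.concatenate([np.zeros(n), w])
        res = linprog(cvec, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        if not res.success:
            break
        x, y = res.x[:n], res.x[n:]
    return x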

On mutual impact of numerical linear algebra and large-scale optimization with focus on interior point methods

The solution of KKT systems is ubiquitous in optimization methods and often dominates the computation time, especially when large-scale problems are considered. Thus, the effective implementation of such methods is highly dependent on the availability of effective linear algebra algorithms and software that are able, in turn, to take into account the specific needs of optimization. …
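
As standard background (not a formula taken from the paper, and with sign conventions varying across implementations): for $\min f(x)$ subject to $c(x) = 0$, $x \ge 0$, each interior-point iteration solves an augmented KKT system of the form
\[
\begin{pmatrix} H + X^{-1}Z & A^T \\ A & 0 \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}
=
\begin{pmatrix} r_1 \\ r_2 \end{pmatrix},
\]
where $H$ is the Hessian of the Lagrangian, $A$ the constraint Jacobian, $X = \mathrm{diag}(x)$, $Z = \mathrm{diag}(z)$ for the bound multipliers $z$, and $r_1, r_2$ are residuals. The size, sparsity, and growing ill-conditioning of this system as the barrier parameter shrinks are what tie interior-point methods so closely to numerical linear algebra.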