A New Error Bound Result for Generalized Nash Equilibrium Problems and its Algorithmic Application

We present a new algorithm for the solution of Generalized Nash Equilibrium Problems (GNEPs). This hybrid method combines the robustness of a potential reduction algorithm with the local quadratic convergence rate of the LP-Newton method. We base our local convergence theory on an error bound and provide a new sufficient condition for it to hold that …
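For orientation (a hedged sketch, not taken from the truncated abstract): writing the concatenated KKT conditions of the GNEP as a system $F(z)=0$ with solution set $Z^*$, a local error bound of the kind such convergence theories rest on requires constants $c,\delta>0$ with

\[
\operatorname{dist}(z, Z^*) \;\le\; c\,\|F(z)\| \qquad \text{whenever } \operatorname{dist}(z, Z^*) \le \delta;
\]

the residual actually used by the authors, and their new sufficient condition, are not visible in the excerpt.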

An interior point method with a primal-dual quadratic barrier penalty function for nonlinear semidefinite programming

In this paper, we consider an interior point method for nonlinear semidefinite programming. Yamashita, Yabe, and Harada presented a primal-dual interior point method that uses a nondifferentiable merit function. By using shifted barrier KKT conditions, we propose a differentiable primal-dual merit function within the framework of a line search strategy, and prove the …
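For context (a hedged sketch under the standard formulation $\min f(x)$ s.t. $g(x)=0$, $X(x)\succeq 0$, which the excerpt does not spell out), barrier KKT conditions with parameter $\mu>0$ typically read

\[
\nabla f(x) - \nabla g(x)\,y - \mathcal{A}^*(x)\,Z = 0, \qquad g(x) = 0, \qquad X(x)\,Z = \mu I, \qquad X(x) \succ 0,\ Z \succ 0,
\]

where $\mathcal{A}^*(x)$ is the adjoint of the derivative of the matrix constraint $X(x)$; the shifted variant mentioned in the abstract perturbs these conditions so that a differentiable merit function can be constructed, in a way the excerpt does not specify.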

A Framework of Constraint Preserving Update Schemes for Optimization on Stiefel Manifold

This paper considers optimization problems on the Stiefel manifold $X^TX=I_p$, where $X\in \mathbb{R}^{n \times p}$ is the variable and $I_p$ is the $p$-by-$p$ identity matrix. A framework of constraint-preserving update schemes is proposed by decomposing each feasible point into the range space of $X$ and the null space of $X^T$. While this general framework …
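The excerpt does not show the schemes themselves; as a hedged illustration only, here is one well-known update of this constraint-preserving type (a Cayley-transform step in the spirit of Wen and Yin, not necessarily among the schemes of the paper), sketched with NumPy:

```python
# A hedged sketch, assuming a Cayley-type update (one known instance of a
# constraint-preserving scheme); names here are illustrative only.
import numpy as np

def cayley_step(X, G, tau):
    """Feasible update X(tau) = (I + tau/2 W)^{-1} (I - tau/2 W) X with
    W = G X^T - X G^T skew-symmetric; G plays the role of the gradient."""
    n = X.shape[0]
    W = G @ X.T - X @ G.T                      # skew-symmetric by construction
    I = np.eye(n)
    # Since W is skew-symmetric, the Cayley transform below is orthogonal,
    # so X(tau) stays on the Stiefel manifold for every step size tau.
    return np.linalg.solve(I + 0.5 * tau * W, X - 0.5 * tau * (W @ X))

# Usage: feasibility is preserved up to rounding error.
rng = np.random.default_rng(0)
n, p = 8, 3
X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # random feasible point
G = rng.standard_normal((n, p))                    # stand-in for a gradient
Y = cayley_step(X, G, tau=0.1)
print(np.linalg.norm(Y.T @ Y - np.eye(p)))         # ~ 1e-15
```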

Alternating Proximal Gradient Method for Convex Minimization

In this paper, we propose an alternating proximal gradient method that solves convex minimization problems with three or more separable blocks in the objective function. Our method is based on the framework of the alternating direction method of multipliers (ADMM). The main computational effort in each iteration of the proposed method is to compute the proximal mappings …
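For reference (standard background, not specific to the paper), the proximal mapping of a closed convex function $g$ with parameter $\lambda>0$ is

\[
\operatorname{prox}_{\lambda g}(v) \;=\; \arg\min_{x}\Big\{\, g(x) + \tfrac{1}{2\lambda}\|x - v\|_2^2 \,\Big\};
\]

for example, for $g = \|\cdot\|_1$ it reduces to componentwise soft-thresholding, $\big(\operatorname{prox}_{\lambda\|\cdot\|_1}(v)\big)_i = \operatorname{sign}(v_i)\max(|v_i| - \lambda,\, 0)$.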

Proximal Point Method for Minimizing Quasiconvex Locally Lipschitz Functions on Hadamard Manifolds

In this paper we propose an extension of the proximal point method to solve minimization problems with quasiconvex, locally Lipschitz objective functions on Hadamard manifolds. To reach this goal, we use the concept of the Clarke subdifferential on Hadamard manifolds and, assuming that the function is bounded from below, we prove the global convergence of the …
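As a hedged sketch of the iteration being extended (the exact regularization used by the authors is not visible in the excerpt), the proximal point method on a Hadamard manifold $M$ with Riemannian distance $d$ typically takes the form

\[
x^{k+1} \in \arg\min_{x \in M}\Big\{\, f(x) + \tfrac{\lambda_k}{2}\, d(x, x^k)^2 \,\Big\}, \qquad \lambda_k > 0,
\]

which recovers the classical proximal point iteration when $M = \mathbb{R}^n$.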

Globally Convergent Evolution Strategies and CMA-ES

In this paper we show how to modify a large class of evolution strategies (ES) to rigorously achieve a form of global convergence, meaning convergence to stationary points independently of the starting point. The type of ES under consideration recombines the parents by means of a weighted sum, around which the offspring are computed by …
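As a hedged illustration of this recombination pattern (a bare-bones sketch with a fixed step size and identity covariance, so it omits the adaptation machinery of CMA-ES and the modifications the paper introduces):

```python
# Minimal ES generation: sample offspring around the current point, keep the
# mu best, recombine them by a weighted sum. Fixed sigma is an assumption.
import numpy as np

def es_iteration(f, x, sigma, lam, mu, rng):
    """One generation of a (mu/mu_w, lam)-style ES with weighted recombination."""
    n = x.size
    offspring = x + sigma * rng.standard_normal((lam, n))
    order = np.argsort([f(y) for y in offspring])[:mu]   # indices of mu best
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))  # common log weights
    w /= w.sum()
    return w @ offspring[order]      # weighted sum of the mu best offspring

# Usage on a simple quadratic:
rng = np.random.default_rng(1)
f = lambda z: float(z @ z)
x = rng.standard_normal(5)
for _ in range(200):
    x = es_iteration(f, x, sigma=0.3, lam=20, mu=5, rng=rng)
print(f(x))   # far below the starting value; fixed sigma limits final accuracy
```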

Inexact Restoration method for Derivative-Free Optimization with smooth constraints

A new method is introduced for solving constrained optimization problems in which the derivatives of the constraints are available but the derivatives of the objective function are not. The method is based on the Inexact Restoration framework, by means of which each iteration is divided into two phases. In the first phase one considers only …
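As a hedged sketch of the generic two-phase structure (written for equality constraints $h(x)=0$; the paper's precise formulation is not visible in the excerpt): the restoration phase asks for a point $y^k$ with reduced infeasibility,

\[
\|h(y^k)\| \le r\,\|h(x^k)\|, \qquad 0 < r < 1,
\]

while the minimization phase decreases the objective along directions tangent to the constraints, e.g. $\min_d f(y^k + d)$ subject to $\nabla h(y^k)^T d = 0$ and $\|d\| \le \Delta_k$, with $f$ handled by derivative-free models in the setting of this abstract.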

Conjugate gradient methods based on secant conditions that generate descent search directions for unconstrained optimization

Conjugate gradient methods have attracted attention because they can be applied directly to large-scale unconstrained optimization problems. In order to incorporate second-order information of the objective function into conjugate gradient methods, Dai and Liao (2001) proposed a conjugate gradient method based on the secant condition. However, their method does not necessarily generate …
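For reference, the Dai–Liao direction in question is (with $g_k = \nabla f(x_k)$, $s_{k-1} = x_k - x_{k-1}$, $y_{k-1} = g_k - g_{k-1}$, and a parameter $t \ge 0$)

\[
d_0 = -g_0, \qquad d_k = -g_k + \beta_k^{\mathrm{DL}}\, d_{k-1}, \qquad
\beta_k^{\mathrm{DL}} = \frac{g_k^{T}(y_{k-1} - t\, s_{k-1})}{d_{k-1}^{T} y_{k-1}},
\]

and the issue raised by the abstract is that $d_k$ need not satisfy the descent condition $g_k^T d_k < 0$.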

A short note on the global convergence of the unmodified PRP method

It is well known that the search direction generated by the standard (unmodified) PRP nonlinear conjugate gradient method is not necessarily a descent direction of the objective function, which complicates its global convergence analysis for general functions. However, to our surprise, it is easily proved in this short note that the unmodified PRP method still …
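For reference, the standard PRP direction is

\[
d_0 = -g_0, \qquad d_k = -g_k + \beta_k^{\mathrm{PRP}}\, d_{k-1}, \qquad
\beta_k^{\mathrm{PRP}} = \frac{g_k^{T}(g_k - g_{k-1})}{\|g_{k-1}\|^2},
\]

with $g_k = \nabla f(x_k)$; nothing in this formula forces $g_k^T d_k < 0$.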

Constrained Derivative-Free Optimization on Thin Domains

Many derivative-free methods for constrained problems are not efficient for minimizing functions on “thin” domains. Other algorithms, like those based on Augmented Lagrangians, deal with thin constraints using penalty-like strategies. When the constraints are computationally inexpensive but highly nonlinear, these methods spend many potentially expensive objective function evaluations coping with the difficulty of improving feasibility. …