A proximal gradient method for ensemble density functional theory

Ensemble density functional theory is valuable for simulating metallic systems, where the spectrum of the Hamiltonian matrices has no gap. Although the widely used self-consistent field iteration can be extended to minimize the total energy functional subject to orthogonality constraints, there is no theoretical …
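
Since the title names the method class without room to define it, here is a minimal sketch of the generic proximal gradient iteration for a composite objective min f(x) + g(x), with f smooth and g admitting a cheap proximal map. This is the textbook scheme only, not the authors' ensemble-DFT algorithm; the test problem, step size, and names below are illustrative assumptions.

```python
import numpy as np

def proximal_gradient(x0, grad_f, prox_g, step=0.05, iters=500):
    """Generic proximal gradient iteration: forward (gradient) step on the
    smooth part f, then backward (proximal) step on the nonsmooth part g."""
    x = x0
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Illustrative instance: f(x) = 0.5*||Ax - b||^2 with g(x) = lam*||x||_1,
# whose proximal map is the soft-thresholding operator.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
lam = 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)
print(proximal_gradient(np.zeros(2), grad_f, prox_g))
```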

New results on subgradient methods for strongly convex optimization problems with a unified analysis

We develop subgradient- and gradient-based methods for minimizing strongly convex functions, under a notion of strong convexity that generalizes the standard Euclidean one. We propose a unifying framework for subgradient methods that yields two families of methods, the Proximal Gradient Method (PGM) and the Conditional Gradient Method (CGM), and subsumes several existing methods. The unifying framework provides …
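
As a hedged illustration of the two families named in the abstract, the sketch below contrasts a textbook projected (proximal) gradient step with a conditional gradient (Frank-Wolfe) step over the probability simplex; the feasible set, oracle, and step rules are assumptions for illustration, not the paper's unified framework.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex {x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def pgm_step(x, grad, step):
    # PGM: gradient step followed by projection (the prox of the indicator).
    return project_simplex(x - step * grad)

def cgm_step(x, grad, k):
    # CGM: linear minimization oracle over the simplex, then a convex combination.
    s = np.zeros_like(x)
    s[np.argmin(grad)] = 1.0
    gamma = 2.0 / (k + 2.0)  # classical Frank-Wolfe step size
    return x + gamma * (s - x)
```

PGM needs a projection oracle while CGM only needs a linear one; that trade-off is one reason a framework covering both families in a single analysis is useful.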

An extension of the projected gradient method to a Banach space setting with application in structural topology optimization

For the minimization of a nonlinear cost functional under convex constraints, the relaxed projected gradient process is a well-known method. Its analysis is classically performed in a Hilbert space. We generalize this method to functionals that are differentiable in a Banach space. The search direction is calculated by a quadratic approximation of the cost …
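
For orientation, one common form of the relaxed projected gradient step in the classical Hilbert-space setting that this work generalizes is the following; the notation ($C$, $\alpha_k$, $\omega_k$) is assumed here and may differ from the paper's:

\[
d_k = \operatorname*{arg\,min}_{u_k + d \in C} \; f'(u_k)\,d + \frac{1}{2\alpha_k}\,\|d\|^2,
\qquad
u_{k+1} = u_k + \omega_k d_k, \quad \omega_k \in (0,1],
\]

where $C$ is the convex constraint set, $\alpha_k > 0$ a step parameter, and $\omega_k$ the relaxation parameter. The Banach-space difficulty is that the quadratic term is no longer induced by an inner product, so the subproblem is not a standard projection.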

Partial Relaxation of Equality-constrained Programs

This paper presents a reformulation that is a natural “by-product” of the “variable endogenization” process for equality-constrained programs. The method results in a partial relaxation of the constraints, which in turn confers some computational advantages. A fully annotated example illustrates the technique and presents some comparative numerical results.

Citation: Siwale, I.: Partial Relaxation of Equality-constrained Programs. Technical Report …

Optimality and complexity for constrained optimization problems with nonconvex regularization

In this paper, we consider a class of constrained optimization problems where the feasible set is a general closed convex set and the objective function has a nonsmooth, nonconvex regularizer. This class of regularizers includes the widely used SCAD, MCP, logistic, fraction, hard-thresholding and non-Lipschitz $L_p$ penalties as special cases. Using the theory of the generalized directional …
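
Two of the listed penalties have simple closed forms; the sketch below implements SCAD (Fan-Li, with $a > 2$) and MCP (Zhang, with $\gamma > 1$) in their standard textbook parameterizations. These formulas are assumed from the literature and may differ in notation from the paper.

```python
import numpy as np

def scad(t, lam, a=3.7):
    """Standard SCAD penalty: linear near 0, quadratic blend, then constant."""
    t = np.abs(t)
    return np.where(
        t <= lam,
        lam * t,
        np.where(
            t <= a * lam,
            (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),
            lam**2 * (a + 1) / 2,
        ),
    )

def mcp(t, lam, gamma=3.0):
    """Standard MCP penalty: linear near 0, flattening to a constant."""
    t = np.abs(t)
    return np.where(t <= gamma * lam, lam * t - t**2 / (2 * gamma),
                    gamma * lam**2 / 2)
```

Both are folded concave penalties: they mimic the $L_1$ norm near the origin (promoting sparsity) but level off for large arguments, which removes the bias $L_1$ induces on large coefficients and is precisely what makes them nonconvex.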

Copositivity for second-order optimality conditions in general smooth optimization problems

Second-order local optimality conditions involving copositivity of the Hessian of the Lagrangian on the reduced linearization cone have the advantage that the gap between the sufficient condition (the Hessian is strictly copositive) and the necessary condition (the Hessian is copositive) is small. In this respect, copositivity properly generalizes convexity of the Lagrangian. We …
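
In standard notation (assumed here; the paper's setup may differ in details), the pair of conditions reads as follows, with $L$ the Lagrangian, $(x^*, \lambda^*)$ a KKT pair, $C$ the reduced linearization cone, and the necessary condition holding under a suitable constraint qualification:

\[
\text{necessary:}\quad d^\top \nabla^2_{xx} L(x^*, \lambda^*)\, d \ge 0 \quad \forall\, d \in C,
\qquad
\text{sufficient:}\quad d^\top \nabla^2_{xx} L(x^*, \lambda^*)\, d > 0 \quad \forall\, d \in C \setminus \{0\},
\]

i.e., copositivity versus strict copositivity of the same matrix on the same cone, which is what keeps the gap between the two conditions small.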

A cone-continuity constraint qualification and algorithmic consequences

Every local minimizer of a smooth constrained optimization problem satisfies the sequential Approximate Karush-Kuhn-Tucker (AKKT) condition. This optimality condition is used to define the stopping criteria of many practical nonlinear programming algorithms. It is natural to ask for conditions on the constraints under which AKKT implies KKT. These conditions will be called Strict Constraint Qualifications …
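
For reference, the usual statement of AKKT (assumed here in the form standard in this literature) for $\min f(x)$ s.t. $h(x) = 0$, $g(x) \le 0$ at a feasible point $x^*$ is: there exist sequences $x^k \to x^*$, $\mu^k \in \mathbb{R}^m$, and $\lambda^k \in \mathbb{R}^p_{+}$ such that

\[
\nabla f(x^k) + \nabla h(x^k)\,\mu^k + \nabla g(x^k)\,\lambda^k \longrightarrow 0,
\qquad
\lambda_i^k = 0 \ \text{ whenever } g_i(x^*) < 0.
\]

Note that the $x^k$ need not be feasible and the multiplier sequences need not be bounded; a constraint qualification is exactly what rules out degenerate limits and recovers KKT.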

An external penalty-type method for multicriteria

We propose an extension of the classical real-valued external penalty method to the multicriteria optimization setting. Like its single-objective counterpart, it requires an external penalty function for the constraint set, as well as an exogenous divergent sequence of nonnegative real numbers, the so-called penalty parameters; but, differently from the scalar procedure, the vector-valued …
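
For contrast with the vector-valued procedure, here is a minimal sketch of the scalar external penalty scheme that the paper extends: minimize $f + \rho_k P$, where the penalty $P$ vanishes exactly on the feasible set and $\rho_k \to \infty$. The problem data below are an illustrative assumption.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + 2 * x[1]**2
P = lambda x: (x[0] + x[1] - 1.0)**2  # external penalty for {x1 + x2 = 1}

x = np.zeros(2)
for rho in [1.0, 10.0, 100.0, 1000.0]:  # divergent penalty parameters
    # Warm-start each unconstrained solve at the previous iterate.
    x = minimize(lambda z, r=rho: f(z) + r * P(z), x).x
print(x)  # approaches the constrained minimizer (2/3, 1/3) as rho grows
```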

On the Performance of SQP Methods for Nonlinear Optimization

This paper concerns some practical issues associated with the formulation of sequential quadratic programming (SQP) methods for large-scale nonlinear optimization. SQP methods find an approximate solution of a sequence of quadratic programming (QP) subproblems in which a quadratic model of the objective function is minimized subject to the linearized constraints. Extensive numerical results are given …
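
In generic notation (assumed here; implementations differ in how constraints and the Hessian approximation are handled), the QP subproblem at an iterate $x_k$ takes the form

\[
\min_{d}\; \nabla f(x_k)^\top d + \tfrac{1}{2}\, d^\top H_k\, d
\quad \text{subject to} \quad
c(x_k) + J(x_k)\, d \ge 0,
\]

where $H_k$ approximates the Hessian of the Lagrangian and $J$ is the constraint Jacobian; the step is then $x_{k+1} = x_k + \alpha_k d_k$ with $\alpha_k$ chosen by a globalization strategy such as a line search.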