Compact Representations of Structured BFGS Matrices

For general large-scale optimization problems, compact representations exist in which recursive quasi-Newton update formulas are expressed as compact matrix factorizations. For problems in which the objective function contains additional structure, so-called structured quasi-Newton methods exploit available second-derivative information and approximate unavailable second derivatives. This article develops the compact representations of two structured Broyden-Fletcher-Goldfarb-Shanno update formulas. … Read more
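
For context, the classical (unstructured) compact representation of the BFGS matrix, due to Byrd, Nocedal, and Schnabel, collects the curvature pairs \(s_i = x_{i+1} - x_i\) and \(y_i = \nabla f(x_{i+1}) - \nabla f(x_i)\) into \(S_k = [s_0, \ldots, s_{k-1}]\) and \(Y_k = [y_0, \ldots, y_{k-1}]\) and writes
\[
B_k \;=\; B_0 \;-\;
\begin{bmatrix} B_0 S_k & Y_k \end{bmatrix}
\begin{bmatrix} S_k^T B_0 S_k & L_k \\ L_k^T & -D_k \end{bmatrix}^{-1}
\begin{bmatrix} S_k^T B_0 \\ Y_k^T \end{bmatrix},
\]
where \(L_k\) is the strictly lower-triangular part of \(S_k^T Y_k\) and \(D_k\) its diagonal. The structured representations developed in the article have this general factorized form, but the exact blocks differ and are given in the paper.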

A Finitely Convergent Disjunctive Cutting Plane Algorithm for Bilinear Programming

In this paper we present and analyze a finitely-convergent disjunctive cutting plane algorithm to obtain an \(\epsilon\)-optimal solution or detect infeasibility of a general nonconvex continuous bilinear program. While the cutting planes are obtained in a manner similar to Saxena, Bonami, and Lee [Math. Prog. 130: 359–413, 2011] and Fampa and Lee [J. Global … Read more
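
To fix notation (this is a standard form, not a restatement of the paper's setting), a general nonconvex continuous bilinear program can be written as
\[
\min_{x \in X,\; y \in Y} \; c^T x + d^T y + x^T Q y,
\]
with \(X\) and \(Y\) polyhedra. The problem is linear in \(x\) for fixed \(y\) and vice versa, but the coupling term \(x^T Q y\) makes it nonconvex, which is why valid inequalities are derived from disjunctions rather than from convexity.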

Sequential Convexification of a Bilinear Set

We present a sequential convexification procedure to derive, in the limit, a set arbitrarily close to the convex hull of $\epsilon$-feasible solutions to a general nonconvex continuous bilinear set. Recognizing that bilinear terms can be represented with a finite number of nonlinear nonconvex constraints in the lifted matrix space, our procedure performs a sequential convexification with … Read more
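
As elementary background for such convexifications, the convex hull of a single bilinear term \(w = xy\) over a box \(x \in [x^L, x^U]\), \(y \in [y^L, y^U]\) is the McCormick envelope
\[
w \ge x^L y + y^L x - x^L y^L, \quad
w \ge x^U y + y^U x - x^U y^U, \quad
w \le x^U y + y^L x - x^U y^L, \quad
w \le x^L y + y^U x - x^L y^U;
\]
the procedure described above goes beyond this term-by-term relaxation by working with nonconvex constraints in a lifted matrix space.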

Strong Evaluation Complexity Bounds for Arbitrary-Order Optimization of Nonconvex Nonsmooth Composite Functions

We introduce the concept of strong high-order approximate minimizers for nonconvex optimization problems. These apply in both standard smooth and composite nonsmooth settings, and additionally allow convex or inexpensive constraints. An adaptive regularization algorithm is then proposed to find such approximate minimizers. Under suitable Lipschitz continuity assumptions, whenever the feasible set is convex, it is … Read more
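
For orientation, a typical adaptive regularization model of order \(p\) (stated here for the smooth unconstrained case; the composite and constrained variants treated in the paper are more involved) defines the step at \(x_k\) as an approximate minimizer of
\[
m_k(s) \;=\; f(x_k) \;+\; \sum_{j=1}^{p} \frac{1}{j!}\, \nabla^j f(x_k)[s]^j \;+\; \frac{\sigma_k}{p+1}\, \|s\|^{p+1},
\]
where the regularization weight \(\sigma_k\) is increased or decreased according to how well the model predicts the actual decrease in \(f\).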

A primal-dual modified log-barrier method for inequality constrained nonlinear optimization

We present a primal-dual modified log-barrier algorithm to solve inequality constrained nonlinear optimization problems. The algorithm is essentially a Newton-like method applied to a perturbation of the optimality system that arises from reformulating the original problem with a modified log-barrier function to handle the inequality constraints. The algorithm uses an outer/inner iteration scheme … Read more
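
For reference, one classical modified log-barrier in the sense of Polyak, for constraints \(c_i(x) \ge 0\) with multiplier estimates \(\lambda_i > 0\) and barrier parameter \(\mu > 0\), is
\[
F(x, \lambda; \mu) \;=\; f(x) \;-\; \mu \sum_i \lambda_i \ln\!\Bigl(1 + \frac{c_i(x)}{\mu}\Bigr),
\]
which, unlike the classical log-barrier, is defined on the enlarged region \(c_i(x) > -\mu\) and, under standard assumptions, has the solution as an unconstrained minimizer for fixed sufficiently small \(\mu\) when the multipliers take their optimal values. The abstract's reformulation presumably uses a function of this general type; the exact form of the perturbation and of the primal-dual system is given in the paper.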

Binary Optimal Control by Trust-Region Steepest Descent

We present a trust-region steepest descent method for dynamic optimal control problems with binary-valued integrable control functions. Our method interprets the control function as an indicator function of a measurable set and makes set-valued adjustments derived from the sublevel sets of a topological gradient function. By combining this type of update with a trust-region framework, … Read more
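
A hedged reading of this kind of update (the precise rule is given in the paper): identify the current control with the measurable set \(\omega_k \subseteq [0, T]\) on which it takes the value one, let \(g_k\) denote the topological gradient, and switch the control on a sublevel set of \(g_k\), subject to a trust-region bound on the measure of the change, e.g.
\[
\omega_{k+1} \;=\; \omega_k \,\triangle\, \{\, t : g_k(t) \le c_k \,\}, \qquad
\lambda\bigl(\omega_{k+1} \,\triangle\, \omega_k\bigr) \;\le\; \Delta_k,
\]
where the threshold \(c_k \le 0\) is chosen so that the measure bound holds, \(\Delta_k\) is the trust-region radius, and acceptance of the step and updating of \(\Delta_k\) follow the usual trust-region logic.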

Weakly Homogeneous Optimization Problems

This paper investigates a new class of optimization problems whose objective functions are weakly homogeneous relative to the constraint sets. Two sufficient conditions for nonemptiness and boundedness of solution sets are established. We also study linear parametric problems and upper semicontinuity of the solution map.
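
For orientation (the paper's precise definition, taken relative to the constraint sets, may refine this), a continuous function \(f\) on a closed set \(C\) is commonly called weakly homogeneous of degree \(\gamma > 0\), following Gowda and Sossa, if it decomposes as \(f = h + g\) with \(h\) positively homogeneous of degree \(\gamma\) and \(g(x) = o(\|x\|^{\gamma})\) as \(\|x\| \to \infty\) on \(C\); polynomials are typical examples, with \(h\) the leading homogeneous part.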

Nonconvex Constrained Optimization by a Filtering Branch and Bound

A major difficulty in optimization with nonconvex constraints is to find feasible solutions. As simple examples show, the \(\alpha\)BB algorithm for single-objective optimization may fail to compute feasible solutions even though it is a popular method in global optimization. In this work, we introduce a filtering approach motivated by a multiobjective reformulation of the constrained … Read more
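
For context, the \(\alpha\)BB algorithm mentioned above branches on boxes \([x^L, x^U]\) and bounds from below using convex underestimators of the form
\[
\check f(x) \;=\; f(x) \;+\; \alpha \sum_{i=1}^{n} (x_i^L - x_i)(x_i^U - x_i),
\qquad \alpha \;\ge\; \max\Bigl\{0,\; -\tfrac{1}{2} \min_{x \in [x^L, x^U]} \lambda_{\min}\bigl(\nabla^2 f(x)\bigr)\Bigr\},
\]
where the added quadratic is nonpositive on the box (so \(\check f \le f\)) and a large enough \(\alpha\) makes \(\check f\) convex. Relaxing nonconvex constraints in this way can leave the relaxation's solutions infeasible for the original problem, which is the difficulty motivating the filtering approach above.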

Exploiting problem structure in derivative free optimization

A structured version of derivative-free random pattern search optimization algorithms is introduced that exploits coordinate partially separable structure (typically associated with sparsity) often present in unconstrained and bound-constrained optimization problems. This technique improves performance by orders of magnitude and makes it possible to solve large problems that otherwise are totally intractable by … Read more
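
A minimal sketch of how coordinate partial separability can be exploited in a pattern search, assuming the objective is given as a sum of element functions \(f(x) = \sum_i f_i(x_{U_i})\) with small supports \(U_i\) (the names below are illustrative and the polling is deterministic for simplicity, whereas the paper uses random pattern search): polling coordinate \(j\) only requires re-evaluating the elements whose support contains \(j\).

import numpy as np

def structured_pattern_search(elements, supports, x0, delta=1.0,
                              delta_min=1e-6, max_iter=10_000):
    """Coordinate pattern search for f(x) = sum_i elements[i](x[supports[i]]),
    where supports is a list of index lists."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    # Elements touched by each coordinate: only these are re-evaluated
    # when that coordinate is polled.
    touched = [[i for i, U in enumerate(supports) if j in U] for j in range(n)]
    vals = np.array([f_i(x[U]) for f_i, U in zip(elements, supports)], dtype=float)
    for _ in range(max_iter):
        improved = False
        for j in range(n):
            for step in (delta, -delta):
                x_trial = x.copy()
                x_trial[j] += step
                # Re-evaluate only the affected element functions.
                new_vals = {i: elements[i](x_trial[supports[i]]) for i in touched[j]}
                if sum(vals[i] - new_vals[i] for i in touched[j]) > 0:
                    x = x_trial                     # accept: objective decreased
                    for i, v in new_vals.items():
                        vals[i] = v
                    improved = True
                    break
        if not improved:
            delta *= 0.5                            # shrink the pattern on failure
            if delta < delta_min:
                break
    return x, vals.sum()

Each poll then costs only a handful of element evaluations rather than a full objective evaluation, which is the kind of saving a structured variant can exploit on sparse problems.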

On the convexification of constrained quadratic optimization problems with indicator variables

Motivated by modern regression applications, in this paper, we study the convexification of quadratic optimization problems with indicator variables and combinatorial constraints on the indicators. Unlike most of the previous work on convexification of sparse regression problems, we simultaneously consider the nonlinear objective, indicator variables, and combinatorial constraints. We prove that for a separable quadratic … Read more
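
A classical building block for such results (stated here without the combinatorial constraints that are the paper's focus) is the perspective reformulation of a single indicator-quadratic pair: the convex hull of
\[
\bigl\{ (x, z, t) : t \ge x^2,\; x(1 - z) = 0,\; z \in \{0, 1\} \bigr\}
\]
is described by \(x^2 \le t z\), \(0 \le z \le 1\), \(t \ge 0\), a second-order-cone representable constraint.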