Convergence of stochastic average approximation for stochastic optimization problems with mixed expectation and per-scenario constraints

We present a framework for ensuring convergence of sample average approximations to stochastic optimization problems that include expectation constraints in addition to per-scenario constraints. Citation: Preprint ANL/MCS 1562-1108
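For orientation, a generic sketch of the construction involved (notation here is illustrative, not taken from the paper): given a problem with both constraint types,

$$\min_x\ \mathbb{E}[F(x,\xi)] \quad \text{s.t.} \quad \mathbb{E}[G(x,\xi)] \le 0, \qquad H(x,\xi) \le 0 \ \text{for each scenario } \xi,$$

the sample average approximation built from scenarios $\xi^1,\dots,\xi^N$ replaces the expectations by empirical means while keeping the per-scenario constraints scenario-wise:

$$\min_x\ \frac{1}{N}\sum_{i=1}^N F(x,\xi^i) \quad \text{s.t.} \quad \frac{1}{N}\sum_{i=1}^N G(x,\xi^i) \le 0, \qquad H(x,\xi^i) \le 0,\ \ i=1,\dots,N.$$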

A proximal method for composite minimization

We consider minimization of functions that are compositions of convex or prox-regular functions (possibly extended-valued) with smooth vector functions. A wide variety of important optimization problems fall into this framework. We describe an algorithmic framework based on a subproblem constructed from a linearized approximation to the objective and a regularization term. Properties of local solutions … Read more
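As a sketch of a subproblem of this type (generic notation, assuming the composite form $\min_x h(c(x))$ with $h$ convex or prox-regular and $c$ smooth; $\mu_k > 0$ is a regularization parameter):

$$d^k \in \operatorname*{arg\,min}_d\ h\big(c(x^k) + \nabla c(x^k)\, d\big) + \frac{\mu_k}{2}\,\|d\|^2, \qquad x^{k+1} = x^k + d^k.$$

The linearization makes the subproblem convex whenever $h$ is, while the quadratic term keeps the step inside the region where the linear model of $c$ is trustworthy.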

Convex Relaxations of Non-Convex Mixed Integer Quadratically Constrained Programs: Projected Formulations

A common way to produce a convex relaxation of a Mixed Integer Quadratically Constrained Program (MIQCP) is to lift the problem into a higher dimensional space by introducing variables $Y_{ij}$ to represent each of the products $x_i x_j$ of variables appearing in a quadratic form. One advantage of such extended relaxations is that they can … Read more
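A minimal sketch of the lifting (the standard construction, not specific to this paper): each quadratic form $x^\top Q x = \sum_{i,j} Q_{ij} x_i x_j$ is rewritten as $\langle Q, Y \rangle$ with the matrix variable $Y$ standing in for $x x^\top$, and the non-convex coupling $Y = x x^\top$ is relaxed, for instance to the positive semidefinite condition

$$Y \succeq x x^\top \iff \begin{pmatrix} 1 & x^\top \\ x & Y \end{pmatrix} \succeq 0,$$

where the equivalence follows from the Schur complement. Projected formulations then ask what such an extended relaxation looks like after projecting $Y$ out, back onto the space of the original variables $x$.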

Proximal-like contraction methods for monotone variational inequalities in a unified framework

Approximate proximal point algorithms (abbreviated as APPAs) are classical approaches for convex optimization problems and monotone variational inequalities. To solve the subproblems of these algorithms, the projection method takes the iteration in the form $u^{k+1} = P_{\Omega}[u^k - \alpha_k d^k]$. Interestingly, many of them can be paired such that $\exists\, \tilde{u}^k,\ \tilde{u}^k = P_{\Omega}[u^k - \beta_k F(v^k)] =$ … Read more
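As a minimal runnable sketch of the basic projection iteration above (assuming $\Omega$ is a box and a fixed step $\alpha$; the paper's paired constructions and step-size rules are not modeled here):

    import numpy as np

    def project_box(u, lo, hi):
        # Euclidean projection P_Omega onto the box Omega = [lo, hi]
        return np.clip(u, lo, hi)

    def projection_method(F, u0, lo, hi, alpha=0.1, tol=1e-8, max_iter=10000):
        # Basic iteration u^{k+1} = P_Omega[u^k - alpha * F(u^k)]
        # for a monotone variational inequality VI(F, Omega).
        u = np.asarray(u0, dtype=float)
        for _ in range(max_iter):
            u_next = project_box(u - alpha * F(u), lo, hi)
            if np.linalg.norm(u_next - u) <= tol:
                return u_next
            u = u_next
        return u

    # Example: affine monotone operator F(u) = M u + q, M positive definite
    M = np.array([[2.0, 1.0], [1.0, 2.0]])
    q = np.array([-1.0, -1.0])
    u_star = projection_method(lambda u: M @ u + q, np.zeros(2), -np.ones(2), np.ones(2))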

Proximal Methods for Nonlinear Programming: Double Regularization and Inexact Subproblems

This paper describes the first phase of a project attempting to construct an efficient general-purpose nonlinear optimizer using an augmented Lagrangian outer loop with a relative error criterion, and an inner loop employing a state-of-the-art conjugate gradient solver. The outer loop can also employ double regularized proximal kernels, a fairly recent theoretical development that … Read more
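A minimal sketch of the outer/inner structure described here, for equality constraints only (the function names and the fixed penalty c are illustrative; the relative error criterion and double regularized kernels of the paper are not modeled). SciPy's nonlinear conjugate gradient routine stands in for the inner solver:

    import numpy as np
    from scipy.optimize import minimize

    def augmented_lagrangian(f, g, x0, c=10.0, outer_iters=20, tol=1e-6):
        # Outer loop on L_c(x, lam) = f(x) + lam^T g(x) + (c/2) ||g(x)||^2
        # for the equality-constrained problem: min f(x) s.t. g(x) = 0.
        x = np.asarray(x0, dtype=float)
        lam = np.zeros(len(g(x)))
        for _ in range(outer_iters):
            L = lambda z: f(z) + lam @ g(z) + 0.5 * c * g(z) @ g(z)
            x = minimize(L, x, method="CG").x   # inexact inner minimization
            gx = g(x)
            if np.linalg.norm(gx) <= tol:
                break
            lam = lam + c * gx                  # first-order multiplier update
        return x, lam

    # Example: min x1^2 + x2^2 subject to x1 + x2 = 1
    x_opt, lam_opt = augmented_lagrangian(
        lambda z: z @ z, lambda z: np.array([z[0] + z[1] - 1.0]), np.zeros(2))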

A second derivative SQP method: theoretical issues

Sequential quadratic programming (SQP) methods form a class of highly efficient algorithms for solving nonlinearly constrained optimization problems. Although second derivative information may often be calculated, there is little practical theory that justifies exact-Hessian SQP methods. In particular, the resulting quadratic programming (QP) subproblems are often nonconvex, and thus finding their global solutions may be … Read more
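For context, the generic QP subproblem in question (standard SQP notation, assuming inequality constraints $c(x) \ge 0$): at an iterate $(x_k, \lambda_k)$ one solves

$$\min_d\ \nabla f(x_k)^\top d + \tfrac{1}{2}\, d^\top H_k\, d \quad \text{s.t.} \quad c(x_k) + J(x_k)\, d \ge 0,$$

where $J$ is the constraint Jacobian. With the exact choice $H_k = \nabla^2_{xx}\mathcal{L}(x_k,\lambda_k)$, the Hessian of the Lagrangian, $H_k$ may be indefinite, which is precisely why these QPs can be nonconvex and their global solutions hard to find.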

Necessary conditions for local optimality in d.c. programming

Using $\varepsilon$-subdifferential calculus for difference-of-convex (d.c.) programming, Dür proposed a condition sufficient for local optimality, and showed that this condition is not necessary in general. Here it is proved that whenever the convex part is strongly convex, this condition is also necessary. Strong convexity can always be ensured by changing the given d.c. decomposition slightly. … Read more
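The last remark can be made concrete with a standard identity: if $f = g - h$ with $g, h$ convex, then for any $\rho > 0$

$$f = \Big( g + \tfrac{\rho}{2}\,\|\cdot\|^2 \Big) - \Big( h + \tfrac{\rho}{2}\,\|\cdot\|^2 \Big)$$

is an equivalent d.c. decomposition whose convex part is $\rho$-strongly convex.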

A globally convergent primal-dual interior-point filter method for nonlinear programming: new filter optimality measures and computational results

In this paper we modify the original primal-dual interior-point filter method proposed in [18] for the solution of nonlinear programming problems. We introduce two new optimality filter entries based on the objective function, and thus better suited for the purposes of minimization, and propose conditions for using inexact Hessians. We show that the global convergence … Read more
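As background on the filter mechanism (a generic form, not the paper's specific entries): with a feasibility measure $\theta \ge 0$ and an optimality measure $\phi$, a trial point $x^+$ is acceptable if, for every pair $(\theta_j, \phi_j)$ stored in the filter,

$$\theta(x^+) \le (1 - \gamma)\,\theta_j \quad \text{or} \quad \phi(x^+) \le \phi_j - \gamma\,\theta_j$$

for some fixed margin $\gamma \in (0, 1)$, so a step must improve either feasibility or optimality relative to all previously recorded points.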

PSwarm: A Hybrid Solver for Linearly Constrained Global Derivative-Free Optimization

PSwarm was originally developed for the global optimization of functions without derivatives whose variables lie within upper and lower bounds. The underlying algorithm is a pattern search method, more specifically a coordinate search method, which guarantees convergence to stationary points from arbitrary starting points. In the (optional) search step of coordinate search, … Read more
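A minimal sketch of the poll step of coordinate search under bound constraints (illustrative only; PSwarm's optional search step, which invokes a particle swarm, is omitted):

    import numpy as np

    def coordinate_search(f, x0, lo, hi, step=0.5, tol=1e-6, max_iter=1000):
        # Poll step: try +/- step along each coordinate direction,
        # projecting trial points back into the bounds [lo, hi].
        x = np.clip(np.asarray(x0, dtype=float), lo, hi)
        fx = f(x)
        for _ in range(max_iter):
            improved = False
            for i in range(len(x)):
                for s in (step, -step):
                    y = x.copy()
                    y[i] = np.clip(y[i] + s, lo[i], hi[i])
                    fy = f(y)
                    if fy < fx:
                        x, fx, improved = y, fy, True
                        break
                if improved:
                    break
            if not improved:
                step *= 0.5        # unsuccessful poll: refine the mesh
                if step < tol:
                    break
        return x, fx

    # Example: quadratic objective with bound constraints
    x_best, f_best = coordinate_search(
        lambda z: (z[0] - 0.3) ** 2 + (z[1] + 2.0) ** 2,
        np.zeros(2), np.array([-1.0, -1.0]), np.array([1.0, 1.0]))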

Group sparsity via linear-time projection

We present an efficient spectral projected-gradient algorithm for optimization subject to a group one-norm constraint. Our approach is based on a novel linear-time algorithm for Euclidean projection onto the one- and group one-norm constraints. Numerical experiments on large data sets suggest that the proposed method is substantially more efficient and scalable than existing methods. … Read more
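As an illustration of this kind of projection, a short sketch for the plain one-norm ball (the sort-based $O(n \log n)$ variant; the paper's linear-time algorithm and its group extension are more involved):

    import numpy as np

    def project_l1_ball(v, tau):
        # Euclidean projection of v onto {x : ||x||_1 <= tau}, tau > 0.
        # Sort-based O(n log n) version; a selection-based variant of the
        # same thresholding runs in expected linear time.
        v = np.asarray(v, dtype=float)
        if np.abs(v).sum() <= tau:
            return v.copy()
        u = np.sort(np.abs(v))[::-1]              # sorted magnitudes, descending
        css = np.cumsum(u)
        ks = np.arange(1, len(v) + 1)
        rho = np.nonzero(u > (css - tau) / ks)[0][-1]
        theta = (css[rho] - tau) / (rho + 1)      # soft-threshold level
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    # Example: project a dense vector onto the one-norm ball of radius 1
    x_proj = project_l1_ball(np.array([0.8, -0.6, 0.4]), 1.0)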