Generalized Uniformly Optimal Methods for Nonlinear Programming

In this paper, we present a generic framework to extend existing uniformly optimal convex programming algorithms to solve more general nonlinear, possibly nonconvex, optimization problems. The basic idea is to incorporate a local search step (a gradient descent or quasi-Newton iteration) into these uniformly optimal convex programming methods, and then enforce a monotone decreasing property of …
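To make the idea concrete, here is a minimal sketch (our illustration, not the paper's exact scheme): a Nesterov-style accelerated step is paired with a plain gradient-descent local search, and the candidate with the smaller objective value is kept, which enforces monotone decrease. The test function, step size, and Lipschitz constant are assumptions.

    import numpy as np

    def monotone_accelerated_descent(f, grad, x0, L, iters=100):
        """Accelerated gradient method with a gradient-descent local search
        and a monotone safeguard: the objective value never increases.
        L is an (assumed known) Lipschitz constant of the gradient."""
        x = y = x0.copy()
        t = 1.0
        for _ in range(iters):
            x_acc = y - grad(y) / L    # accelerated (convex-optimal) step
            x_loc = x - grad(x) / L    # local search: plain gradient descent
            x_new = x_acc if f(x_acc) <= f(x_loc) else x_loc  # monotone safeguard
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)
            x, t = x_new, t_new
        return x

    # usage on a simple nonconvex test function (placeholder)
    f = lambda x: 0.25 * np.sum(x**4) - 0.5 * np.sum(x**2)
    grad = lambda x: x**3 - x
    x_star = monotone_accelerated_descent(f, grad, np.array([2.0, -1.5]), L=12.0)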

Vanishing Price of Anarchy in Large Coordinative Nonconvex Optimization

We focus on a class of nonconvex cooperative optimization problems that involve multiple participants. We study the duality framework and provide geometric and analytic characterizations of the duality gap. The dual problem is related to a market setting in which each participant pursues its own interest at a given price of common goods. The duality …
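For orientation, the market interpretation rests on the Lagrangian dual of a separable problem; in the hedged sketch below (generic notation, not the paper's), the price vector $\lambda$ for the common goods decouples the participants:

    \begin{align*}
      \text{(primal)} \quad & \min_{x_1,\dots,x_n}\ \sum_{i=1}^n f_i(x_i)
        \quad \text{s.t.}\ \sum_{i=1}^n g_i(x_i) \le b, \\
      \text{(dual)} \quad & \max_{\lambda \ge 0}\ \Big\{ -\lambda^\top b
        + \sum_{i=1}^n \min_{x_i} \big[ f_i(x_i) + \lambda^\top g_i(x_i) \big] \Big\}.
    \end{align*}

Each participant $i$ solves its own subproblem at the given price $\lambda$; when the $f_i$ or $g_i$ are nonconvex, the duality gap measures the loss from this price-based coordination.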

Optimization of multiple receivers solar power tower systems

In this article, a new procedure to optimize the design of a solar power tower system with multiple receivers is presented. The variables related to the receivers (height, aperture tilt angle, azimuth angle, and aperture size) as well as the heliostat field layout are optimized, seeking to minimize the levelized cost of thermal energy. This …
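A back-of-the-envelope version of the receiver subproblem can be posed as black-box minimization over the variables named above; the sketch below uses a placeholder LCOE model (our assumption, not the article's optical/thermal model) for a single receiver:

    import numpy as np
    from scipy.optimize import minimize

    def lcoe(v):
        """Placeholder levelized cost of thermal energy: a smooth stand-in
        for the article's model, for illustration only."""
        height, tilt, azimuth, aperture = v
        # azimuth would matter in a real optical model; unused in this stand-in
        energy = aperture * np.cos(np.radians(tilt - 35.0)) * np.log1p(height)
        cost = 1.0e3 + 50.0 * height + 200.0 * aperture
        return cost / max(energy, 1e-9)

    # bounds: height [m], aperture tilt [deg], azimuth [deg], aperture area [m^2]
    bounds = [(80, 160), (0, 90), (-180, 180), (20, 120)]
    res = minimize(lcoe, x0=np.array([120.0, 30.0, 0.0, 60.0]),
                   bounds=bounds, method="L-BFGS-B")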

Convex Hull Pricing in Electricity Markets: Formulation, Analysis, and Implementation Challenges

Recent widespread interest in Convex Hull Pricing has not been accompanied by an equally broad understanding of the method. This paper seeks to narrow the gap between enthusiasm and comprehension. The connection between Convex Hull Pricing and basic electricity market clearing processes is clearly developed, and a new formulation of the pricing problem is presented. …
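For readers new to the method, convex hull prices are commonly characterized (in generic notation, not the paper's new formulation) as maximizers of the Lagrangian dual obtained by relaxing only the system-wide balance constraint of the market-clearing problem:

    \[
      \lambda^\star \in \arg\max_{\lambda}\ \Big\{ \lambda^\top d
        + \sum_{g} \min_{x_g \in X_g} \big[ c_g(x_g) - \lambda^\top p_g(x_g) \big] \Big\},
    \]

where $X_g$ collects resource $g$'s (generally nonconvex) commitment and dispatch constraints, $p_g(x_g)$ its injections, and $d$ the demand; the associated duality gap equals the minimal total uplift.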

A Nonmonotone Approach without Differentiability Test for Gradient Sampling Methods

Recently, optimization problems involving nonsmooth and locally Lipschitz functions have been the subject of investigation, and an innovative method known as Gradient Sampling has gained attention. Although the method has shown good results on important real problems, some drawbacks remain unaddressed. This study suggests modifications to the gradient sampling class of methods in order to …
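To indicate what a single step of this class looks like, here is a simplified sketch (not the modifications proposed in this study): gradients are sampled at perturbed points near the iterate, and the search direction is the negative of the minimum-norm element of their convex hull.

    import numpy as np
    from scipy.optimize import minimize

    def gradient_sampling_direction(grad, x, eps=0.1, m=None, rng=None):
        """Search direction of one (simplified) gradient sampling step:
        sample gradients at m points perturbed within eps of x, then return
        minus the minimum-norm element of their convex hull. The line
        search and differentiability checks are omitted in this sketch."""
        rng = rng or np.random.default_rng(0)
        n = x.size
        m = m or n + 1
        pts = x + eps * rng.uniform(-1.0, 1.0, size=(m, n))
        G = np.vstack([grad(x)] + [grad(p) for p in pts])   # (m+1) x n
        k = G.shape[0]
        # min-norm point of conv{rows of G}: min ||G^T w||^2 over the simplex
        obj = lambda w: np.sum((G.T @ w) ** 2)
        cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
        w = minimize(obj, np.full(k, 1.0 / k), bounds=[(0, 1)] * k,
                     constraints=cons, method="SLSQP").x
        return -(G.T @ w)

    # usage on the nonsmooth function f(x) = |x_1| + 2|x_2|
    grad = lambda x: np.array([np.sign(x[0]), 2.0 * np.sign(x[1])])
    d = gradient_sampling_direction(grad, np.array([1.0, -0.5]))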

Copositivity for second-order optimality conditions in general smooth optimization problems

Second-order local optimality conditions involving copositivity of the Hessian of the Lagrangian on the reduced linearization cone have the advantage that there is only a small gap between sufficient (the Hessian is strictly copositive) and necessary (the Hessian is copositive) conditions. In this respect, this is a proper generalization of convexity of the Lagrangian. We …
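In the usual notation (assumed here, not quoted from the paper), with Lagrangian $L$, KKT pair $(x^\star, \lambda^\star)$, and reduced linearization cone $C(x^\star)$, the pair of conditions reads:

    \[
      d^\top \nabla^2_{xx} L(x^\star, \lambda^\star)\, d \ \ge\ 0
      \ \ \text{for all } d \in C(x^\star)
      \qquad \text{(necessary: copositivity)},
    \]
    \[
      d^\top \nabla^2_{xx} L(x^\star, \lambda^\star)\, d \ >\ 0
      \ \ \text{for all } d \in C(x^\star) \setminus \{0\}
      \qquad \text{(sufficient: strict copositivity)},
    \]

so the only gap between the two is the strictness of the inequality.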

Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization

In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that only stochastic information about the gradients of the objective function is available via a stochastic first-order oracle (SFO). First, we propose a general framework of stochastic quasi-Newton methods for solving nonconvex stochastic optimization. The proposed framework extends the classic …
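The flavor of such a framework can be conveyed by a cautious BFGS iteration driven by stochastic gradients; the sketch below (our illustration, not the proposed framework itself) skips the curvature update whenever it would destroy positive definiteness:

    import numpy as np

    def stochastic_bfgs(sfo_grad, x0, alpha=0.05, iters=200, seed=0):
        """Stochastic quasi-Newton sketch: BFGS-style inverse-Hessian
        updates built from stochastic gradients, with a cautious rule
        (skip the update unless s.y is sufficiently positive)."""
        rng = np.random.default_rng(seed)
        n = x0.size
        x, H = x0.copy(), np.eye(n)
        g = sfo_grad(x, rng)
        for _ in range(iters):
            x_new = x - alpha * H @ g
            g_new = sfo_grad(x_new, rng)
            s, y = x_new - x, g_new - g
            if s @ y > 1e-4 * (s @ s):   # cautious update keeps H positive definite
                rho = 1.0 / (s @ y)
                V = np.eye(n) - rho * np.outer(s, y)
                H = V @ H @ V.T + rho * np.outer(s, s)
            x, g = x_new, g_new
        return x

    # usage with a noisy oracle for the nonconvex f(x) = sum((x^2 - 1)^2) / 4
    sfo_grad = lambda x, rng: (x**2 - 1.0) * x + 0.01 * rng.standard_normal(x.size)
    x_star = stochastic_bfgs(sfo_grad, np.array([2.0, -0.3]))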

On the Global Optimality for Linear Constrained Rank Minimization Problem

Rank minimization with linear equality constraints has two closely related models: the low-rank approximation model, which seeks the best rank-k approximation of a matrix satisfying the linear constraints, and its corresponding factorization model. The latter is an unconstrained nonlinear least-squares problem and hence enjoys a few fast first-order methods …
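In one common notation (assumed here), with $\mathcal{A}$ the linear map defining the equality constraints, the two models are

    \[
      \min_{\operatorname{rank}(X) \le k}\ \tfrac{1}{2}\, \|\mathcal{A}(X) - b\|_2^2
      \qquad \text{and} \qquad
      \min_{U \in \mathbb{R}^{m \times k},\ V \in \mathbb{R}^{n \times k}}\
      \tfrac{1}{2}\, \|\mathcal{A}(UV^\top) - b\|_2^2,
    \]

and it is the unconstrained factorized form on the right that fast first-order methods address.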

Handling Nonpositive Curvature in a Limited Memory Steepest Descent Method

We propose a limited memory steepest descent (LMSD) method for solving unconstrained optimization problems. As a steepest descent method, each iteration requires only the evaluation of a gradient of the objective function and the calculation of a scalar step size. When employed to solve certain convex problems, our method reduces to …
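The $m = 1$ instance of such a method reduces to a Barzilai–Borwein step, which makes the nonpositive-curvature issue easy to see: when $s^\top y \le 0$ the BB step size is not meaningful and some fallback is needed. A minimal sketch (with an assumed fixed-step fallback, not the paper's strategy):

    import numpy as np

    def bb_descent(grad, x0, iters=100, sigma=0.1):
        """Steepest descent with Barzilai-Borwein step sizes (the m = 1
        case of limited memory steepest descent). When the curvature
        along the last step is nonpositive (s.y <= 0), fall back to a
        fixed step size sigma."""
        x = x0.copy()
        g = grad(x)
        alpha = sigma
        for _ in range(iters):
            x_new = x - alpha * g
            g_new = grad(x_new)
            s, y = x_new - x, g_new - g
            sy = s @ y
            alpha = (s @ s) / sy if sy > 0 else sigma   # nonpositive-curvature fallback
            x, g = x_new, g_new
        return x

    # usage on a nonconvex function with regions of negative curvature
    grad = lambda x: (x**2 - 1.0) * x          # gradient of sum((x^2 - 1)^2) / 4
    x_star = bb_descent(grad, np.array([0.2, 1.8]))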

A Trust Region Algorithm with a Worst-Case Iteration Complexity of ${\cal O}(\epsilon^{-3/2})$ for Nonconvex Optimization

We propose a trust region algorithm for solving nonconvex smooth optimization problems. For any $\bar\epsilon \in (0,\infty)$, the algorithm requires at most $\mathcal{O}(\epsilon^{-3/2})$ iterations, function evaluations, and derivative evaluations to drive the norm of the gradient of the objective function below any $\epsilon \in (0,\bar\epsilon]$. This improves upon the $\mathcal{O}(\epsilon^{-2})$ bound known to hold for …
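For contrast with the proposed algorithm, a textbook trust-region loop with a Cauchy-point step is sketched below; this basic scheme carries only the $\mathcal{O}(\epsilon^{-2})$ guarantee, and it is the paper's modified radius-update strategy that improves the bound to $\mathcal{O}(\epsilon^{-3/2})$. The test problem is an assumption.

    import numpy as np

    def trust_region(f, grad, hess, x0, delta=1.0, iters=50, eta=0.1):
        """Minimal trust-region method using the Cauchy point: minimize the
        quadratic model along -g within the radius, then accept or reject
        the step and update the radius by the actual-to-predicted ratio."""
        x = x0.copy()
        for _ in range(iters):
            g, H = grad(x), hess(x)
            gn = np.linalg.norm(g)
            if gn < 1e-8:
                break
            gHg = g @ H @ g
            tau = 1.0 if gHg <= 0 else min(1.0, gn**3 / (delta * gHg))
            s = -(tau * delta / gn) * g
            pred = -(g @ s + 0.5 * (s @ H @ s))     # predicted decrease (> 0)
            rho = (f(x) - f(x + s)) / pred          # actual / predicted
            if rho >= eta:
                x = x + s                            # accept the step
            delta = 2.0 * delta if rho > 0.75 else 0.5 * delta
        return x

    # usage on a nonconvex quartic with a saddle at the origin
    f = lambda x: x[0]**2 - x[1]**2 + 0.25 * x[1]**4
    grad = lambda x: np.array([2.0 * x[0], -2.0 * x[1] + x[1]**3])
    hess = lambda x: np.array([[2.0, 0.0], [0.0, -2.0 + 3.0 * x[1]**2]])
    x_star = trust_region(f, grad, hess, np.array([1.0, 0.5]))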