A second order dynamical approach with variable damping to nonconvex smooth minimization

We investigate a second-order dynamical system with variable damping in connection with the minimization of a nonconvex differentiable function. The dynamical system is formulated in the spirit of the differential equation that models Nesterov’s accelerated convex gradient method. We show that the generated trajectory converges to a critical point if a regularization of the …
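
A minimal numerical sketch of the idea, assuming the ODE takes the standard Nesterov-style form x''(t) + (alpha/t) x'(t) + grad f(x(t)) = 0; the test function, damping parameter alpha, and step size below are illustrative choices, not the system analyzed in the paper:

```python
# Forward-Euler integration of x''(t) + (alpha/t) x'(t) + grad f(x(t)) = 0
# for the nonconvex double-well f(x) = x^4/4 - x^2/2 (critical points -1, 0, 1).
def grad(x):
    return x**3 - x

alpha, h, t = 3.0, 1e-3, 1.0
x, v = 2.0, 0.0                      # position and velocity
for _ in range(200_000):
    a = -(alpha / t) * v - grad(x)   # acceleration prescribed by the ODE
    x, v, t = x + h * v, v + h * a, t + h

print(x)                             # the trajectory approaches a critical point
```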

A comparison of methods for traversing non-convex regions in optimization problems

This paper revisits the well-known problem of dealing with non-convex regions during the minimization of a nonlinear function F(x) by Newton-like methods. The proposal made here involves a curvilinear search along an approximation to the continuous steepest-descent path defined by the solution of the ODE dx/dt = -grad F(x). The algorithm we develop …
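
A rough discrete illustration (not the authors' algorithm): following dx/dt = -grad F(x) with Euler steps and backtracking along the path gives a crude version of such a search; the Rosenbrock test function and the step-halving rule are assumptions made for the example:

```python
import numpy as np

def F(x):
    # Rosenbrock function: a standard test with a curved, locally nonconvex valley
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def gradF(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0]**2),
    ])

# Take an Euler step of "time" length t along dx/dt = -grad F(x),
# halving t until F decreases (a crude search along the descent path).
x = np.array([-1.2, 1.0])
for _ in range(20_000):
    g = gradF(x)
    t = 1e-2
    while F(x - t * g) >= F(x) and t > 1e-12:
        t *= 0.5
    x = x - t * g

print(x, F(x))   # approaches the minimizer at (1, 1)
```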

Strong Convex Nonlinear Relaxations of the Pooling Problem

We investigate new convex relaxations for the pooling problem, a classic nonconvex production planning problem in which input materials are mixed in intermediate pools, with the outputs of these pools further mixed to make output products meeting given attribute percentage requirements. Our relaxations are derived by considering a set which arises from the formulation by …
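
For context, the classical relaxation that stronger pooling relaxations are measured against is the McCormick envelope of each bilinear term w = q*y (pool attribute times flow); the bounds and sample values below are invented for illustration:

```python
# McCormick envelope for w = q * y with qL <= q <= qU and yL <= y <= yU:
# every feasible product lies between these two piecewise-linear bounds.
def mccormick_bounds(q, y, qL, qU, yL, yU):
    lower = max(qL * y + q * yL - qL * yL,
                qU * y + q * yU - qU * yU)
    upper = min(qU * y + q * yL - qU * yL,
                qL * y + q * yU - qL * yU)
    return lower, upper

lo, hi = mccormick_bounds(q=0.5, y=2.0, qL=0.0, qU=1.0, yL=0.0, yU=4.0)
print(lo, 0.5 * 2.0, hi)   # the true product 1.0 lies inside [0.0, 2.0]
```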

Inexact Successive Quadratic Approximation for Regularized Optimization

Successive quadratic approximations, or second-order proximal methods, are useful for minimizing functions that are the sum of a smooth part and a convex, possibly nonsmooth part that promotes regularization. Most analyses of iteration complexity focus on the special case of the proximal gradient method, or accelerated variants thereof. There have been only a few studies of …
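
A minimal sketch of the special case the abstract singles out, the proximal gradient method applied to a least-squares term plus an l1 regularizer; the problem data, regularization weight, and iteration count are arbitrary:

```python
import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

# Proximal gradient for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.1
L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
x = np.zeros(100)
for _ in range(500):
    g = A.T @ (A @ x - b)                   # gradient of the smooth part
    x = soft_threshold(x - g / L, lam / L)  # proximal step on the nonsmooth part

print(np.count_nonzero(x))                  # the l1 term promotes sparsity
```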

A Branch-and-Bound based Algorithm for Nonconvex Multiobjective Optimization

A new branch-and-bound based algorithm for smooth nonconvex multiobjective optimization problems with convex constraints is presented. The algorithm computes an $(\varepsilon,\delta)$-approximation of all globally optimal solutions. The algorithm uses selection rules as well as discarding and termination tests. The discarding tests are the most important aspect, as they examine in different ways whether a box …
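
A toy single-objective analogue of the scheme (the paper treats the multiobjective case and uses several discarding tests): boxes are bisected, and a Lipschitz bound supplies the discarding test; the objective, Lipschitz constant, and tolerance are illustrative:

```python
import math

def f(x):
    return math.sin(3.0 * x) + 0.5 * x**2

LIP = 7.0                          # a valid Lipschitz constant of f on [-3, 3]
eps = 1e-4
best = f(0.0)                      # incumbent upper bound
boxes = [(-3.0, 3.0)]              # boxes still to be examined

while boxes:
    a, b = boxes.pop()
    mid = 0.5 * (a + b)
    best = min(best, f(mid))               # update incumbent at a feasible point
    lower = f(mid) - LIP * 0.5 * (b - a)   # lower bound of f on the box
    if lower > best - eps:                 # discarding test: box cannot improve
        continue
    boxes += [(a, mid), (mid, b)]          # otherwise bisect and keep searching

print(best)                        # eps-accurate global minimum value on [-3, 3]
```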

ADMM for Multiaffine Constrained Optimization

We propose an expansion of the scope of the alternating direction method of multipliers (ADMM). Specifically, we show that ADMM, when employed to solve problems with multiaffine constraints that satisfy certain easily verifiable assumptions, converges to the set of constrained stationary points if the penalty parameter in the augmented Lagrangian is sufficiently large. When the …
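
For reference, the classical two-block ADMM with a single linear coupling constraint looks as follows; the multiaffine setting of the paper generalizes the constraint structure, and the lasso data here are synthetic:

```python
import numpy as np

# Two-block ADMM for min_x 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x - z = 0.
rng = np.random.default_rng(1)
n = 60
A, b, lam, rho = rng.standard_normal((30, n)), rng.standard_normal(30), 0.1, 1.0
M = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached x-subproblem solve
x = z = u = np.zeros(n)

for _ in range(300):
    x = M @ (A.T @ b + rho * (z - u))          # x-update: regularized least squares
    w = x + u
    z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # z-update: l1 prox
    u = u + x - z                              # scaled dual update

print(np.linalg.norm(x - z))                   # primal residual -> 0 at convergence
```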

Concise Complexity Analyses for Trust-Region Methods

Concise complexity analyses are presented for simple trust-region algorithms for solving unconstrained optimization problems. In contrast to a traditional trust-region algorithm, the algorithms considered in this paper require certain control over the choice of the trust-region radius after any successful iteration. The analyses highlight the essential algorithmic components required to obtain certain complexity …
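
A bare-bones sketch of the kind of loop being analyzed, with a first-order model and a Cauchy-like step; the radius-update constants are placeholders, and the paper's specific control of the radius after successful iterations is not modeled:

```python
import numpy as np

def f(x):
    return np.sin(x[0]) * np.cos(x[1]) + 0.05 * (x @ x)

def g(x):
    return np.array([np.cos(x[0]) * np.cos(x[1]) + 0.1 * x[0],
                     -np.sin(x[0]) * np.sin(x[1]) + 0.1 * x[1]])

x, radius = np.array([2.0, 1.0]), 1.0
for _ in range(200):
    gk = g(x)
    step = -radius * gk / (np.linalg.norm(gk) + 1e-12)  # Cauchy-like step
    predicted = radius * np.linalg.norm(gk)             # model decrease
    ratio = (f(x) - f(x + step)) / (predicted + 1e-12)  # actual vs. predicted
    if ratio > 0.1:              # successful iteration: accept and enlarge
        x = x + step
        radius = min(2.0 * radius, 10.0)
    else:                        # unsuccessful: shrink the trust region
        radius *= 0.5

print(x, np.linalg.norm(g(x)))   # gradient norm decreases toward stationarity
```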

A Progressive Batching L-BFGS Method for Machine Learning

The standard L-BFGS method relies on gradient approximations that are not dominated by noise, so that search directions are descent directions, the line search is reliable, and quasi-Newton updating yields useful quadratic models of the objective function. All of this appears to call for a full batch approach, but since small batch sizes give rise …
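
A sketch of the progressive-batching idea around a standard L-BFGS two-loop recursion; the doubling schedule and fixed step length below stand in for the statistical batch-size test and line search developed in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
N, d, m = 4096, 20, 10                      # data size, dimension, L-BFGS memory
A, b = rng.standard_normal((N, d)), rng.standard_normal(N)

def grad_batch(x, idx):
    Ai = A[idx]                             # mini-batch least-squares gradient
    return Ai.T @ (Ai @ x - b[idx]) / len(idx)

def two_loop(g, S, Y):
    # Standard L-BFGS two-loop recursion: approximate inverse-Hessian times g
    q, alphas = g.copy(), []
    for s, y in zip(reversed(S), reversed(Y)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if S:
        q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])   # initial Hessian scaling
    for (s, y), a in zip(zip(S, Y), reversed(alphas)):
        q += (a - (y @ q) / (y @ s)) * s
    return q

x, batch, S, Y = np.zeros(d), 64, [], []
for k in range(200):
    idx = rng.choice(N, size=batch, replace=False)
    g = grad_batch(x, idx)
    x_new = x - 0.5 * two_loop(g, S, Y)     # fixed step in place of a line search
    s, y = x_new - x, grad_batch(x_new, idx) - g
    if y @ s > 1e-10:                       # keep only curvature-preserving pairs
        S, Y = (S + [s])[-m:], (Y + [y])[-m:]
    x = x_new
    if k % 20 == 19:
        batch = min(2 * batch, N)           # progressively grow the batch

print(np.linalg.norm(grad_batch(x, np.arange(N))))   # full-batch gradient norm
```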

Regional Complexity Analysis of Algorithms for Nonconvex Smooth Optimization

A strategy is proposed for characterizing the worst-case performance of algorithms for solving nonconvex smooth optimization problems. Contemporary analyses characterize worst-case performance by providing, under certain assumptions on an objective function, an upper bound on the number of iterations (or function or derivative evaluations) required until a pth-order stationarity condition is approximately satisfied. This arguably …

Complexity of a quadratic penalty accelerated inexact proximal point method for solving linearly constrained nonconvex composite programs

This paper analyzes the iteration-complexity of a quadratic penalty accelerated inexact proximal point method for solving linearly constrained nonconvex composite programs. More specifically, the objective function is of the form f + h, where f is a differentiable function whose gradient is Lipschitz continuous and h is a closed convex function with a bounded domain. …
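
A simplified quadratic-penalty sketch under stronger assumptions than the paper's (f is a convex quadratic here, h is an l1 term rather than a bounded-domain function, and the inner solver is plain rather than accelerated proximal gradient):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 15
Q = rng.standard_normal((n, n))
Q = Q.T @ Q / n + np.eye(n)                 # f(x) = 0.5*x'Qx + c'x is convex here
c = rng.standard_normal(n)
C, d, lam = rng.standard_normal((3, n)), rng.standard_normal(3), 0.1

def prox_l1(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Quadratic penalty: solve min f(x) + lam*||x||_1 + 0.5*rho*||Cx - d||^2
# inexactly by proximal gradient, then increase rho and warm-start.
x, rho = np.zeros(n), 1.0
for outer in range(8):
    L = np.linalg.norm(Q, 2) + rho * np.linalg.norm(C, 2) ** 2  # gradient Lipschitz bound
    for _ in range(200):                    # inexact inner solve
        g = Q @ x + c + rho * C.T @ (C @ x - d)
        x = prox_l1(x - g / L, lam / L)
    rho *= 10.0                             # tighten the penalty

print(np.linalg.norm(C @ x - d))            # constraint violation shrinks as rho grows
```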