Deriving the convex hull of a polynomial partitioning set through lifting and projection

Relaxations of the bilinear term, $x_1x_2=x_3$, play a central role in constructing relaxations of factorable functions. This is because they can be used directly to relax products of functions with known relaxations. In this paper, we provide a compact, closed-form description of the convex hull of this and other more general bivariate monomial terms (which … Read more
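As background rather than the paper's new result: over a box $[l_1,u_1]\times[l_2,u_2]$, the convex hull of the single bilinear term $x_3=x_1x_2$ is already described in closed form by the classical McCormick inequalities,

$$
\begin{aligned}
x_3 &\ge l_2 x_1 + l_1 x_2 - l_1 l_2, &\qquad x_3 &\ge u_2 x_1 + u_1 x_2 - u_1 u_2,\\
x_3 &\le u_2 x_1 + l_1 x_2 - l_1 u_2, &\qquad x_3 &\le l_2 x_1 + u_1 x_2 - u_1 l_2.
\end{aligned}
$$

The more general bivariate monomial terms mentioned in the abstract are the cases for which such a compact, closed-form hull description is the paper's contribution.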

Exploiting derivative-free local searches in DIRECT-type algorithms for global optimization

In this paper, we consider bound-constrained global optimization problems where first-order derivatives of the objective function can be neither computed nor approximated explicitly. For the solution of such problems, the DIRECT algorithm has been proposed; it has strong convergence properties and a good ability to locate promising regions of the feasible domain. However, the … Read more
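As a rough illustration of the local-search ingredient only, here is a generic derivative-free compass (coordinate) search sketched in NumPy; this is an assumption-laden stand-in, not the specific local searches or the DIRECT partitioning machinery studied in the paper.

```python
import numpy as np

def compass_search(f, x0, lb, ub, step=0.25, tol=1e-6, max_eval=500):
    """Generic derivative-free compass search: poll +/- each coordinate, halve the step on failure."""
    x = np.asarray(x0, dtype=float)
    fx, n_eval = f(x), 1
    while step > tol and n_eval < max_eval:
        improved = False
        for i in range(x.size):
            for s in (+1.0, -1.0):
                y = x.copy()
                y[i] = np.clip(y[i] + s * step, lb[i], ub[i])  # respect the bound constraints
                fy = f(y); n_eval += 1
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5  # no poll point improved: refine the step size
    return x, fx

# Toy usage: refine a crude incumbent, as a DIRECT-type global phase might supply one.
f = lambda x: np.sum(x ** 2) + 0.3 * np.sin(5 * x[0])
lb, ub = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
print(compass_search(f, np.array([1.3, -0.7]), lb, ub))
```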

A Family of Subgradient-Based Methods for Convex Optimization Problems in a Unifying Framework

We propose a new family of subgradient- and gradient-based methods that converge with optimal complexity for convex optimization problems whose feasible region is simple enough. This includes cases where the objective function is non-smooth, smooth, has composite/saddle structure, or is given by an inexact oracle model. We unify the way of constructing the subproblems, which … Read more
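For orientation only, here is a minimal NumPy sketch of the classical projected subgradient method with diminishing step sizes, i.e. the baseline such families generalize; the paper's unified subproblem construction is not reproduced here, and the toy problem below is an assumption.

```python
import numpy as np

def projected_subgradient(f, subgrad, project, x0, steps):
    """Classical projected subgradient method; returns the best iterate found."""
    x = np.asarray(x0, dtype=float)
    best, fbest = x.copy(), f(x)
    for t in steps:
        x = project(x - t * subgrad(x))  # subgradient step, then projection onto the feasible set
        fx = f(x)
        if fx < fbest:                   # the method is not monotone, so track the best point
            best, fbest = x.copy(), fx
    return best, fbest

# Toy usage: minimize the non-smooth f(x) = ||x - c||_1 over the Euclidean unit ball.
c = np.array([2.0, -1.0, 0.5])
f = lambda x: np.abs(x - c).sum()
subgrad = lambda x: np.sign(x - c)                   # one valid subgradient of f
project = lambda x: x / max(1.0, np.linalg.norm(x))  # projection onto the unit ball
steps = [1.0 / np.sqrt(k) for k in range(1, 301)]    # diminishing O(1/sqrt(k)) step sizes
print(projected_subgradient(f, subgrad, project, np.zeros(3), steps))
```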

SQP Methods for Parametric Nonlinear Optimization

Sequential quadratic programming (SQP) methods are known to be efficient for solving a series of related nonlinear optimization problems because of desirable hot and warm start properties: a solution for one problem is a good estimate of the solution of the next. However, standard SQP solvers contain elements to enforce global convergence that can interfere … Read more
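A hedged illustration of the warm-start idea using SciPy's general-purpose SLSQP routine; the parametric problem, the parameter sweep, and the solver choice below are illustrative assumptions, not the framework developed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Parametric problem: minimize (x1 - p)^2 + (x2 - 1)^2  subject to  x1^2 + x2^2 <= 1.
def solve_for(p, x0):
    obj = lambda x: (x[0] - p) ** 2 + (x[1] - 1.0) ** 2
    con = {"type": "ineq", "fun": lambda x: 1.0 - x[0] ** 2 - x[1] ** 2}  # constraint in ">= 0" form
    return minimize(obj, x0, method="SLSQP", constraints=[con])

# Warm start: reuse the previous solution as the initial guess for the next parameter value.
x0 = np.zeros(2)
for p in np.linspace(0.0, 2.0, 11):
    res = solve_for(p, x0)
    x0 = res.x                       # hot/warm start for the next problem in the sequence
    print(f"p = {p:.1f}  x* = {np.round(res.x, 4)}  iters = {res.nit}")
```

Note that the only warm-start mechanism here is reusing the previous primal point as the initial guess; the interference from globalization elements mentioned in the abstract is exactly what such a naive restart does not account for.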

Nonmonotone GRASP

A Greedy Randomized Adaptive Search Procedure (GRASP) is an iterative multistart metaheuristic for difficult combinatorial optimization problems. Each GRASP iteration consists of two phases: a construction phase, in which a feasible solution is produced, and a local search phase, in which a local optimum in the neighborhood of the constructed solution is sought. Repeated applications … Read more
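As a point of reference, below is a plain (monotone) GRASP template for a toy subset-selection problem; the nonmonotone acceptance rule that gives the paper its title is not shown, and the problem, the RCL parameter `alpha`, and the swap neighborhood are illustrative assumptions.

```python
import random

def grasp(cost, n_iters=100, alpha=0.3, k=3, seed=0):
    """Plain GRASP: greedy-randomized construction + first-improvement swap local search."""
    rng = random.Random(seed)
    items = list(range(len(cost)))
    best, best_val = None, float("inf")
    for _ in range(n_iters):
        # Construction phase: restricted candidate list (RCL) controlled by alpha.
        sol = []
        while len(sol) < k:
            cand = [(cost[i], i) for i in items if i not in sol]
            cmin, cmax = min(c for c, _ in cand), max(c for c, _ in cand)
            rcl = [i for c, i in cand if c <= cmin + alpha * (cmax - cmin)]
            sol.append(rng.choice(rcl))
        # Local search phase: swap a chosen item for an unchosen one while it improves the cost.
        improved = True
        while improved:
            improved = False
            for i in list(sol):
                for j in items:
                    if j in sol:
                        continue
                    trial = [j if s == i else s for s in sol]
                    if sum(cost[s] for s in trial) < sum(cost[s] for s in sol):
                        sol, improved = trial, True
                        break
                if improved:
                    break
        val = sum(cost[s] for s in sol)
        if val < best_val:
            best, best_val = sol, val
    return best, best_val

# Toy usage: pick the k cheapest items (trivial on purpose, to keep the template visible).
print(grasp([7, 2, 9, 4, 1, 8, 3, 6]))
```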

Generating subtour constraints for the TSP from pure integer solutions

The traveling salesman problem (TSP) is one of the most prominent combinatorial optimization problems. Given a complete graph G = (V, E) and nonnegative real edge distances d, the TSP asks for a shortest tour through all vertices with respect to the distances d. The method of choice for solving the TSP to optimality is … Read more
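A minimal sketch of the separation step for an integral solution: collect the connected components of the support graph of x with a union-find structure, and every component that is a proper subset S of V yields a violated subtour elimination constraint $\sum_{e \in E(S)} x_e \le |S|-1$. The instance and edge list below are illustrative assumptions.

```python
def find_subtours(n, edges_used):
    """Connected components of the support graph of an integral solution (union-find)."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in edges_used:
        parent[find(u)] = find(v)
    comps = {}
    for v in range(n):
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

# Toy usage: an integral "solution" on 6 vertices made of two disjoint 3-cycles.
edges_used = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
for S in find_subtours(6, edges_used):
    if len(S) < 6:  # proper subset of V: violated subtour constraint
        print(f"add cut over S = {S}: sum of x_e over edges inside S <= {len(S) - 1}")
```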

Forward-backward truncated Newton methods for convex composite optimization

This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm is based on a standard line search strategy, whereas the … Read more
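For context, here is the basic forward-backward (proximal gradient) iteration on an l1-regularized least-squares toy problem; this is the operator underlying the forward-backward envelope, not the proximal Newton-CG methods proposed in the paper, and the problem data are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, n_iters=500):
    """Forward-backward splitting for min 0.5*||Ax-b||^2 + lam*||x||_1, step 1/L with L = ||A||_2^2."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                   # forward (gradient) step on the smooth part
        x = soft_threshold(x - grad / L, lam / L)  # backward (proximal) step on the nonsmooth part
    return x

# Toy usage: recover the support of a sparse vector from a few noisy measurements.
rng = np.random.default_rng(0)
A, x_true = rng.standard_normal((40, 100)), np.zeros(100)
x_true[[3, 17, 58]] = [1.5, -2.0, 0.7]
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.nonzero(np.abs(forward_backward(A, b, lam=0.1)) > 1e-3)[0])
```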

A derivative-free trust-funnel method for equality-constrained nonlinear optimization

In this work, we look into new derivative-free methods to solve equality-constrained optimization problems. Of particular interest are trust-region techniques, which have been investigated for the unconstrained and bound-constrained cases. For solving equality-constrained optimization problems, we introduce a derivative-free adaptation of the trust-funnel method combined with a self-correcting geometry scheme and present some encouraging … Read more

Provable Low-Rank Tensor Recovery

In this paper, we rigorously study tractable models for provably recovering low-rank tensors. Unlike their matrix-based predecessors, current convex approaches for recovering low-rank tensors based on incomplete (tensor completion) and/or grossly corrupted (tensor robust principal component analysis) observations still suffer from the lack of theoretical guarantees, although they have been used in various recent applications and … Read more
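As a concrete building block, here is the sum-of-nuclear-norms surrogate of tensor rank that many existing convex tensor-recovery formulations use; this is a hedged sketch of a common ingredient, and whether it is the model analyzed in the paper is not implied.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: arrange the mode-`mode` fibers of T as rows of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def sum_of_nuclear_norms(T):
    """Sum of nuclear norms of all unfoldings: a common convex surrogate for tensor rank."""
    return sum(np.linalg.norm(unfold(T, m), ord="nuc") for m in range(T.ndim))

# Toy check: a rank-1 tensor scores far lower than a dense random tensor of the same size.
rng = np.random.default_rng(0)
a, b, c = rng.standard_normal(5), rng.standard_normal(6), rng.standard_normal(7)
low_rank = np.einsum("i,j,k->ijk", a, b, c)
noise = rng.standard_normal((5, 6, 7))
print(sum_of_nuclear_norms(low_rank), sum_of_nuclear_norms(noise))
```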

A trust-region derivative-free algorithm for constrained optimization

We propose a trust-region algorithm for constrained optimization problems in which the derivatives of the objective function are not available. In each iteration, the objective function is approximated by a model obtained by quadratic interpolation, which is then minimized within the intersection of the feasible set with the trust region. Since the constraints are handled … Read more
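A heavily simplified sketch of one model-building step is given below; the two-variable problem, the fixed six-point sample set, the infinity-norm trust region, and the use of SciPy's bound-constrained minimizer as a stand-in for the model subproblem solver are all illustrative assumptions, not the algorithm of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def quad_model(Y, fvals):
    """Fit a full quadratic model m(x) = c + g.x + 0.5 x'Hx to interpolation points Y (2-D case)."""
    # Basis: 1, x1, x2, x1^2, x1*x2, x2^2 -> needs at least 6 well-poised points.
    Phi = np.array([[1, y[0], y[1], y[0] ** 2, y[0] * y[1], y[1] ** 2] for y in Y])
    coef, *_ = np.linalg.lstsq(Phi, fvals, rcond=None)
    c, g = coef[0], coef[1:3]
    H = np.array([[2 * coef[3], coef[4]], [coef[4], 2 * coef[5]]])
    return lambda x: c + g @ x + 0.5 * x @ H @ x

# One (heavily simplified) iteration: interpolate around the current point, then
# minimize the model inside the trust region intersected with simple bound constraints.
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 0.5) ** 2 + 0.1 * np.sin(3 * x[0])
xk, Delta = np.array([0.0, 0.0]), 0.5
Y = xk + Delta * np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1], [1, 1]], dtype=float)
m = quad_model(Y, np.array([f(y) for y in Y]))
bounds = [(max(-2.0, xk[i] - Delta), min(2.0, xk[i] + Delta)) for i in range(2)]  # box cut with inf-norm TR
res = minimize(m, xk, bounds=bounds)   # trial point from the model subproblem
print("trial point:", np.round(res.x, 4), " f(trial) =", round(float(f(res.x)), 4))
```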