Relatively-Smooth Convex Optimization by First-Order Methods, and Applications

The usual approach to developing and analyzing first-order methods for smooth convex optimization assumes that the gradient of the objective function is uniformly smooth with some Lipschitz constant $L$. However, in many settings the differentiable convex function $f(\cdot)$ is not uniformly smooth: for example, in D-optimal design where $f(x) := -\ln\det(HXH^T)$, or even the univariate … Read more
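
For reference, the relative-smoothness condition alluded to here is usually stated as follows (a sketch of the standard definition; $h$ denotes a convex reference function and $D_h$ its Bregman distance, notation that does not appear in the truncated abstract): $f$ is $L$-smooth relative to $h$ if
\[
f(y) \;\le\; f(x) + \langle \nabla f(x),\, y - x \rangle + L\, D_h(y, x),
\qquad
D_h(y, x) \;=\; h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle ,
\]
for all $x, y$ in the domain; taking $h(x) = \tfrac12\|x\|_2^2$ recovers the usual Lipschitz-gradient condition.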

Accelerated gradient sliding for structured convex optimization

Our main goal in this paper is to show that one can skip gradient computations for gradient descent type methods applied to certain structured convex programming (CP) problems. To this end, we first present an accelerated gradient sliding (AGS) method for minimizing the sum of two smooth convex functions with different Lipschitz constants. We show … Read more
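
A minimal, non-accelerated illustration of the sliding idea, not the authors' AGS scheme: for $\min_x f(x) + h(x)$ with an expensive $\nabla f$ and a cheap $\nabla h$, freeze $\nabla f$ over a block of inner steps. The toy problem, constants, and inner-loop length below are our own choices.

```python
import numpy as np

# Toy composite problem: minimize f(x) + h(x) with
#   f(x) = 0.5 * ||A @ x - b||^2   ("expensive" gradient, Lipschitz constant L_f)
#   h(x) = 0.5 * mu * ||x||^2      ("cheap" gradient,     Lipschitz constant L_h = mu)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
mu = 10.0
L_f = np.linalg.norm(A, 2) ** 2
L_h = mu

grad_f = lambda x: A.T @ (A @ x - b)   # evaluated once per outer iteration
grad_h = lambda x: mu * x              # evaluated at every inner step

x = np.zeros(20)
for k in range(300):
    g = grad_f(x)                      # the only grad_f call in this outer pass
    # Inner loop: approximately solve
    #   min_u  <g, u> + h(u) + (L_f / 2) * ||u - x||^2
    # with plain gradient steps that never touch grad_f again.
    u = x.copy()
    for _ in range(10):
        u -= (g + grad_h(u) + L_f * (u - x)) / (L_f + L_h)
    x = u
```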

Second-order optimality and beyond: characterization and evaluation complexity in convexly-constrained nonlinear optimization

High-order optimality conditions for convexly-constrained nonlinear optimization problems are analyzed. A corresponding (expensive) measure of criticality for arbitrary order is proposed and extended to define high-order $\epsilon$-approximate critical points. This new measure is then used within a conceptual trust-region algorithm to show that, if derivatives of the objective function up to order $q \geq 1$ … Read more
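
As a point of reference, the first-order special case of such a criticality measure, for $\min_{x \in \mathcal{F}} f(x)$ with $\mathcal{F}$ closed and convex, is commonly taken to be
\[
\chi_f(x) \;=\; \Bigl|\, \min_{x + d \in \mathcal{F},\ \|d\| \le 1} \langle \nabla f(x),\, d \rangle \,\Bigr| ,
\]
which vanishes exactly at first-order critical points; the measure discussed in the abstract extends this idea to arbitrary order $q$ and to $\epsilon$-approximate critical points.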

On the Existence of Ideal Solutions in Multi-objective 0-1 Integer Programs

We study conditions under which the objective functions of a multi-objective 0-1 integer linear program guarantee the existence of an ideal point, meaning the existence of a feasible solution that simultaneously minimizes all objectives. In addition, we study the complexity of recognizing whether a set of objective functions satisfies these conditions: we show that it … Read more
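
In symbols (the notation here is ours): for objective vectors $c_1, \dots, c_p$ and feasible set $X \subseteq \{0,1\}^n$, the ideal point is
\[
z^I_i \;=\; \min_{x \in X} c_i^\top x, \qquad i = 1, \dots, p,
\]
and the existence question is whether some single feasible $x^* \in X$ attains $c_i^\top x^* = z^I_i$ for every $i$ simultaneously.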

Complexity bounds for primal-dual methods minimizing the model of objective function

We provide the Frank-Wolfe ($\equiv$ Conditional Gradients) method with a convergence analysis that allows one to approach a primal-dual solution of a convex optimization problem with a composite objective function. Additional properties of the complementary part of the objective (strong convexity) significantly accelerate the scheme. We also justify a new variant of this method, which can be seen as a trust-region … Read more
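
For background, a bare-bones Frank-Wolfe loop over the probability simplex (the textbook method on a toy quadratic, not the primal-dual variant analyzed in the paper; all names below are illustrative):

```python
import numpy as np

# Minimize f(x) = 0.5 * ||Q x - c||^2 over the probability simplex
# using the classical Frank-Wolfe / conditional gradient step.
rng = np.random.default_rng(1)
Q = rng.standard_normal((30, 10))
c = rng.standard_normal(30)
grad = lambda x: Q.T @ (Q @ x - c)

n = 10
x = np.full(n, 1.0 / n)              # start at the simplex barycenter
for k in range(200):
    g = grad(x)
    s = np.zeros(n)
    s[np.argmin(g)] = 1.0            # linear minimization oracle over the simplex
    gamma = 2.0 / (k + 2)            # standard open-loop step size
    x = (1 - gamma) * x + gamma * s  # stays feasible by convexity
    # The Frank-Wolfe gap <g, x - s> is a computable optimality certificate,
    # which is the kind of primal-dual quantity the abstract refers to.
```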

Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization

In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that noisy information about the gradients of the objective function is available via a stochastic first-order oracle ($\SFO$). We propose a general framework for such methods, for which we prove almost sure convergence to stationary points and analyze its worst-case … Read more
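
A generic stochastic limited-memory quasi-Newton sketch in the spirit of such methods (our own toy problem and parameter choices, not the authors' framework; practical methods construct curvature pairs more carefully than the naive mini-batch differences used here):

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(2)
A = rng.standard_normal((500, 20))
b = rng.standard_normal(500)

def stoch_grad(x, batch=32):
    """Noisy gradient of the nonconvex loss sum_i log(1 + (a_i.x - b_i)^2)."""
    idx = rng.choice(len(b), size=batch, replace=False)
    r = A[idx] @ x - b[idx]
    return A[idx].T @ (2 * r / (1 + r ** 2)) / batch

def two_loop(g, pairs):
    """L-BFGS two-loop recursion: multiply g by the inverse-Hessian approximation."""
    q = g.copy()
    coeffs = []
    for s, y in reversed(pairs):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        coeffs.append((rho, a))
    if pairs:
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)                     # standard initial H0 scaling
    for (s, y), (rho, a) in zip(pairs, reversed(coeffs)):
        q += (a - rho * (y @ q)) * s
    return q

x = np.zeros(20)
pairs = deque(maxlen=5)                            # limited-memory curvature pairs
g_old, x_old = stoch_grad(x), x.copy()
for k in range(1000):
    d = two_loop(g_old, list(pairs))               # quasi-Newton direction
    x = x - 0.05 * d
    g = stoch_grad(x)
    # Note: s and y come from different mini-batches here; this is the naive choice.
    s, y = x - x_old, g - g_old
    if s @ y > 1e-8:                               # keep pairs only with positive curvature
        pairs.append((s, y))
    x_old, g_old = x.copy(), g
```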

A Subgradient Method for Free Material Design

A small improvement in the structure of a material could save the manufacturer a lot of money. Free material design can be formulated as an optimization problem. However, due to its large scale, second-order methods cannot solve free material design problems of realistic size. We formulate the free material optimization (FMO) problem as … Read more
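
For context, the projected subgradient iteration that such methods build on is short; this is the textbook scheme on a toy nonsmooth problem, not the paper's FMO-specific algorithm:

```python
import numpy as np

# Textbook projected subgradient method for a nonsmooth convex toy problem:
# minimize ||A x - b||_1 over the Euclidean ball of radius R.
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 15))
b = rng.standard_normal(40)
R = 5.0

def subgradient(x):
    return A.T @ np.sign(A @ x - b)        # a subgradient of the l1 loss

def project(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= R else x * (R / nrm)

x = np.zeros(15)
best = np.inf
for k in range(1, 2001):
    x = project(x - subgradient(x) / np.sqrt(k))   # diminishing step size
    best = min(best, np.abs(A @ x - b).sum())      # track the best value seen
```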

Complexity of Routing Problems with Release Dates and Deadlines

The desire of companies to offer same-day delivery leads to interesting new routing problems. We study the complexity of a setting in which a delivery to a customer is guaranteed to take place within a pre-specified time after the customer places the order. Thus, an order has a release date (when the order is placed) … Read more

Algorithms for stochastic optimization with expectation constraints

This paper considers the problem of minimizing an expectation function over a closed convex set, coupled with an expectation constraint on either decision variables or problem parameters. We first present a new stochastic approximation (SA) type algorithm, namely the cooperative SA (CSA), to handle problems with the expectation constraint on decision variables. We show that … Read more
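
A simplified switching-subgradient sketch in the spirit of handling an expectation constraint (our own toy functions, tolerance, and step sizes; the actual CSA uses carefully chosen step-size and tolerance sequences):

```python
import numpy as np

# Simplified switching sketch for
#   minimize E[f(x, xi)]  subject to  E[g(x, xi)] <= 0,  x in a box,
# with cheap sampled (sub)gradients standing in for the expectations.
rng = np.random.default_rng(4)
c = rng.standard_normal(10)

def sample_grad_f(x):                 # noisy gradient of f(x) = c.x
    return c + 0.1 * rng.standard_normal(10)

def sample_g(x):                      # noisy value of g(x) = ||x||^2 - 1
    return x @ x - 1.0 + 0.1 * rng.standard_normal()

def sample_grad_g(x):                 # noisy gradient of g
    return 2 * x + 0.1 * rng.standard_normal(10)

project = lambda x: np.clip(x, -2.0, 2.0)

x = np.zeros(10)
kept, eta = [], 0.05                  # tolerance for the constraint estimate
for k in range(1, 3001):
    step = 1.0 / np.sqrt(k)
    if sample_g(x) > eta:             # constraint looks violated: reduce it
        x = project(x - step * sample_grad_g(x))
    else:                             # constraint looks satisfied: work on f
        x = project(x - step * sample_grad_f(x))
        kept.append(x)
x_out = np.mean(kept, axis=0)         # output: average of the "objective" iterates
```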

Worst-Case Hardness of Approximation for Sparse Optimization with L0 Norm

In this paper, we consider sparse optimization problems with an L0-norm penalty or constraint. We prove that it is strongly NP-hard to find an approximate optimal solution within a certain error bound. This provides a lower bound on the approximation error of any deterministic polynomial-time algorithm, unless P = NP. Applying the complexity result to sparse … Read more
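
As one concrete instance of the class referred to here (illustrative notation, not necessarily the paper's exact formulation):
\[
\min_{x \in \mathbb{R}^n}\ \tfrac12\|Ax - b\|_2^2 + \lambda \|x\|_0
\qquad\text{or}\qquad
\min_{x \in \mathbb{R}^n}\ \tfrac12\|Ax - b\|_2^2 \ \ \text{s.t.}\ \ \|x\|_0 \le k ,
\]
where $\|x\|_0$ counts the nonzero entries of $x$; the hardness result concerns how accurately such problems can be approximated by any deterministic polynomial-time algorithm.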