First-order Methods of Smooth Convex Optimization with Inexact Oracle

We introduce the notion of inexact first-order oracle and analyze the behaviour of several first-order methods of smooth convex optimization used with such an oracle. This notion of inexact oracle naturally appears in the context of smoothing techniques, Moreau-Yosida regularization, Augmented Lagrangians and many other situations. We derive complexity estimates for primal, dual and fast …
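
As a rough illustration of the notion (the precise definition used in the paper may differ in its details), an inexact first-order oracle can be modeled as returning, at any query point $x$, a pair $(f_{\delta,L}(x), g_{\delta,L}(x))$ such that for all $y$

$$ f_{\delta,L}(x) + \langle g_{\delta,L}(x), y-x\rangle \;\le\; f(y) \;\le\; f_{\delta,L}(x) + \langle g_{\delta,L}(x), y-x\rangle + \frac{L}{2}\|y-x\|^2 + \delta, $$

so that the exact oracle of an $L$-smooth convex function is recovered when $\delta = 0$.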

NP-hardness of Deciding Convexity of Quartic Polynomials and Related Problems

We show that unless P=NP, there exists no polynomial time (or even pseudo-polynomial time) algorithm that can decide whether a multivariate polynomial of degree four (or higher even degree) is globally convex. This solves a problem that has been open since 1992 when N. Z. Shor asked for the complexity of deciding convexity for quartic …
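
For context, a twice differentiable polynomial $f$ is convex if and only if its Hessian is positive semidefinite at every point,

$$ \nabla^2 f(x) \succeq 0 \quad \text{for all } x \in \mathbb{R}^n, $$

and the hardness result concerns deciding this condition when $f$ has degree four, whereas for quadratics it reduces to a single matrix semidefiniteness test.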

Minimum cost subset selection with two competing agents

We address an optimization problem in which two agents, each with a set of weighted items, compete in order to minimize the total weight of their solution sets. The latter are built according to a sequential game consisting of a fixed number of rounds. In every round each agent submits one item that may be …

Bundle-type methods uniformly optimal for smooth and nonsmooth convex optimization

The bundle-level method and certain of its variants are known to exhibit an optimal rate of convergence, i.e., ${\cal O}(1/\sqrt{t})$, and also excellent practical performance for solving general non-smooth convex programming (CP) problems. However, this rate of convergence is significantly worse than the optimal one for solving smooth CP problems, i.e., ${\cal O}(1/t^2)$. In this paper, …
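
To compare the two rates on a common scale (a standard conversion, not a claim specific to this paper): reaching an accuracy of $\epsilon$ requires ${\cal O}(1/\epsilon^2)$ iterations at the rate ${\cal O}(1/\sqrt{t})$, but only ${\cal O}(1/\sqrt{\epsilon})$ iterations at the rate ${\cal O}(1/t^2)$.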

Convergence rate of inexact proximal point methods with relative error criteria for convex optimization

In this paper, we consider a class of inexact proximal point methods for convex optimization which allows a relative error tolerance in the approximate solution of each proximal subproblem. By exploiting the special structure of convex optimization problems, we are able to derive surprising complexity bounds for the aforementioned class. As a consequence, we show …
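
As a sketch of the kind of scheme meant here (the exact error criterion in the paper may differ), an inexact proximal point step with stepsize $\lambda_k > 0$ computes an approximate minimizer $x_{k+1}$ of

$$ \min_x \; f(x) + \frac{1}{2\lambda_k}\|x - x_k\|^2, $$

where the residual tolerated in this subproblem is bounded by a fixed fraction $\sigma \in [0,1)$ of $\|x_{k+1} - x_k\|$, rather than by an absolute, summable error sequence.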

Iteration-complexity of block-decomposition algorithms and the alternating minimization augmented Lagrangian method

In this paper, we consider the monotone inclusion problem consisting of the sum of a continuous monotone map and a point-to-set maximal monotone operator with a separable two-block structure, and introduce a framework of block-decomposition prox-type algorithms for solving it that allows each of the single-block proximal subproblems to be solved in an …
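
For orientation only (the paper's framework is stated at the level of monotone inclusions and is more general), an alternating minimization augmented Lagrangian step for $\min\{ f(x) + g(y) : Ax + By = b \}$ with penalty parameter $\rho > 0$ can be sketched as

$$ x^{k+1} \in \arg\min_x {\cal L}_\rho(x, y^k, \lambda^k), \qquad y^{k+1} \in \arg\min_y {\cal L}_\rho(x^{k+1}, y, \lambda^k), \qquad \lambda^{k+1} = \lambda^k + \rho\,(Ax^{k+1} + By^{k+1} - b), $$

where ${\cal L}_\rho(x,y,\lambda) = f(x) + g(y) + \langle \lambda, Ax + By - b\rangle + \frac{\rho}{2}\|Ax + By - b\|^2$ is the augmented Lagrangian.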

Double smoothing technique for infinite-dimensional optimization problems with applications to Optimal Control

In this paper, we propose an efficient technique for solving some infinite-dimensional problems over sets of functions of time. In our problem, besides the convex pointwise constraints on state variables, we have convex coupling constraints with finite-dimensional image. Hence, we can formulate a finite-dimensional dual problem, which can be solved by efficient gradient methods. …
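
Schematically (in our notation, not necessarily the paper's), if the coupling constraint with finite-dimensional image is written as $Au = b$, the dual function

$$ \phi(\lambda) = \min_{u \in U} \big\{ f(u) + \langle \lambda, Au - b \rangle \big\} $$

lives in a finite-dimensional space even though $u$ does not; the "double smoothing" idea is to regularize both the primal objective, so that $\phi$ becomes smooth, and $\phi$ itself, so that it becomes strongly concave, making fast gradient methods applicable to the dual.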

Complexity of variants of Tseng’s modified F-B splitting and Korpelevich’s methods for generalized variational inequalities with applications to saddle point and convex optimization problems

In this paper, we consider both a variant of Tseng’s modified forward-backward splitting method and an extension of Korpelevich’s method for solving generalized variational inequalities with Lipschitz continuous operators. By showing that these methods are special cases of the hybrid proximal extragradient (HPE) method introduced by Solodov and Svaiter, we derive iteration-complexity bounds for them …
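
For reference, Korpelevich’s extragradient iteration for a variational inequality with a monotone operator $F$ over a closed convex set $X$ (the classical setting that the paper extends) reads

$$ y_k = P_X\big(x_k - \alpha F(x_k)\big), \qquad x_{k+1} = P_X\big(x_k - \alpha F(y_k)\big), $$

where $P_X$ is the projection onto $X$ and the stepsize $\alpha$ is tied to the Lipschitz constant of $F$.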

Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: a Generic Algorithmic Framework

In this paper we present a generic algorithmic framework, namely, the accelerated stochastic approximation (AC-SA) algorithm, for solving strongly convex stochastic composite optimization (SCO) problems. While the classical stochastic approximation (SA) algorithms are asymptotically optimal for solving differentiable and strongly convex problems, the AC-SA algorithm, when employed with proper stepsize policies, can achieve optimal or …
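
As a rough indication of what optimality means in this setting (our paraphrase of the standard benchmark, not a statement lifted from the paper), for a strongly convex problem with modulus $\mu$, gradient Lipschitz constant $L$ and stochastic gradient variance $\sigma^2$, the accuracy achievable after $t$ iterations is of the order

$$ {\cal O}\!\left( \frac{L \|x_0 - x^\ast\|^2}{t^2} + \frac{\sigma^2}{\mu\, t} \right), $$

where $x_0$ is the starting point and $x^\ast$ an optimal solution.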

Aircraft landing problems with aircraft classes

This paper focuses on the aircraft landing problem, which is to assign landing times to aircraft approaching the airport under consideration. Each aircraft’s landing time must lie in a time interval encompassing a target landing time. If the actual landing time deviates from the target landing time, additional costs occur that depend on the amount …
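
A generic formulation along these lines (our illustration; the symbols $E_i$, $L_i$, $T_i$ for the earliest, latest and target landing times of aircraft $i$ are hypothetical notation, not taken from the paper) penalizes earliness $e_i$ and tardiness $d_i$:

$$ \min \sum_i \big( c_i^- e_i + c_i^+ d_i \big) \quad \text{s.t.} \quad E_i \le t_i \le L_i, \quad t_i - T_i = d_i - e_i, \quad e_i, d_i \ge 0, $$

together with separation constraints that keep consecutive landings sufficiently far apart.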