Convergence rate of inexact proximal point methods with relative error criteria for convex optimization

In this paper, we consider a class of inexact proximal point methods for convex optimization which allows a relative error tolerance in the approximate solution of each proximal subproblem. By exploiting the special structure of convex optimization problems, we are able to derive surprising complexity bounds for the aforementioned class. As a consequence, we show …
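
As a hedged illustration of the relative-error idea (a minimal sketch, not the paper's exact acceptance criterion), the loop below approximately solves each proximal subproblem $\min_y f(y) + \|y - x\|^2/(2\lambda)$, stopping the inner solve once the subproblem residual is small relative to the step $\|y - x\|$. The names `grad_f`, `lam`, `sigma` and the particular test are illustrative assumptions.

```python
import numpy as np

def inexact_prox_point(grad_f, x0, lam=1.0, sigma=0.5, outer=50, tol=1e-8):
    """Inexact proximal point sketch: each subproblem
    min_y f(y) + ||y - x||^2 / (2*lam) is solved only until a
    relative error criterion holds (residual <= sigma/lam * ||y - x||)."""
    x = x0.copy()
    for _ in range(outer):
        y = x.copy()
        for _ in range(1000):                      # inner (approximate) prox solve
            r = grad_f(y) + (y - x) / lam          # gradient of the prox subproblem
            if np.linalg.norm(r) <= (sigma / lam) * np.linalg.norm(y - x) + tol:
                break
            y -= 0.1 * r                           # plain gradient step on the subproblem
        x = y
        if np.linalg.norm(grad_f(x)) < tol:        # outer stopping test
            break
    return x

# toy usage: f(x) = 0.5 * x^T A x with A positive definite
A = np.array([[3.0, 1.0], [1.0, 2.0]])
print(inexact_prox_point(lambda x: A @ x, np.array([5.0, -3.0])))
```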

Iteration-complexity of block-decomposition algorithms and the alternating minimization augmented Lagrangian method

In this paper, we consider the monotone inclusion problem consisting of the sum of a continuous monotone map and a point-to-set maximal monotone operator with a separable two-block structure, and we introduce a framework of block-decomposition prox-type algorithms for solving it which allows each of the single-block proximal subproblems to be solved in an …
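
To make the block-decomposition idea concrete, here is a minimal sketch of an alternating minimization augmented Lagrangian loop on a toy two-block problem, min 0.5||x-a||^2 + 0.5||z-b||^2 s.t. x = z; the quadratic objectives, the constraint, and the penalty `rho` are assumptions chosen so that each single-block subproblem has a closed form.

```python
import numpy as np

def alt_min_al(a, b, rho=1.0, iters=100):
    """Alternating minimization augmented Lagrangian sketch for
    min 0.5||x-a||^2 + 0.5||z-b||^2 s.t. x = z."""
    x = z = np.zeros_like(a)
    u = np.zeros_like(a)                       # Lagrange multiplier
    for _ in range(iters):
        x = (a - u + rho * z) / (1 + rho)      # minimize L_rho over the x-block
        z = (b + u + rho * x) / (1 + rho)      # minimize L_rho over the z-block
        u = u + rho * (x - z)                  # multiplier update
    return x, z

# converges to x = z = (a + b) / 2
print(alt_min_al(np.array([1.0, 0.0]), np.array([0.0, 1.0])))
```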

Double smoothing technique for infinite-dimensional optimization problems with applications to Optimal Control

In this paper, we propose an efficient technique for solving some infinite-dimensional problems over the sets of functions of time. In our problem, besides the convex point-wise constraints on state variables, we have convex coupling constraints with finite-dimensional image. Hence, we can formulate a finite-dimensional dual problem, which can be solved by efficient gradient methods. …
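
The sketch below illustrates the double smoothing principle on a finite-dimensional stand-in, min c^T x s.t. Ax = b, x in [0,1]^n: a first smoothing term (mu) makes the dual gradient Lipschitz, a second (kappa) makes the dual strongly concave so plain ascent converges fast. The toy problem and the parameters `mu`, `kappa` are assumptions, not the paper's setting.

```python
import numpy as np

def double_smoothed_dual(A, b, c, mu=1e-2, kappa=1e-3, iters=5000):
    """Double smoothing sketch for min c^T x s.t. Ax = b, x in [0,1]^n."""
    m, n = A.shape
    lam = np.zeros(m)
    L = np.linalg.norm(A, 2) ** 2 / mu + kappa        # Lipschitz constant of the dual gradient
    for _ in range(iters):
        x = np.clip(-(c + A.T @ lam) / mu, 0.0, 1.0)  # minimizer of the mu-smoothed Lagrangian
        g = A @ x - b - kappa * lam                   # gradient of the doubly smoothed dual
        lam += g / L                                  # dual ascent step
    return x, lam

A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
c = np.array([1.0, 2.0, 1.0])
x, lam = double_smoothed_dual(A, b, c)
print(x, A @ x - b)                                   # near-feasible primal point
```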

Complexity of variants of Tseng’s modified F-B splitting and Korpelevich’s methods for generalized variational inequalities with applications to saddle point and convex optimization problems

In this paper, we consider both a variant of Tseng’s modified forward-backward splitting method and an extension of Korpelevich’s method for solving generalized variational inequalities with Lipschitz continuous operators. By showing that these methods are special cases of the hybrid proximal extragradient (HPE) method introduced by Solodov and Svaiter, we derive iteration-complexity bounds for them …
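
For reference, Korpelevich's extragradient scheme itself is short: an extrapolation step followed by a correction step, both projected onto the feasible set. The sketch below applies it to an assumed toy monotone operator (a rotation plus a shift) on a box; the stepsize must stay below 1/L for the operator's Lipschitz constant L.

```python
import numpy as np

def extragradient(F, proj, x0, step, iters=2000):
    """Korpelevich's extragradient sketch for VI(F, C):
    find x in C with <F(x), y - x> >= 0 for all y in C."""
    x = x0.copy()
    for _ in range(iters):
        y = proj(x - step * F(x))   # extrapolation step
        x = proj(x - step * F(y))   # correction step
    return x

# toy monotone (skew) operator on the box [-1, 1]^2
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
q = np.array([0.2, -0.1])
F = lambda x: M @ x + q
proj = lambda x: np.clip(x, -1.0, 1.0)
print(extragradient(F, proj, np.array([1.0, 1.0]), step=0.2))
```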

Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: a Generic Algorithmic Framework

In this paper we present a generic algorithmic framework, namely, the accelerated stochastic approximation (AC-SA) algorithm, for solving strongly convex stochastic composite optimization (SCO) problems. While the classical stochastic approximation (SA) algorithms are asymptotically optimal for solving differentiable and strongly convex problems, the AC-SA algorithm, when employed with proper stepsize policies, can achieve optimal or …
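
The skeleton of such an accelerated SA scheme is a three-sequence loop: query a stochastic gradient at a "middle" point, take a step, and aggregate. The sketch below uses a simplified stepsize policy chosen for illustration only, not the paper's optimal one; `sgrad`, `alpha`, and `gamma` are assumptions.

```python
import numpy as np

def ac_sa(sgrad, x0, mu, iters=1000):
    """Simplified accelerated stochastic approximation (AC-SA style) sketch.
    sgrad(x) returns a stochastic gradient of a mu-strongly convex objective."""
    x = x0.copy()
    x_ag = x0.copy()
    for t in range(1, iters + 1):
        alpha = 2.0 / (t + 1)
        gamma = 2.0 / (mu * (t + 1))            # assumed (illustrative) stepsize policy
        x_md = (1 - alpha) * x_ag + alpha * x   # middle (query) point
        g = sgrad(x_md)                         # stochastic gradient
        x = x - gamma * g                       # gradient step
        x_ag = (1 - alpha) * x_ag + alpha * x   # aggregated (output) point
    return x_ag

# toy usage: f(x) = 0.5 * ||x||^2 with additive gradient noise
rng = np.random.default_rng(0)
print(ac_sa(lambda x: x + 0.1 * rng.standard_normal(x.shape), np.ones(5), mu=1.0))
```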

Aircraft landing problems with aircraft classes

This paper focuses on the aircraft landing problem, which is to assign landing times to aircraft approaching the airport under consideration. Each aircraft’s landing time must lie in a time interval encompassing a target landing time. If the actual landing time deviates from the target landing time, additional costs occur which depend on the amount …
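
A generic formulation consistent with this description (all symbols are illustrative placeholders, since the abstract is truncated) reads

\[
\min_{t}\ \sum_{i} \Bigl( \alpha_i \max(T_i - t_i,\, 0) + \beta_i \max(t_i - T_i,\, 0) \Bigr)
\quad \text{s.t.} \quad E_i \le t_i \le L_i \ \ \forall i, \qquad t_j - t_i \ge s_{ij} \ \text{if } i \text{ lands before } j,
\]

where $t_i$ is the landing time of aircraft $i$, $T_i$ its target time, $[E_i, L_i]$ its landing window, $\alpha_i$ and $\beta_i$ the earliness and tardiness penalties, and $s_{ij}$ the required separation time.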

Information Geometry and Primal-Dual Interior-point Algorithms

In this paper, we study polynomial-time interior-point algorithms from the viewpoint of information geometry. We introduce an information geometric structure for a conic linear program based on a self-concordant barrier function. A Riemannian metric is defined via the Hessian of the barrier function. We introduce two connections $\nabla$ and $\nabla^*$ which roughly correspond to the primal and …
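
A minimal concrete instance of such a barrier-induced metric: for the log barrier $F(x) = -\sum_i \log x_i$ on the positive orthant, the local norm of a tangent vector is $\|v\|_x = \sqrt{v^\top \nabla^2 F(x)\, v}$. The sketch below computes it; the choice of barrier and cone is an assumption for illustration.

```python
import numpy as np

def barrier_local_norm(x, v):
    """Local norm ||v||_x induced by the Hessian of the log barrier
    F(x) = -sum(log x_i) on the positive orthant."""
    H = np.diag(1.0 / x**2)            # Hessian of the log barrier
    return float(np.sqrt(v @ H @ v))

print(barrier_local_norm(np.array([1.0, 0.5]), np.array([0.1, 0.1])))
```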

Complexity of the Critical Node Problem over trees

In this paper we deal with the Critical Node Problem (CNP), i.e., the problem of searching for a given number K of nodes in a graph G whose removal minimizes the number of connections between pairs of nodes in the residual graph. While the NP-completeness of this problem for general graphs has already been established …
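
To fix the objective, the sketch below counts the connected pairs remaining after removing a node set and solves tiny instances by brute force (exponential in K, for illustration only, not the paper's tree algorithms). On a 5-node path, removing the middle node is optimal.

```python
from itertools import combinations

def pairwise_connectivity(n, edges, removed):
    """Number of still-connected node pairs after deleting `removed`."""
    alive = [v for v in range(n) if v not in removed]
    adj = {v: set() for v in alive}
    for u, w in edges:
        if u in adj and w in adj:
            adj[u].add(w); adj[w].add(u)
    seen, total = set(), 0
    for s in alive:                       # sum C(|comp|, 2) over components
        if s in seen:
            continue
        comp, stack = 0, [s]
        seen.add(s)
        while stack:
            v = stack.pop(); comp += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w); stack.append(w)
        total += comp * (comp - 1) // 2
    return total

def critical_nodes_bruteforce(n, edges, K):
    """Exhaustive CNP solver for tiny graphs (illustration only)."""
    return min(combinations(range(n), K),
               key=lambda S: pairwise_connectivity(n, edges, set(S)))

# toy tree: the path 0-1-2-3-4, remove K = 1 node -> node 2
print(critical_nodes_bruteforce(5, [(0, 1), (1, 2), (2, 3), (3, 4)], 1))
```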

Fast Multiple Splitting Algorithms for Convex Optimization

We present in this paper two different classes of general $K$-splitting algorithms for solving finite-dimensional convex optimization problems. Under the assumption that the function being minimized has a Lipschitz continuous gradient, we prove that the number of iterations needed by the first class of algorithms to obtain an $\epsilon$-optimal solution is $O(1/\epsilon)$. The algorithms in …
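
As a baseline for the splitting idea (a simple Jacobi-type sketch, not the paper's accelerated variants), one can average the points obtained by a gradient step on each component $f_i$ of $f = \sum_i f_i$; this equals a gradient step on $f$ with stepsize $1/(KL)$, hence converges at the basic $O(1/\epsilon)$ rate for $L$-smooth components.

```python
import numpy as np

def multiple_splitting(grads, x0, L, iters=500):
    """K-splitting sketch: average the results of one gradient step per
    component; equivalent to a 1/(K*L) gradient step on the sum."""
    x = x0.copy()
    for _ in range(iters):
        x = np.mean([x - (1.0 / L) * g(x) for g in grads], axis=0)
    return x

# toy usage: f1(x) = 0.5||x - a||^2, f2(x) = 0.5||x - b||^2 (L = 1 each)
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(multiple_splitting([lambda x: x - a, lambda x: x - b], np.zeros(2), L=1.0))
```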

On the complexity of steepest descent, Newton’s and regularized Newton’s methods for nonconvex unconstrained optimization

It is shown that steepest descent and Newton’s method for unconstrained nonconvex optimization under standard assumptions may both require a number of iterations and function evaluations arbitrarily close to $O(\epsilon^{-2})$ to drive the norm of the gradient below $\epsilon$. This shows that the upper bound of $O(\epsilon^{-2})$ evaluations known for the steepest descent …
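
To make the complexity measure concrete, the sketch below counts how many fixed-stepsize steepest descent iterations are needed to reach $\|\nabla f(x)\| \le \epsilon$; for smooth nonconvex $f$ this count scales like $O(\epsilon^{-2})$ in the worst case. The test function is an assumption for illustration.

```python
import numpy as np

def steepest_descent_count(grad, x0, L, eps, max_iters=10**6):
    """Count steepest descent iterations until ||grad f(x)|| <= eps,
    using the standard 1/L stepsize."""
    x = x0.copy()
    for k in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            return k
        x = x - g / L
    return max_iters

# toy usage: f(x) = x^2 + 3*sin(x)^2, whose gradient is 8-Lipschitz
g = lambda x: 2 * x + 3 * np.sin(2 * x)
print(steepest_descent_count(g, np.array([3.0]), L=8.0, eps=1e-4))
```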