A necessary condition for the guarantee of the superiorization method

We study a method whose principal task is convex feasibility-seeking and whose secondary effort is objective function value reduction. This is the well-known superiorization method (SM), in which the iterates of an asymptotically convergent iterative feasibility-seeking algorithm are perturbed by objective function nonascent steps. We investigate under what conditions a sequence generated by an SM … Read more
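
As a concrete illustration of the pattern the abstract describes (not the paper's algorithm or its guarantee), here is a minimal Python sketch: alternating projections onto two balls play the role of the feasibility-seeking algorithm, and normalized negative-gradient steps with summable step sizes \( a^k \) play the role of the nonascent perturbations. All problem data below are invented for the example.

```python
# Minimal superiorization sketch: perturb, then apply the feasibility-seeking
# operator, with summable perturbation step sizes so asymptotic convergence
# of the feasibility-seeking iteration is (heuristically) preserved.
import numpy as np

def project_ball(x, center, radius):
    """Euclidean projection onto a closed ball (a stand-in constraint set)."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def superiorize(x, f_grad, n_iters=200, a=0.99):
    for k in range(n_iters):
        g = f_grad(x)
        norm_g = np.linalg.norm(g)
        if norm_g > 0:
            x = x - (a ** k) * g / norm_g  # nonascent perturbation, sum a^k < inf
        x = project_ball(x, np.zeros(2), 1.5)            # feasibility-seeking
        x = project_ball(x, np.array([1.0, 0.0]), 1.0)   # sweep over the sets
    return x

# Reduce f(x) = ||x||^2 while seeking a point in the balls' intersection.
x_final = superiorize(np.array([3.0, 3.0]), lambda x: 2.0 * x)
```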

Facial structure of copositive and completely positive cones over a second-order cone

We classify the faces of the copositive and completely positive cones over a second-order cone and investigate their dimension and exposedness properties. We then compute two parameters related to chains of faces of both cones. Finally, we discuss possible extensions of the results with a view toward analyzing the facial structure of general … Read more
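
For orientation, the two cones can be recalled in standard notation (ours; the paper's conventions may differ), with \( K \) the second-order cone:

```latex
% Standard definitions (our notation); COP(K) and CP(K) are dual to each
% other under the trace inner product.
\[
  K = \bigl\{ (x_0, \bar{x}) \in \mathbb{R} \times \mathbb{R}^{n-1} : x_0 \ge \|\bar{x}\|_2 \bigr\},
\]
\[
  \mathcal{COP}(K) = \bigl\{ A \in \mathbb{S}^n : x^{\top} A x \ge 0 \ \text{for all } x \in K \bigr\},
  \qquad
  \mathcal{CP}(K) = \operatorname{cone} \bigl\{ x x^{\top} : x \in K \bigr\}.
\]
```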

Descent Scheme for a Class of Bilevel Programming Problems

In this paper, a class of bilevel programming problems is studied in which the lower-level problem is a quadratic program and the upper-level problem consists of a nonlinear objective function with coupling constraints. An iterative scheme is developed that generates a sequence of points converging to a solution of this problem. In … Read more
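
A minimal sketch of the kind of scheme this suggests, under strong simplifying assumptions introduced only for illustration (an unconstrained lower-level QP, so its solution map \( y(x) = Q^{-1} A x \) is explicit, and plain gradient descent on the upper level; the paper's setting with coupling constraints is more general):

```python
# Descent on the upper-level objective through the lower-level solution map.
import numpy as np

Q = np.array([[2.0, 0.0], [0.0, 4.0]])    # lower-level QP Hessian (PD)
A = np.array([[1.0, 0.5], [0.0, 1.0]])
Q_inv = np.linalg.inv(Q)

def lower_level(x):
    """Unique minimizer of 0.5 y'Qy - (Ax)'y, i.e. y(x) = Q^{-1} A x."""
    return Q_inv @ (A @ x)

def upper_grad(x):
    """Chain rule through y(x): the Jacobian of y(x) is Q^{-1} A."""
    y = lower_level(x)
    return 2.0 * (x - 1.0) + (Q_inv @ A).T @ (2.0 * y)

x = np.zeros(2)
for _ in range(500):                      # plain gradient descent iteration
    x = x - 0.1 * upper_grad(x)
```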

Stable Set Polytopes with Rank |V(G)|/3 for the Lovász–Schrijver SDP Operator

We study the lift-and-project rank of the stable set polytope of graphs with respect to the Lovász–Schrijver SDP operator \( \text{LS}_+ \) applied to the fractional stable set polytope. In particular, we show that for every positive integer \( \ell \), the smallest possible graph with \( \text{LS}_+ \)-rank \( \ell \) contains \( 3\ell … Read more
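
One common way to state the operator, recalled here for context (conventions vary across papers; this recap is not taken from the abstract):

```latex
% For P \subseteq [0,1]^n, let K(P) = cone{ (1, x) : x \in P } in R^{n+1},
% with coordinates indexed 0, 1, ..., n.
\[
  M_+(P) = \Bigl\{ Y \in \mathbb{S}^{n+1}_{+} :\; Y e_0 = \operatorname{diag}(Y),\;
  Y e_i \in K(P),\; Y (e_0 - e_i) \in K(P) \ \text{for } i = 1, \dots, n \Bigr\},
\]
\[
  \mathrm{LS}_+(P) = \bigl\{ x \in \mathbb{R}^n : (1, x) = Y e_0 \ \text{for some } Y \in M_+(P) \bigr\}.
\]
% The LS_+-rank of P is the least k with LS_+^k(P) equal to the integer hull of P.
```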

Projected proximal gradient trust-region algorithm for nonsmooth optimization

We consider trust-region methods for solving optimization problems where the objective is the sum of a smooth, nonconvex function and a nonsmooth, convex regularizer. We extend the global convergence theory of such methods to include worst-case complexity bounds in the case of unbounded model Hessian growth, and introduce a new, simple nonsmooth trust-region subproblem solver … Read more
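
To fix ideas, here is a hedged Python sketch of such a subproblem solver for \( h = \lambda \|\cdot\|_1 \): projected proximal gradient iterations on the model \( g^{\top} s + \tfrac{1}{2} s^{\top} B s + h(x+s) \) over \( \|s\| \le \Delta \). This is our simplification of the idea, not the paper's solver, and alternating the prox with the ball projection is itself only an approximation.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def project_ball(s, radius):
    n = np.linalg.norm(s)
    return s if n <= radius else radius * s / n

def tr_subproblem(x, g, B, lam, delta, t=0.1, n_iters=50):
    """Approximately minimize g's + 0.5 s'Bs + lam*||x + s||_1 over ||s|| <= delta.

    The inner stepsize t should satisfy t < 1 / ||B|| for stability.
    """
    s = np.zeros_like(x)
    for _ in range(n_iters):
        grad = g + B @ s                                     # smooth-model gradient
        s = soft_threshold(x + (s - t * grad), t * lam) - x  # prox of lam*||x + .||_1
        s = project_ball(s, delta)                           # enforce trust region
    return s

x = np.array([1.0, -2.0])
g = np.array([0.5, 0.3])   # gradient of the smooth part f at x
B = np.eye(2)              # model Hessian (bounded in this toy example)
step = tr_subproblem(x, g, B, lam=0.1, delta=0.5)
```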

Unifying restart accelerated gradient and proximal bundle methods

This paper presents a novel restarted version of Nesterov’s accelerated gradient method and establishes its optimal iteration-complexity for solving convex smooth composite optimization problems. The proposed restart accelerated gradient method is shown to be a specific instance of the accelerated inexact proximal point framework introduced in “An accelerated hybrid proximal extragradient method for convex optimization … Read more
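
For readers unfamiliar with restarting, a minimal Python sketch of a restarted accelerated gradient method for smooth convex \( f \) follows; the function-value restart test below is the common heuristic, not the paper's rule or its complexity analysis.

```python
import numpy as np

def restarted_agm(f, grad_f, x0, L, n_iters=300):
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iters):
        x_new = y - grad_f(y) / L                         # step at extrapolated point
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)     # momentum update
        if f(x_new) > f(x):                               # restart: drop all momentum
            y, t_new = x_new.copy(), 1.0
        x, t = x_new, t_new
    return x

# Example on f(x) = 0.5 * ||x||^2, whose gradient is x and L = 1.
x_star = restarted_agm(lambda x: 0.5 * x @ x, lambda x: x, np.ones(5), L=1.0)
```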

Primal-dual proximal bundle and conditional gradient methods for convex problems

This paper studies the primal-dual convergence and iteration-complexity of proximal bundle methods for solving nonsmooth problems with convex structures. More specifically, we develop a family of primal-dual proximal bundle methods for solving convex nonsmooth composite optimization problems and establish the iteration-complexity in terms of a primal-dual gap. We also propose a class of proximal bundle … Read more
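
A bare-bones proximal bundle loop may help fix notation (a generic cutting-plane model plus a proximal term, with cvxpy used for the subproblem). The paper's primal-dual variants additionally track a dual sequence and a primal-dual gap, which this sketch omits; the objective, prox stepsize, and descent parameter below are invented.

```python
import numpy as np
import cvxpy as cp

def f(x):                      # example nonsmooth convex objective: ||x||_1
    return np.abs(x).sum()

def subgrad(x):                # a subgradient of ||x||_1
    return np.sign(x)

def bundle_step(center, cuts, t):
    """Minimize the cutting-plane model plus (1/2t)||x - center||^2."""
    x, r = cp.Variable(center.size), cp.Variable()
    cons = [r >= fz + gz @ (x - z) for (z, fz, gz) in cuts]
    cp.Problem(cp.Minimize(r + cp.sum_squares(x - center) / (2 * t)), cons).solve()
    return x.value, r.value    # trial point and model value there

center = np.array([2.0, -3.0])
cuts = [(center.copy(), f(center), subgrad(center))]
for _ in range(20):
    trial, model_val = bundle_step(center, cuts, t=1.0)
    cuts.append((trial.copy(), f(trial), subgrad(trial)))      # add a new cut
    if f(trial) <= f(center) - 0.5 * (f(center) - model_val):  # serious step
        center = trial
```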

Some Unified Theory for Variance Reduced Prox-Linear Methods

This work considers the nonconvex, nonsmooth problem of minimizing a composite objective of the form $f(g(x))+h(x)$, where the inner mapping $g$ is a smooth finite sum or expectation amenable to variance reduction. In such settings, prox-linear methods can enjoy variance-reduced speed-ups despite the presence of nonsmoothness. We provide a unified convergence theory applicable to a … Read more
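
For reference, the basic (full-batch) prox-linear step reads as follows in standard form; the variance-reduced methods the paper studies replace \( g \) and its Jacobian by stochastic estimators:

```latex
% Prox-linear step for min_x f(g(x)) + h(x), with stepsize t > 0.
\[
  x_{k+1} \in \operatorname*{argmin}_{y}\;
  f\bigl( g(x_k) + \nabla g(x_k)\,(y - x_k) \bigr) + h(y)
  + \frac{1}{2t} \| y - x_k \|^2 .
\]
```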

A subgradient splitting algorithm for optimization on nonpositively curved metric spaces

Many of the primal ingredients of convex optimization extend naturally from Euclidean to Hadamard spaces — nonpositively curved metric spaces like Euclidean, Hilbert, and hyperbolic spaces, metric trees, and more general CAT(0) cubical complexes. Linear structure, however, and the duality theory it supports are absent. Nonetheless, we introduce a new type of subgradient for convex … Read more
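
For context, one standard characterization (recalled here, not the paper's contribution): a Hadamard space is a complete geodesic metric space \( (X, d) \) in which, for every point \( x \) and every geodesic midpoint \( m \) of points \( p \) and \( q \), the CAT(0) inequality holds:

```latex
\[
  d(x, m)^2 \;\le\; \tfrac{1}{2}\, d(x, p)^2 + \tfrac{1}{2}\, d(x, q)^2
  - \tfrac{1}{4}\, d(p, q)^2 ,
\]
% with equality in Euclidean space (the parallelogram law).
```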

Inexact FISTA-like Methods with Adaptive Backtracking

Accelerated proximal gradient methods have become a useful tool in large-scale convex optimization, especially for variational regularization with nonsmooth priors. Prevailing convergence analyses assume that the proximal and gradient steps can be performed exactly. However, in some practical applications the proximal or gradient steps must be computed inexactly, which can harm convergence speed … Read more
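
To illustrate the exact baseline against which such inexactness is measured, here is a minimal FISTA-with-backtracking sketch for \( \min_x f(x) + \lambda \|x\|_1 \) with exact proximal and gradient steps; the paper's focus is on inexact versions of both steps and on adaptive backtracking. The problem data in the usage example are invented.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fista_backtracking(f, grad_f, lam, x0, L0=1.0, eta=2.0, n_iters=100):
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(n_iters):
        fy, gy = f(y), grad_f(y)
        while True:            # increase L until the quadratic upper bound holds
            z = soft_threshold(y - gy / L, lam / L)
            if f(z) <= fy + gy @ (z - y) + 0.5 * L * np.sum((z - y) ** 2):
                break
            L *= eta
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = z + ((t - 1.0) / t_new) * (z - x)   # Nesterov extrapolation
        x, t = z, t_new
    return x

# Example: lasso-type problem 0.5 * ||Ax - b||^2 + lam * ||x||_1.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
x_hat = fista_backtracking(lambda x: 0.5 * np.sum((A @ x - b) ** 2),
                           lambda x: A.T @ (A @ x - b), lam=0.1, x0=np.zeros(5))
```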