ε-Optimality in Reverse Optimization

The purpose of this paper is to completely characterize the global approximate optimality (ε-optimality) in reverse convex optimization under the general nonconvex constraint “h(x) ≥ 0”. The main condition presented is obtained in terms of Fenchel’s ε-subdifferentials thanks to El Maghri’s ε-efficiency in difference vector optimization [J. Glob. Optim. 61 (2015) 803–812], after converting the …
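For reference, the Fenchel ε-subdifferential in which the main condition is expressed is the standard object (definition recalled here for convenience, not quoted from the paper):

```latex
\[
\partial_\varepsilon f(\bar x) \;=\;
\bigl\{\, x^{*} \in X^{*} \;:\;
f(x) \;\ge\; f(\bar x) + \langle x^{*},\, x - \bar x \rangle - \varepsilon
\ \ \text{for all } x \in X \,\bigr\},
\qquad \varepsilon \ge 0,
\]
```

which reduces to the usual convex subdifferential when ε = 0.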

Novel stepsize for some accelerated and stochastic optimization methods

First-order methods are widely used to solve optimization problems, and new ones must keep improving to keep pace with the constant developments in machine learning and mathematics. Among them, the branch of algorithms based on gradient descent has developed rapidly and achieved good results. Following that trend, in this article we study …

The stochastic Ravine accelerated gradient method with general extrapolation coefficients

In a real Hilbert space setting, we study the convergence properties of the stochastic Ravine accelerated gradient method for convex differentiable optimization. We consider the general form of this algorithm, where the extrapolation coefficients can vary with each iteration and the evaluation of the gradient is subject to random errors. This general …
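As a rough illustration of the kind of iteration analyzed here, the sketch below runs an accelerated, Ravine-type gradient step with iteration-varying extrapolation coefficients \(a_k = k/(k+3)\) and optional additive gradient noise; the coefficient choice, noise model, and all names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def stochastic_ravine(grad, y0, stepsize, n_iters, noise_scale=0.0, seed=0):
    """Illustrative sketch of a Ravine-type accelerated gradient iteration
    with varying extrapolation coefficients a_k = k / (k + 3) and additive
    gradient noise. Coefficients and noise model are assumptions, not the
    general family analyzed in the paper."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y0, dtype=float)      # extrapolated point
    x_prev = y.copy()
    for k in range(1, n_iters + 1):
        g = grad(y) + noise_scale * rng.standard_normal(y.shape)
        x = y - stepsize * g                      # gradient step at y_k
        y = x + (k / (k + 3.0)) * (x - x_prev)    # extrapolation step
        x_prev = x
    return x

# minimize f(y) = 0.5 * ||y||^2 (gradient is y); exact gradients here
x_final = stochastic_ravine(lambda y: y, np.array([5.0]), 0.1, 500)
```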

On Averaging and Extrapolation for Gradient Descent

This work considers the effect of averaging, and more generally extrapolation, of the iterates of gradient descent in smooth convex optimization. After running the method, rather than reporting the final iterate, one can report either a convex combination of the iterates (averaging) or a generic combination of the iterates (extrapolation). For several common stepsize …
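A minimal sketch of the two reporting schemes described above, assuming a generic smooth convex objective; the function name, signature, and uniform default weights are illustrative, not taken from the paper:

```python
import numpy as np

def gd_report(grad, x0, stepsize, n_iters, weights=None):
    """Run gradient descent, then report a weighted combination of the
    iterates: nonnegative weights summing to one give averaging, while
    generic weights give an extrapolation. (Illustrative sketch only.)"""
    x = np.asarray(x0, dtype=float)
    iterates = [x]
    for _ in range(n_iters):
        x = x - stepsize * grad(x)
        iterates.append(x)
    if weights is None:
        # default: uniform averaging over all iterates
        weights = np.full(len(iterates), 1.0 / len(iterates))
    return sum(w * xk for w, xk in zip(weights, iterates))

# minimize f(x) = 0.5 * x^2 from x0 = 4; the average lags the last iterate
x_avg = gd_report(lambda x: x, np.array([4.0]), 0.5, 20)
```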

A novel adaptive stepsize for proximal gradient method solving mixed variational inequality problems and applications

In this paper, we propose a new algorithm for solving monotone mixed variational inequality problems in real Hilbert spaces, based on the proximal gradient method. Our new algorithm uses a novel adaptive stepsize, which is proved to be increasing to a positive limit. The weak convergence and strong convergence with an R-linear rate of our new algorithm …
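The basic iteration behind such methods can be sketched as follows, with a constant stepsize standing in for the paper's adaptive rule; the operator, the choice \(g = \|\cdot\|_1\), and all names are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (here g is the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_mixed_vi(F, prox_g, x0, stepsize, n_iters):
    """Generic proximal gradient iteration x_{k+1} = prox_{lam*g}(x_k - lam*F(x_k))
    for the mixed VI: find x with <F(x), y - x> + g(y) - g(x) >= 0 for all y.
    A constant stepsize replaces the paper's adaptive rule (sketch only)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = prox_g(x - stepsize * F(x), stepsize)
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])   # positive definite, hence monotone
b = np.array([-2.0, -0.2])
x = prox_grad_mixed_vi(lambda x: A @ x + b, soft_threshold, np.zeros(2), 0.4, 200)
```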

Variance Reduction and Low Sample Complexity in Stochastic Optimization via Proximal Point Method

This paper proposes a stochastic proximal point method to solve a stochastic convex composite optimization problem. High probability results in stochastic optimization typically hinge on restrictive assumptions on the stochastic gradient noise, for example, sub-Gaussian distributions. Assuming only weak conditions such as bounded variance of the stochastic gradient, this paper establishes a low sample complexity …
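For intuition, a stochastic proximal point step solves a regularized subproblem at each sampled function. The sketch below uses the scalar quadratic loss \(f(x;\xi) = \tfrac12 (x-\xi)^2\), whose proximal step has a closed form, and a decaying stepsize \(\lambda_k = 1/k\); these are illustrative assumptions, not the paper's general composite setting.

```python
import numpy as np

def stochastic_prox_point(samples, x0):
    """Stochastic proximal point sketch for f(x; xi) = 0.5 * (x - xi)^2
    with decaying stepsize lam_k = 1/k. Illustrative only; the paper
    treats general convex composite objectives."""
    x = x0
    for k, xi in enumerate(samples, start=1):
        lam = 1.0 / k
        # x_{k+1} = argmin_z 0.5*(z - xi)^2 + (1 / (2*lam)) * (z - x_k)^2
        x = (x + lam * xi) / (1.0 + lam)
    return x

rng = np.random.default_rng(0)
samples = rng.normal(loc=3.0, scale=1.0, size=5000)
x_hat = stochastic_prox_point(samples, 0.0)   # should approach the mean, 3.0
```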

Extending the Reach of First-Order Algorithms for Nonconvex Min-Max Problems with Cohypomonotonicity

We focus on constrained, \(L\)-smooth, nonconvex-nonconcave min-max problems either satisfying \(\rho\)-cohypomonotonicity or admitting a solution to the \(\rho\)-weakly Minty Variational Inequality (MVI), where larger values of the parameter \(\rho>0\) correspond to a greater degree of nonconvexity. These problem classes include examples in two-player reinforcement learning, interaction-dominant min-max problems, and certain synthetic test problems …
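For reference, the \(\rho\)-weakly Minty Variational Inequality for an operator \(F\) (recalled here in one standard normalization, which may differ from the paper's by a constant factor) asks for a point \(x^{\star}\) such that

```latex
\[
\langle F(x),\, x - x^{\star} \rangle \;\ge\; -\rho \,\|F(x)\|^{2}
\qquad \text{for all feasible } x,
\]
```

so that \(\rho = 0\) recovers the classical Minty VI and larger \(\rho\) permits more nonmonotone behavior.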

Accurate and Warm-Startable Linear Cutting-Plane Relaxations for ACOPF

We present a linear cutting-plane relaxation approach that rapidly proves tight lower bounds for the Alternating Current Optimal Power Flow Problem (ACOPF). Our method leverages outer-envelope linear cuts for well-known second-order cone relaxations for ACOPF along with modern cut management techniques. These techniques prove effective on a broad family of ACOPF instances, including the largest …

Non-facial exposedness of copositive cones over symmetric cones

In this paper, we consider copositive cones over symmetric cones and show that they are never facially exposed when the underlying cone has dimension at least 2. We do so by explicitly exhibiting a non-exposed extreme ray. Our result extends the known fact that the cone of copositive matrices over the nonnegative orthant is not …
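For reference, the copositive cone over a closed cone \(K\) in a Euclidean space and the notion of an exposed face can be recalled as follows (standard definitions, not quoted from the paper):

```latex
\[
\mathcal{COP}(K) \;=\;
\{\, A = A^{*} \;:\; \langle A x,\, x \rangle \ge 0
\ \ \text{for all } x \in K \,\},
\]
```

and a face \(F\) of a convex cone \(C\) is exposed when \(F = C \cap H\) for some supporting hyperplane \(H\) of \(C\); a non-exposed extreme ray is one admitting no such hyperplane.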