A Unified Approach for Maximizing Continuous $\gamma$-weakly DR-submodular Functions

This paper presents a unified approach for maximizing continuous \(\gamma\)-weakly DR-submodular functions that encompasses a range of settings and oracle access types. Our approach includes a Frank-Wolfe type offline algorithm for both monotone and non-monotone functions, with different restrictions on the convex feasible region. We consider settings where the oracle provides access to either …
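
For orientation, a generic Frank-Wolfe (continuous greedy) step of the kind such algorithms build on is sketched below; the paper's exact update and stepsize schedule may differ. Given the current iterate \(x_k\) in the convex feasible region \(\mathcal{K}\) and gradient information \(\nabla F(x_k)\), the method moves toward a linear-optimization oracle solution:
\[
v_k \in \arg\max_{v \in \mathcal{K}} \langle v, \nabla F(x_k) \rangle, \qquad x_{k+1} = x_k + \eta_k \, v_k,
\]
with stepsizes \(\eta_k\) summing to one. For monotone \(\gamma\)-weakly DR-submodular objectives, schemes of this type are known to achieve approximation guarantees on the order of \(1 - e^{-\gamma}\).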

Novel stepsize for some accelerated and stochastic optimization methods

First-order methods must continually be improved to keep pace with ongoing developments in machine learning and mathematics, as they are among the most widely used methods for solving optimization problems. Among them, the family of algorithms based on gradient descent has developed rapidly and achieved good results. Following this trend, in this article we study …
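
As context for what a stepsize rule controls, recall the basic gradient descent iteration (a standard template, not the paper's specific scheme):
\[
x_{k+1} = x_k - \alpha_k \nabla f(x_k),
\]
where the choice of the stepsize sequence \(\alpha_k\) largely determines both the convergence guarantee and the practical behavior of the method; accelerated and stochastic variants modify this template but remain equally sensitive to \(\alpha_k\).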

The stochastic Ravine accelerated gradient method with general extrapolation coefficients

In the setting of a real Hilbert space, we study the convergence properties of the stochastic Ravine accelerated gradient method for convex differentiable optimization. We consider the general form of this algorithm, in which the extrapolation coefficients can vary from one iteration to the next and the evaluation of the gradient is subject to random errors. This general …
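
In its deterministic form with general extrapolation coefficients \(\alpha_k\) and stepsize \(s\), the Ravine method is commonly written as (a standard presentation; the stochastic version studied here replaces \(\nabla f\) by a noisy estimate):
\[
x_k = y_k - s \nabla f(y_k), \qquad y_{k+1} = x_k + \alpha_k (x_k - x_{k-1}),
\]
i.e., a gradient step from the extrapolated point followed by momentum extrapolation. It generates the same sequences as Nesterov's accelerated gradient method, read off in a different order.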

A Jacobi-type Newton method for Nash equilibrium problems with descent guarantees

A common strategy for solving an unconstrained two-player Nash equilibrium problem with continuous variables is to apply Newton's method to the system obtained from the corresponding first-order necessary optimality conditions. However, when the game dynamics are taken into account, it is not clear what each player's goal is when considering that they are taking their current …
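
Concretely, for a two-player game with objectives \(f_1(x_1, x_2)\) and \(f_2(x_1, x_2)\), the first-order necessary conditions form the system (a standard formulation, stated here for context):
\[
G(x) = \begin{pmatrix} \nabla_{x_1} f_1(x_1, x_2) \\ \nabla_{x_2} f_2(x_1, x_2) \end{pmatrix} = 0,
\]
and a Newton iteration solves \(G'(x_k)\, d_k = -G(x_k)\) and sets \(x_{k+1} = x_k + d_k\). A Jacobi-type scheme would have each player rely only on its own diagonal block \(\nabla^2_{x_i x_i} f_i\); the paper's precise variant may differ from this sketch.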

A Polyhedral Characterization of Linearizable Quadratic Combinatorial Optimization Problems

We introduce a polyhedral framework for characterizing instances of quadratic combinatorial optimization problems (QCOPs) that are linearizable, meaning that the quadratic objective can be equivalently rewritten as a linear one in a manner that preserves the objective function value at all feasible solutions. In particular, we show that an instance is linearizable if and only if …
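
In the standard notation of the linearizability literature (assuming binary decision variables; the paper's setting may also carry a linear cost term), an instance with feasible set \(\mathcal{F} \subseteq \{0,1\}^n\) and quadratic cost matrix \(Q\) is linearizable if there exists a cost vector \(c \in \mathbb{R}^n\) such that
\[
x^{\top} Q x = c^{\top} x \quad \text{for all } x \in \mathcal{F},
\]
so that the quadratic and linear objectives agree on every feasible solution.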

Uncertainty Quantification for Multiobjective Stochastic Convex Quadratic Programs

A multiobjective stochastic convex quadratic program (MOSCQP) is a multiobjective optimization problem with convex quadratic objectives that are observed with stochastic error. MOSCQP is a useful problem formulation arising, for example, in model calibration and nonlinear system identification when a single regression model combines data from multiple distinct sources, resulting in a multiobjective least squares …
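
Schematically (a generic formulation consistent with the abstract, not necessarily the paper's exact notation), an MOSCQP takes the form
\[
\min_{x \in \mathbb{R}^n} \; \bigl( f_1(x), \ldots, f_m(x) \bigr), \qquad f_i(x) = \tfrac{1}{2} x^{\top} Q_i x + c_i^{\top} x + b_i, \quad Q_i \succeq 0,
\]
where each objective can only be accessed through noisy evaluations, e.g. \(f_i(x) + \varepsilon\) with zero-mean error \(\varepsilon\).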

Problem-Parameter-Free Decentralized Nonconvex Stochastic Optimization

Existing decentralized algorithms usually require knowledge of problem parameters to update their local iterates. For example, hyperparameters such as the learning rate usually require knowledge of the Lipschitz constant of the global gradient, or of topological information about the communication network, quantities that are usually not accessible in practice. In this paper, we propose D-NASA, the first algorithm …
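
For context, a generic decentralized stochastic gradient template (not the paper's D-NASA update, which is designed to be parameter-free) has each node \(i\) mix with its neighbors through a weight matrix \(W = (w_{ij})\) and step along a local stochastic gradient \(g_i^{k}\):
\[
x_i^{k+1} = \sum_{j} w_{ij} \, x_j^{k} - \alpha \, g_i^{k},
\]
where tuning the learning rate \(\alpha\) typically requires the global Lipschitz constant and spectral information about \(W\); removing exactly this requirement is the paper's goal.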

A novel adaptive stepsize for proximal gradient method solving mixed variational inequality problems and applications

In this paper, we propose a new algorithm for solving monotone mixed variational inequality problems in real Hilbert spaces, based on the proximal gradient method. Our new algorithm uses a novel adaptive stepsize, which is proven to be nondecreasing and to converge to a positive limit. The weak convergence, and the strong convergence with an R-linear rate, of our new algorithm …
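
Recall that the monotone mixed variational inequality problem asks for \(x^{*}\) such that
\[
\langle F(x^{*}), y - x^{*} \rangle + g(y) - g(x^{*}) \ge 0 \quad \text{for all } y,
\]
where \(F\) is monotone and \(g\) is convex, and the basic proximal gradient iteration takes the form \(x_{k+1} = \operatorname{prox}_{\lambda_k g}\bigl(x_k - \lambda_k F(x_k)\bigr)\). This is the standard template; the paper's contribution is the adaptive rule for \(\lambda_k\), and its algorithm may add further correction steps.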

Riemannian trust-region methods for strict saddle functions with complexity guarantees

The difficulty of minimizing a nonconvex function is in part explained by the presence of saddle points. This slows down optimization algorithms and impacts worst-case complexity guarantees. However, many nonconvex problems of interest possess a favorable structure for optimization, in the sense that saddle points can be escaped efficiently by appropriate algorithms. This strict saddle …
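
As a reminder of the machinery involved (the standard Riemannian trust-region formulation, stated here for context), each iteration approximately solves the subproblem
\[
\min_{\eta \in T_{x_k}\mathcal{M}, \; \|\eta\| \le \Delta_k} \; \langle \operatorname{grad} f(x_k), \eta \rangle + \tfrac{1}{2} \langle \operatorname{Hess} f(x_k)[\eta], \eta \rangle
\]
over the tangent space \(T_{x_k}\mathcal{M}\) with trust-region radius \(\Delta_k\), then retracts the resulting step back onto the manifold; the second-order (Hessian) information is what allows saddle points to be escaped with complexity guarantees.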

Extending the Reach of First-Order Algorithms for Nonconvex Min-Max Problems with Cohypomonotonicity

We focus on constrained, \(L\)-smooth, nonconvex-nonconcave min-max problems that either satisfy \(\rho\)-cohypomonotonicity or admit a solution to the \(\rho\)-weakly Minty Variational Inequality (MVI), where larger values of the parameter \(\rho > 0\) correspond to a greater degree of nonconvexity. These problem classes include examples in two-player reinforcement learning, interaction-dominant min-max problems, and certain synthetic test problems …
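
For reference, the two conditions named above are commonly stated as follows (standard definitions from the weak-Minty literature; conventions differ by a constant factor in \(\rho\)): an operator \(F\) is \(\rho\)-cohypomonotone if
\[
\langle F(x) - F(y), x - y \rangle \ge -\rho \, \|F(x) - F(y)\|^{2} \quad \text{for all } x, y,
\]
and the \(\rho\)-weak MVI admits a solution \(x^{*}\) if
\[
\langle F(x), x - x^{*} \rangle \ge -\rho \, \|F(x)\|^{2} \quad \text{for all feasible } x;
\]
setting \(\rho = 0\) recovers monotonicity and the classical Minty inequality, respectively.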