A Subgradient Projection Method with Outer Approximation for Solving Semidefinite Programming Problems

We explore the combination of subgradient projection with outer approximation to solve semidefinite programming problems. We compare several ways to construct outer approximations using the problem structure. The resulting approach enjoys the strengths of both subgradient projection and outer approximation methods. Preliminary computational results on the semidefinite programming relaxations of graph partitioning and max-cut show …
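The outer-approximation ingredient can be pictured as follows: whenever a trial point leaves the PSD cone, an eigenvector of its most negative eigenvalue yields a valid linear cut. A minimal sketch of that separation step, assuming a NumPy setting (not the authors' exact construction; `psd_cut` is an illustrative name):

```python
import numpy as np

def psd_cut(X):
    """Return an outer-approximation cut for the PSD cone violated at X.

    If X has a negative eigenvalue with unit eigenvector v, then every
    PSD matrix Y satisfies v^T Y v >= 0, while v^T X v < 0, so the
    linear inequality <vv^T, Y> >= 0 separates X from the cone.
    """
    w, V = np.linalg.eigh(X)      # eigenvalues in ascending order
    if w[0] >= 0:
        return None               # X is already PSD: no cut needed
    v = V[:, 0]                   # eigenvector of the most negative eigenvalue
    return np.outer(v, v)         # cut matrix G with <G, Y> >= 0 valid for PSD Y

# Example: a symmetric matrix with eigenvalues -1 and 3
X = np.array([[1.0, 2.0], [2.0, 1.0]])
G = psd_cut(X)
print(np.sum(G * X))              # -1.0: the cut indeed separates X
```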

Projection onto hyperbolicity cones and beyond: a dual Frank-Wolfe approach

We discuss the problem of projecting a point onto an arbitrary hyperbolicity cone from both theoretical and numerical perspectives. While hyperbolicity cones are furnished with a generalization of the notion of eigenvalues, obtaining closed-form expressions for the projection operator, as in the case of semidefinite matrices, is an elusive endeavour. To address this, we …
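For contrast, the semidefinite case the abstract alludes to does admit a closed-form projection: diagonalize and clip the negative eigenvalues. A short sketch of that baseline, assuming NumPy (the hyperbolicity-cone case treated in the paper has no such formula, hence the dual Frank-Wolfe scheme):

```python
import numpy as np

def project_psd(A):
    """Euclidean projection of a symmetric matrix onto the PSD cone.

    The projection keeps the eigenvectors and replaces each negative
    eigenvalue by zero; this is the closed-form expression available for
    semidefinite matrices but elusive for general hyperbolicity cones.
    """
    A = (A + A.T) / 2                          # enforce symmetry
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T

A = np.array([[0.0, 3.0], [3.0, -1.0]])
print(project_psd(A))                          # nearest PSD matrix to A
```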

Complexity of Adagrad and other first-order methods for nonconvex optimization problems with bounds and convex constraints

A parametric class of trust-region algorithms for constrained nonconvex optimization is analyzed, where the objective function is never computed. By defining appropriate first-order stationarity criteria, we are able to extend the Adagrad method to the newly considered problem and retrieve the standard complexity rate of the projected gradient method that uses both the gradient and …
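A generic sketch of what "Adagrad with projection" looks like for bound constraints, assuming a NumPy setting (this is the textbook scheme, not necessarily the paper's exact method; all parameter names are illustrative):

```python
import numpy as np

def projected_adagrad(grad, x0, lo, hi, eta=1.0, eps=1e-8, iters=100):
    """Adagrad iteration with projection onto the box lo <= x <= hi.

    Per-coordinate stepsizes shrink with the accumulated squared
    gradients; each iterate is clipped back into the feasible box,
    which is exactly the Euclidean projection for bound constraints.
    """
    x = x0.astype(float).copy()
    G = np.zeros_like(x)                       # accumulated squared gradients
    for _ in range(iters):
        g = grad(x)
        G += g * g
        x = x - eta * g / (np.sqrt(G) + eps)   # Adagrad scaling
        x = np.clip(x, lo, hi)                 # projection onto the box
    return x

# Example: minimize ||x - (2, -1)||^2 over the box [0, 1]^2
grad = lambda x: 2 * (x - np.array([2.0, -1.0]))
print(projected_adagrad(grad, np.array([0.5, 0.5]), 0.0, 1.0))  # -> (1, 0)
```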

Efficient Low-rank Identification via Accelerated Iteratively Reweighted Nuclear Norm Minimization

This paper considers the problem of minimizing the sum of a smooth function and the Schatten-\(p\) norm of a matrix. We propose accelerated iteratively reweighted nuclear norm methods for solving this nonconvex low-rank minimization problem. Two major novelties characterize our approach. First, the proposed method possesses a rank identification property, enabling …
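One reweighted step in the common IRNN pattern majorizes the Schatten-\(p\) term by a weighted nuclear norm and solves the subproblem in closed form by singular value thresholding. A hedged sketch of that single step (the paper's acceleration and rank-identification machinery are not reproduced; `irnn_step` and its parameters are illustrative):

```python
import numpy as np

def irnn_step(X, grad_f, L, p=0.5, eps=1e-6):
    """One iteratively reweighted nuclear norm step (generic sketch).

    The Schatten-p term is majorized at the current singular values
    sigma_i by a weighted nuclear norm with weights
    w_i = p * (sigma_i + eps)**(p - 1); the resulting subproblem is
    solved by weighted singular value soft-thresholding.
    """
    sigma = np.linalg.svd(X, compute_uv=False)
    w = p * (sigma + eps) ** (p - 1)        # reweighting from the majorizer
    Y = X - grad_f(X) / L                   # gradient step on the smooth part
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s = np.maximum(s - w / L, 0.0)          # weighted thresholding
    return U @ np.diag(s) @ Vt

# Example: f(X) = 0.5 * ||X - M||_F^2, so grad_f(X) = X - M and L = 1
rng = np.random.default_rng(0)
X0, M = rng.standard_normal((5, 5)), rng.standard_normal((5, 5))
X1 = irnn_step(X0, lambda X: X - M, L=1.0)
print(np.linalg.matrix_rank(X1))            # thresholding can drop the rank
```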

An Extended Validity Domain for Constraint Learning

We consider embedding a predictive machine-learning model within a prescriptive optimization problem. In this setting, called constraint learning, we study the concept of a validity domain, i.e., a constraint added to the feasible set, which keeps the optimization close to the training data, thus helping to ensure that the computed optimal solution exhibits less prediction …
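The simplest instance of a validity domain is the bounding box of the training data; the extended domains studied in the paper generalize this idea. An illustrative sketch, assuming NumPy (the `slack` parameter is hypothetical):

```python
import numpy as np

def box_validity_domain(X_train, slack=0.0):
    """Bounding-box validity domain for the training data.

    Adding lo <= x <= hi to the optimization keeps decisions inside the
    region where the embedded ML model was trained, so its predictions
    are less likely to be trusted far from the data.
    """
    lo = X_train.min(axis=0) - slack
    hi = X_train.max(axis=0) + slack
    return lo, hi

X_train = np.array([[0.1, 2.0], [0.4, 1.5], [0.2, 2.4]])
lo, hi = box_validity_domain(X_train, slack=0.05)
print(lo, hi)   # bounds to add as constraints on the decision variables
```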

A novel adaptive inertial algorithm for solving bilevel variational inequalities with pseudomonotone multivalued operators

This paper develops an adaptive inertial algorithm for solving bilevel variational inequalities with multivalued pseudomonotone operators in real Hilbert spaces and establishes its strong convergence. The algorithm requires no prior knowledge of the Lipschitz constants or strong monotonicity coefficients of the associated mappings; it incorporates inertial techniques and involves …
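To make the ingredients concrete, the sketch below is a single-valued, single-level simplification: inertial extrapolation followed by a projection step, with the stepsize adapted from observed operator changes instead of a known Lipschitz constant. This mimics the "no prior constants" feature but is not the paper's algorithm; all names and constants are illustrative:

```python
import numpy as np

def inertial_projection_vi(F, proj_C, x0, theta=0.3, lam=0.5, mu=0.9, iters=200):
    """Inertial projection sketch for the VI: find x in C with
    <F(x), y - x> >= 0 for all y in C (single-valued simplification)."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + theta * (x - x_prev)          # inertial extrapolation
        x_new = proj_C(y - lam * F(y))        # projection step
        # adaptive stepsize: shrink lam when F varies too fast locally
        dF = np.linalg.norm(F(x_new) - F(y))
        dx = np.linalg.norm(x_new - y)
        if dF > 1e-12 and lam * dF > mu * dx:
            lam = mu * dx / dF
        x_prev, x = x, x_new
    return x

# Example: F(x) = Ax + b with A positive definite, C = nonnegative orthant
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, -1.0])
sol = inertial_projection_vi(lambda x: A @ x + b, lambda x: np.maximum(x, 0.0),
                             np.zeros(2))
print(sol)   # approx. (1/3, 1/3), where Ax + b = 0
```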

Stackelberg Games with k-Submodular Function under Distributional Risk-Receptiveness and Robustness

We study submodular optimization in an adversarial context, applicable to machine learning problems such as feature selection using data susceptible to uncertainties and attacks. We focus on Stackelberg games between an attacker (or interdictor) and a defender, where the attacker aims to minimize the defender’s objective of maximizing a k-submodular function. We allow uncertainties arising …
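For orientation, the defender's inner problem is k-submodular maximization, for which greedy assignment is the standard baseline. A toy sketch with an illustrative diminishing-returns objective; the attacker and the distributional robustness layer are omitted:

```python
def greedy_k_submodular(f, ground, k):
    """Plain greedy for maximizing f over k disjoint subsets of ground.

    Each element is assigned to whichever of the k sets gives the
    largest marginal gain, or left out when no gain is positive.
    """
    S = [set() for _ in range(k)]
    for e in ground:
        gains = [f(S[:i] + [S[i] | {e}] + S[i + 1:]) - f(S) for i in range(k)]
        i_best = max(range(k), key=lambda i: gains[i])
        if gains[i_best] > 0:
            S[i_best].add(e)
    return S

# Toy objective: element rewards with a quadratic penalty per set size
weights = {1: 3.0, 2: 1.0, 3: 2.0}
f = lambda S: sum(weights[e] for s in S for e in s) - 0.5 * sum(len(s) ** 2 for s in S)
print(greedy_k_submodular(f, ground=[1, 2, 3], k=2))   # e.g. [{1, 3}, {2}]
```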

Exploiting Overlap Information in Chance-constrained Program with Random Right-hand Side

We consider the chance-constrained program (CCP) with random right-hand side under a finite discrete distribution. It is known that the standard mixed integer linear programming (MILP) reformulation of the CCP is generally difficult to solve by general-purpose solvers as the branch-and-cut search trees are enormously large, partly due to the weak linear programming relaxation. In …
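For reference, the standard MILP reformulation mentioned above takes, in one common big-M form,
\[
\min_{x,\,z}\; c^\top x \quad \text{s.t.}\quad Ax + M z_k \ge \xi^k,\;\; k = 1,\dots,K, \qquad \sum_{k=1}^{K} p_k z_k \le \varepsilon, \qquad z \in \{0,1\}^K,
\]
where \(\xi^1,\dots,\xi^K\) are the realizations of the random right-hand side with probabilities \(p_1,\dots,p_K\), setting \(z_k = 1\) allows scenario \(k\) to be violated, \(\varepsilon\) is the risk level, and \(M\) is a sufficiently large vector; the weak LP relaxation stems from these big-M constraints.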

Composite optimization models via proximal gradient method with increasing adaptive stepsizes

We first consider convex composite optimization models in which the gradient of the differentiable term is locally Lipschitz. We study the classical proximal gradient method with a new stepsize-selection strategy. The proposed stepsizes are given in explicit form and are thus convenient to compute. The sequence of our stepsizes is proved to …
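An adaptive stepsize of this flavor can be computed explicitly from observed gradient differences, with no line search. A hedged sketch in the spirit of (but not identical to) the paper's rule; all names are illustrative:

```python
import numpy as np

def prox_grad_adaptive(grad_f, prox_g, x0, iters=200):
    """Proximal gradient with an explicit, locally adaptive stepsize.

    The stepsize t is estimated from observed gradient differences,
    t ~ ||x_k - x_{k-1}|| / (2 ||grad_f(x_k) - grad_f(x_{k-1})||),
    a local inverse-Lipschitz estimate that needs no line search.
    """
    x_prev = x0.copy()
    g_prev = grad_f(x_prev)
    t = 1e-3                                   # small initial stepsize
    x = prox_g(x_prev - t * g_prev, t)
    for _ in range(iters):
        g = grad_f(x)
        dg = np.linalg.norm(g - g_prev)
        if dg > 1e-12:
            t = np.linalg.norm(x - x_prev) / (2.0 * dg)
        x_prev, g_prev = x, g
        x = prox_g(x - t * g, t)
    return x

# Example: lasso, f = 0.5 * ||Ax - b||^2 and g = 0.1 * ||x||_1
rng = np.random.default_rng(1)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - 0.1 * t, 0.0)
print(prox_grad_adaptive(grad_f, prox_g, np.zeros(5)))
```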

Robustness Analysis for Adaptive Optimization With Application to Industrial Decarbonization in the Netherlands

Robustness analysis assesses the performance of a particular solution under variation in the input data. This is distinct from sensitivity analysis, which assesses how variation in the input data changes a model’s optimal solution. For risk assessment purposes, robustness analysis has more practical value than sensitivity analysis. This is because sensitivity analysis, when applied to …
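The distinction can be made concrete in a few lines: robustness analysis re-evaluates a fixed solution on perturbed data, whereas sensitivity analysis would re-solve the model for each perturbation and compare the optimal solutions. A minimal sketch, with all names illustrative:

```python
import numpy as np

def robustness_check(evaluate, samples):
    """Re-evaluate a fixed solution under perturbed input data.

    evaluate(sample) returns the fixed solution's objective value under
    one data sample; the spread of these values is the robustness profile.
    """
    vals = np.array([evaluate(s) for s in samples])
    return vals.mean(), vals.std(), vals.max()

# Example: a fixed plan x evaluated under perturbed cost vectors
x = np.array([1.0, 0.5])
rng = np.random.default_rng(2)
samples = [np.array([2.0, 3.0]) + 0.1 * rng.standard_normal(2) for _ in range(100)]
print(robustness_check(lambda c: c @ x, samples))
```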