$\varepsilon$-Subdifferential of Set-valued Maps and Its Application

In this paper, firstly, the concept of $\varepsilon$-strictly efficient subdifferential for set-valued maps is introduced in Hausdorff locally convex topological vector spaces. Secondly, a characterization of this subdifferential by scalarization and a generalized $\varepsilon$-Moreau-Rockafellar type theorem for set-valued maps are established. Finally, a necessary optimality condition of the constrained set-valued optimization problem for $\varepsilon$-… Read more
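For reference, the classical scalar $\varepsilon$-subdifferential that such set-valued notions generalize is standard background (not the paper's new definition): for a proper convex function $f$ on a locally convex space $X$ with topological dual $X^*$ and $\varepsilon\ge 0$,
\[\partial_\varepsilon f(\bar x)=\{x^*\in X^*:\; f(x)\ge f(\bar x)+\langle x^*,x-\bar x\rangle-\varepsilon\quad\forall\,x\in X\},\]
and the Moreau-Rockafellar theorem expresses the subdifferential of a sum $f+g$ through those of $f$ and $g$; the paper establishes an $\varepsilon$-analogue of this for set-valued maps.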

On the Step Size of Symmetric Alternating Directions Method of Multipliers

The alternating direction method of multipliers (ADMM) is an application of the Douglas-Rachford splitting method, and the symmetric version of ADMM, which updates the Lagrange multiplier twice at each iteration, is an application of the Peaceman-Rachford splitting method. The symmetric ADMM often works well empirically, but its convergence is not guaranteed in theory. It was recently found … Read more
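For orientation, here is a minimal sketch of the symmetric ADMM iteration on a toy consensus problem; the problem data, the penalty parameter beta, and the step sizes r and s are illustrative choices (admissible step-size ranges are precisely what the paper analyzes), and r = s = 1 recovers the Peaceman-Rachford case.

```python
import numpy as np

def symmetric_admm(a, d, beta=1.0, r=0.9, s=0.9, iters=100):
    """Symmetric ADMM sketch for  min 0.5*||x-a||^2 + 0.5*||z-d||^2  s.t.  x = z.

    The Lagrange multiplier lam is updated twice per iteration,
    with step sizes r (after the x-step) and s (after the z-step).
    """
    x = np.zeros_like(a)
    z = np.zeros_like(d)
    lam = np.zeros_like(a)
    for _ in range(iters):
        # x-step: minimize the augmented Lagrangian in x (closed form here)
        x = (a - lam + beta * z) / (1.0 + beta)
        # first (intermediate) multiplier update, step size r
        lam = lam + r * beta * (x - z)
        # z-step: minimize the augmented Lagrangian in z (closed form here)
        z = (d + lam + beta * x) / (1.0 + beta)
        # second multiplier update, step size s
        lam = lam + s * beta * (x - z)
    return x, z, lam

a = np.array([1.0, 2.0])
d = np.array([3.0, 0.0])
x, z, _ = symmetric_admm(a, d)
print(x, z)  # both approach (a + d) / 2 = [2.0, 1.0]
```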

A MAX-CUT formulation of 0/1 programs

We consider the linear or quadratic 0/1 program \[P:\quad f^*=\min\{\,c^Tx+x^TFx \;:\; A\,x = b;\; x\in\{0,1\}^n\,\},\] for some vectors $c\in R^n$, $b\in Z^m$, some matrix $A\in Z^{m\times n}$ and some real symmetric matrix $F\in R^{n\times n}$. We show that $P$ can be formulated as a MAX-CUT problem whose quadratic form criterion is explicit from the data of … Read more
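The paper's MAX-CUT reformulation is explicit from the problem data; the sketch below only illustrates one standard route toward such a form (penalizing $Ax=b$ and switching to variables $y\in\{-1,1\}^n$ via $x=(1+y)/2$, then absorbing the linear term with one extra $\pm 1$ variable), which need not coincide with the authors' construction. The penalty weight rho is an illustrative choice.

```python
import numpy as np

def plus_minus_one_form(c, F, A, b, rho=10.0):
    """Penalise Ax = b and substitute x = (1 + y) / 2 with y in {-1, +1}^n.

    Returns (Q, q, const) such that  y^T Q y + q^T y + const
    equals  c^T x + x^T F x + rho * ||Ax - b||^2  for binary x.
    """
    n = len(c)
    ones = np.ones(n)
    G = F + rho * A.T @ A           # quadratic part after the penalty
    h = c - 2.0 * rho * A.T @ b     # linear part after the penalty
    Q = G / 4.0
    q = 0.5 * (G @ ones) + 0.5 * h
    const = 0.25 * ones @ G @ ones + 0.5 * h @ ones + rho * b @ b
    return Q, q, const

def homogenise(Q, q):
    """Absorb the linear term with one extra +/-1 variable y0, giving the pure
    quadratic form [y; y0]^T Qext [y; y0] -- the shape of a MAX-CUT objective."""
    n = Q.shape[0]
    Qext = np.zeros((n + 1, n + 1))
    Qext[:n, :n] = Q
    Qext[:n, n] = q / 2.0
    Qext[n, :n] = q / 2.0
    return Qext

# quick consistency check on a random (hypothetical) instance
rng = np.random.default_rng(0)
n, m = 5, 2
c = rng.normal(size=n)
F = rng.normal(size=(n, n)); F = (F + F.T) / 2
A = rng.integers(-2, 3, size=(m, n)).astype(float)
b = A @ (rng.random(n) < 0.5).astype(float)
Q, q, const = plus_minus_one_form(c, F, A, b)
x = (rng.random(n) < 0.5).astype(float)
y = 2 * x - 1
lhs = c @ x + x @ F @ x + 10.0 * np.sum((A @ x - b) ** 2)
rhs = y @ Q @ y + q @ y + const
assert np.isclose(lhs, rhs)
```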

A Constraint-reduced Algorithm for Semidefinite Optimization Problems using HKM and AHO directions

We develop a new constraint-reduced infeasible predictor-corrector interior point method for semidefinite programming, and we prove that it has polynomial global convergence and superlinear local convergence. The new algorithm uses the HKM direction in the predictor step and the AHO direction in the corrector step to approach the central path more quickly. In contrast to the … Read more
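As standard background (not a result of the paper), both directions belong to the Monteiro-Zhang family obtained by symmetrizing the complementarity equation $XS=\mu I$ with a nonsingular scaling matrix $P$:
\[H_P(M)=\tfrac{1}{2}\left(PMP^{-1}+(PMP^{-1})^{T}\right),\qquad H_P(XS)=\mu I,\]
where the AHO direction corresponds to the choice $P=I$ and the HKM direction to $P=S^{1/2}$.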

A New Method for Optimizing a Linear Function over the Efficient Set of a Multiobjective Integer Program

We present a new algorithm for optimizing a linear function over the set of efficient solutions of a multiobjective integer program (MOIP). The algorithm’s success relies on the efficiency of a new algorithm for enumerating the nondominated points of a MOIP, which results from employing a novel criterion space decomposition scheme that (1) … Read more
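The criterion space decomposition scheme is the paper's core contribution and is not reproduced here; the snippet below only illustrates the underlying notion of nondominated points of a minimization MOIP, applied to hypothetical enumerated criterion vectors.

```python
import numpy as np

def nondominated(points):
    """Return the nondominated rows of `points` (all objectives minimized).

    A point p is dominated if some other point q satisfies q <= p
    componentwise with q != p.
    """
    pts = np.asarray(points)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return pts[keep]

# hypothetical criterion vectors of feasible solutions
Y = [(4, 1), (3, 3), (2, 2), (1, 4), (3, 2)]
print(nondominated(Y))  # [[4 1] [2 2] [1 4]]; (3,3) and (3,2) are dominated by (2,2)
```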

Global convergence rate analysis of unconstrained optimization methods based on probabilistic models

We present global convergence rates for a line-search method which is based on random first-order models and directions whose quality is ensured only with a certain probability. We show that, in terms of the order of accuracy, the evaluation complexity of such a method is the same as that of its counterparts that use deterministic accurate models; … Read more
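To convey the flavor of such a framework (this is a toy illustration, not the paper's algorithm), the sketch below runs a backtracking line search whose search direction comes from a model gradient that is accurate only with probability p, while function values are treated as exact; the test problem and all parameters are illustrative.

```python
import numpy as np

def probabilistic_line_search(f, grad, x0, p=0.8, alpha0=1.0, c1=1e-4,
                              tau=0.5, iters=200, seed=0):
    """Backtracking (Armijo) line search driven by a randomly accurate model:
    with probability p the model gradient equals the true gradient, otherwise
    it is an arbitrary vector. Inaccurate directions merely waste that
    iteration's backtracking attempts."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g_true = grad(x)
        g_model = g_true if rng.random() < p else rng.normal(size=x.shape)
        d = -g_model
        alpha, fx = alpha0, f(x)
        # Armijo sufficient-decrease test against the (exact) objective
        while alpha > 1e-12 and f(x + alpha * d) > fx + c1 * alpha * (g_model @ d):
            alpha *= tau
        if alpha > 1e-12:
            x = x + alpha * d
    return x

# quadratic test problem with minimizer (1, -2)
f = lambda x: 0.5 * ((x[0] - 1) ** 2 + (x[1] + 2) ** 2)
grad = lambda x: np.array([x[0] - 1, x[1] + 2])
print(probabilistic_line_search(f, grad, [5.0, 5.0]))  # approx [1, -2]
```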

A Constraint-Reduced Algorithm for Semidefinite Optimization Problems with Superlinear Convergence

Constraint reduction is a valuable technique because it can substantially reduce the computational cost of interior point methods. Park and O’Leary proposed a constraint-reduced predictor-corrector algorithm for semidefinite programming with polynomial global convergence, but they did not show its superlinear convergence. We first develop a constraint-reduced algorithm for semidefinite programming having both polynomial global … Read more
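To see where the savings come from, consider the simpler linear-programming setting (not the paper's SDP setting): the dominant cost per interior point iteration is assembling a normal-equations matrix, and constraint reduction builds it from only a working subset of constraints, for instance those with the smallest slacks. A minimal sketch, with illustrative choices throughout:

```python
import numpy as np

def reduced_normal_matrix(A, x, s, k):
    """Normal-equations matrix of an LP interior point step, assembled from
    only the k constraints with the smallest slacks s (the working set Q).

    Full matrix:    A @ diag(x / s) @ A.T            -- cost grows with all n columns
    Reduced matrix: A[:, Q] @ diag(...) @ A[:, Q].T  -- cost grows with |Q| = k only
    """
    Q = np.argsort(s)[:k]              # most nearly active constraints
    AQ = A[:, Q]
    M = AQ @ np.diag(x[Q] / s[Q]) @ AQ.T
    return M, Q
```

In the semidefinite case the reduced system and the choice of working set are more delicate; that is exactly what the constraint-reduced SDP algorithms above address.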

Stability of a Regularized Newton Method with Two Potentials

In a Hilbert space setting, we study the stability properties of the regularized continuous Newton method with two potentials, which aims at solving inclusions governed by structured monotone operators. The Levenberg-Marquardt regularization term acts in an open-loop way. As a byproduct of our study, the regularization coefficient can be taken to be of bounded variation. These … Read more
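The paper analyses a continuous-time Newton dynamic in a Hilbert space; purely for intuition, the discrete analogue of a Levenberg-Marquardt regularized Newton step for a smooth potential looks as follows (the test function and the open-loop schedule for the regularization coefficient are illustrative).

```python
import numpy as np

def lm_newton(grad, hess, x0, lam0=1.0, decay=0.9, iters=50):
    """Levenberg-Marquardt regularized Newton iteration for solving grad(x) = 0:
    each step solves (hess(x) + lam * I) d = -grad(x); lam > 0 keeps the linear
    system well posed even where the Hessian is singular."""
    x = np.asarray(x0, dtype=float)
    lam = lam0
    for _ in range(iters):
        H = hess(x)
        d = np.linalg.solve(H + lam * np.eye(len(x)), -grad(x))
        x = x + d
        lam *= decay   # illustrative open-loop (pre-specified) schedule for lam
    return x

# smooth convex potential f(x) = log(exp(x1) + exp(x2)) + 0.5 * ||x||^2
def grad(x):
    w = np.exp(x) / np.exp(x).sum()
    return w + x

def hess(x):
    w = np.exp(x) / np.exp(x).sum()
    return np.diag(w) - np.outer(w, w) + np.eye(len(x))

print(lm_newton(grad, hess, [2.0, -1.0]))  # converges to the unique minimizer of f
```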

A Binarisation Heuristic for Non-Convex Quadratic Programming with Box Constraints

Non-convex quadratic programming with box constraints is a fundamental problem in the global optimization literature, being one of the simplest NP-hard nonlinear programs. We present a new heuristic for this problem, which enables one to obtain solutions of excellent quality in reasonable computing times. The heuristic consists of four phases: binarisation, convexification, branch-and-bound, and local … Read more
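The four phases are only named in the abstract; as a loose illustration of what a binarisation phase can look like (not necessarily the authors' scheme), a box-constrained variable $x\in[l,u]$ can be approximated through a dyadic expansion over binary variables, as sketched below.

```python
import numpy as np

def binarise(x, lo, hi, K=8):
    """Approximate x in [lo, hi] by  lo + (hi - lo) * sum_k 2^{-k} z_k  with
    z_k in {0, 1}: one possible binarisation of a box-constrained variable
    (the paper's actual scheme may differ)."""
    t = np.clip((x - lo) / (hi - lo), 0.0, 1.0 - 2.0 ** -K)
    z = [int(t * 2 ** k) % 2 for k in range(1, K + 1)]
    value = lo + (hi - lo) * sum(zk * 2.0 ** -k for k, zk in enumerate(z, start=1))
    return z, value

print(binarise(0.3, 0.0, 1.0))  # bits [0,1,0,0,1,1,0,0] and the approximation 0.296875
```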

Distributionally Robust Optimization with Matrix Moment Constraints: Lagrange Duality and Cutting Plane Methods

A key step in solving minimax distributionally robust optimization (DRO) problems is to reformulate the inner maximization with respect to the probability measure as a semi-infinite programming problem through Lagrange duality. Slater-type conditions have been widely used to guarantee a zero dual gap when the ambiguity set is defined through moments. In this paper, we investigate effective ways for … Read more
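As background on the duality step, in a standard form that may differ in detail from the paper's matrix moment setting: for an inner problem $\sup_{P}\, E_P[f(\xi)]$ subject to $E_P[\Psi(\xi)]\preceq M$ over probability measures $P$ on $\Xi$, the Lagrange dual is the semi-infinite program
\[\inf_{t\in R,\;\Lambda\succeq 0}\; t+\langle \Lambda,M\rangle \quad\text{s.t.}\quad t+\langle \Lambda,\Psi(\xi)\rangle \ge f(\xi)\quad\forall\,\xi\in\Xi,\]
whose semi-infinite constraint is what a cutting plane method approximates by finitely many cuts.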