Superlinearly convergent smoothing Newton continuation algorithms for variational inequalities over definable sets

In this paper, we use the concept of barrier-based smoothing approximations introduced by Chua and Li to extend various smoothing Newton continuation algorithms to variational inequalities over general closed convex sets X. We prove that when the underlying barrier has a gradient map that is definable in some o-minimal structure, the iterates generated converge superlinearly … Read more

On QPCCs, QCQPs and Completely Positive Programs

This paper studies several classes of nonconvex optimization problems defined over convex cones, establishing connections between them and demonstrating that they can be equivalently formulated as convex completely positive programs. The problems studied include quadratically constrained quadratic programs (QCQPs), quadratic programs with complementarity constraints (QPCCs), and rank-constrained semidefinite programs. Our results … Read more

A Relaxed-Projection Splitting Algorithm for Variational Inequalities in Hilbert Spaces

We introduce a relaxed-projection splitting algorithm for solving variational inequalities in Hilbert spaces involving the sum of nonsmooth maximal monotone operators, where the feasible set is defined by a nonlinear, nonsmooth, continuous convex inequality constraint. In our scheme, the orthogonal projections onto the feasible set are replaced by projections onto separating hyperplanes. Furthermore, each … Read more
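The separating-hyperplane idea can be sketched concretely: at an infeasible point x with g(x) > 0 and a subgradient s in the subdifferential of g at x, the linearization defines a halfspace separating x from the feasible set {g <= 0}, and projecting onto that halfspace has a closed form. This is a minimal illustration of the relaxed-projection idea, not the paper's algorithm; the helper name and the example constraint are our own.

```python
import numpy as np

def halfspace_projection(x, g_val, subgrad):
    """Project x onto the halfspace {y : g_val + <subgrad, y - x> <= 0},
    a separating halfspace for the level set {g <= 0} built at x.
    Hypothetical helper: g_val = g(x), subgrad a subgradient of g at x."""
    if g_val <= 0:                     # x already lies in the halfspace
        return x
    return x - (g_val / np.dot(subgrad, subgrad)) * subgrad

# Example: g(x) = ||x||_2 - 1 (unit ball). At x = (2, 0): g(x) = 1,
# subgradient = x / ||x||_2 = (1, 0).
x = np.array([2.0, 0.0])
y = halfspace_projection(x, np.linalg.norm(x) - 1.0, x / np.linalg.norm(x))
# y lands on the boundary of the ball in this symmetric example
```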

On the irreducibility, Lyapunov rank, and automorphisms of special Bishop-Phelps cones

Motivated by optimization considerations, we consider special Bishop-Phelps cones in R^n which are of the form {(t,x): t \geq ||x||} for some norm on R^(n-1). We show that for n > 2, such cones are always irreducible. Defining the Lyapunov rank of a proper cone K as the dimension of the Lie algebra of … Read more
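For concreteness, membership in such a cone is easy to test numerically. The sketch below fixes the l1 norm on R^(n-1) (the construction allows any norm; this choice is ours) and checks membership on sample points, along with the convex-cone property that the set is closed under nonnegative scaling and addition.

```python
import numpy as np

def in_bp_cone(v):
    """Membership test for the Bishop-Phelps cone {(t, x) : t >= ||x||}
    in R^n, here with the l1 norm on R^(n-1) (norm choice is illustrative)."""
    t, x = v[0], v[1:]
    return t >= np.linalg.norm(x, ord=1)

u = np.array([3.0, 1.0, -1.5])    # ||x||_1 = 2.5 <= 3, so u is in the cone
w = np.array([2.0, 2.0, 0.5])     # ||x||_1 = 2.5 > 2,  so w is not
# Convex-cone checks on these samples:
scaled_ok = in_bp_cone(2.5 * u)
summed_ok = in_bp_cone(u + np.array([1.0, 0.2, 0.3]))
```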

Complementarity Formulations of l0-norm Optimization Problems

In a number of application areas, it is desirable to obtain sparse solutions. Minimizing the number of nonzeros of the solution (its l0-norm) is a difficult nonconvex optimization problem, and it is often approximated by the convex problem of minimizing the l1-norm. In contrast, we consider exact formulations as mathematical programs with complementarity constraints and their … Read more
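One way such an exact complementarity formulation can look (our sketch, not necessarily the paper's exact model): ||x||_0 = min { 1^T(1 - xi) : 0 <= xi <= 1, xi_i * |x_i| = 0 for all i }, since the complementarity constraint forces xi_i = 0 on every nonzero entry of x, and the minimizer sets xi_i = 1 exactly on the zero entries.

```python
import numpy as np

def l0_via_complementarity(x, tol=1e-12):
    """Evaluate the complementarity characterization of the l0-norm sketched
    above: minimize 1^T(1 - xi) over 0 <= xi <= 1 with xi_i * |x_i| = 0.
    The optimal xi has a closed form: the indicator of the zero entries."""
    xi = (np.abs(x) <= tol).astype(float)    # optimal xi
    assert np.all(xi * np.abs(x) <= tol)     # complementarity holds
    return int(np.sum(1.0 - xi))             # equals the number of nonzeros

x = np.array([0.0, 3.0, 0.0, -1.0, 0.0])
l0 = l0_via_complementarity(x)   # exact count of nonzeros
l1 = np.abs(x).sum()             # the convex l1 surrogate
```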

Local Convergence of the Method of Multipliers for Variational and Optimization Problems under the Sole Noncriticality Assumption

We present local convergence analysis of the method of multipliers for equality-constrained variational problems (in the special case of optimization, also called the augmented Lagrangian method) under the sole assumption that the dual starting point is close to a noncritical Lagrange multiplier (which is weaker than second-order sufficiency). Local superlinear convergence is established under the … Read more
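The method of multipliers iteration can be sketched on a toy equality-constrained quadratic program; the problem, the penalty parameter, and the closed-form inner minimization below are illustrative choices of ours, not the paper's setting.

```python
import numpy as np

# Method of multipliers sketch for  min (1/2)||x||^2  s.t.  a^T x = b.
# Each outer step minimizes the augmented Lagrangian in x (closed form
# for this quadratic toy problem), then updates the multiplier.
a = np.array([1.0, 2.0])
b = 3.0
r = 10.0          # penalty parameter (assumed value)
lam = 0.0         # dual starting point
for _ in range(20):
    # x-step: argmin_x (1/2)||x||^2 + lam*(a@x - b) + (r/2)*(a@x - b)^2
    x = ((r * b - lam) / (1.0 + r * (a @ a))) * a
    # multiplier update
    lam = lam + r * (a @ x - b)
# Exact solution: x* = b*a/||a||^2 = (0.6, 1.2), lam* = -b/||a||^2 = -0.6
```

For this strongly convex problem the dual iterates contract with factor 1/(1 + r||a||^2) per step, so 20 iterations already reach the exact multiplier to machine precision.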

A hybrid proximal extragradient self-concordant primal barrier method for monotone variational inequalities

In this paper we present a primal interior-point hybrid proximal extragradient (HPE) method for solving a monotone variational inequality over a closed convex set endowed with a self-concordant barrier and whose underlying map has Lipschitz continuous derivative. In contrast to the method of “R. D. C. Monteiro and B. F. Svaiter, Iteration-Complexity of a Newton … Read more

A game-theoretic approach to computation offloading in mobile cloud computing

We consider a three-tier architecture for mobile and pervasive computing scenarios, consisting of a local tier of mobile nodes; a middle tier (cloudlets) of nearby computing nodes, typically located at the mobile nodes' access points but characterized by a limited amount of resources; and a remote tier of distant cloud servers, which have practically infinite … Read more

Inverse Parametric Optimization with an Application to Hybrid System Control

We present a number of results on inverse parametric optimization and its application to hybrid system control. We show that any function that can be written as the difference of two convex functions can also be written as a linear mapping of the solution to a convex parametric optimization problem. We exploit these results in … Read more

A direct splitting method for nonsmooth variational inequalities

We propose a direct splitting method for solving nonsmooth variational inequality problems in Hilbert spaces. Weak convergence is established when the operator is the sum of two point-to-set monotone operators. The proposed method is a natural extension of the incremental subgradient method for nondifferentiable optimization, which strongly exploits the structure of the operator … Read more
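The incremental flavor can be illustrated with a one-dimensional toy (our own example on R with diminishing stepsizes, not the paper's Hilbert-space method): minimizing f1 + f2 by cycling subgradient steps through one summand at a time.

```python
import numpy as np

# Incremental subgradient sketch for  min_x f1(x) + f2(x)
# with f1 = |x - 1|, f2 = |x + 1|; any x in [-1, 1] is optimal, value 2.
def subgrad_abs(x, c):
    """A subgradient of |x - c| (0 is a valid choice at the kink)."""
    return float(np.sign(x - c))

x = 5.0
for k in range(1, 201):
    alpha = 1.0 / k                       # diminishing stepsize (our choice)
    x -= alpha * subgrad_abs(x, 1.0)      # step through f1 only
    x -= alpha * subgrad_abs(x, -1.0)     # then through f2 only
f = abs(x - 1.0) + abs(x + 1.0)           # close to the optimal value 2
```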