On the behavior of subgradient projections methods for convex feasibility problems in Euclidean spaces

We study subgradient projection methods for solving a convex feasibility problem with general convex sets (not necessarily hyperplanes or half-spaces) in the inconsistent case, and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user flexibility but gives a mathematical guarantee for the algorithm’s behavior in … Read more
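
As a minimal illustration of the kind of step such methods take (a sketch of a relaxed cyclic subgradient projection iteration, not the paper's self-adapting control rule; the sets, functions, and relaxation value below are hypothetical):

```python
import numpy as np

def subgradient_projection_step(x, f, subgrad, lam=1.0):
    """One relaxed subgradient projection step onto the level set {y : f(y) <= 0}.

    If f(x) <= 0 the point is already feasible for this set and is returned
    unchanged; otherwise x moves along the negative subgradient, scaled by the
    relaxation parameter lam (typically 0 < lam < 2).
    """
    fx = f(x)
    if fx <= 0:
        return x
    g = subgrad(x)
    return x - lam * fx / np.dot(g, g) * g

# Toy feasibility problem: intersect the unit ball with the half-space {x1 >= 0.5}.
f1 = lambda x: np.dot(x, x) - 1.0          # unit ball as a level set
g1 = lambda x: 2.0 * x                     # its gradient
f2 = lambda x: 0.5 - x[0]                  # half-space x1 >= 0.5
g2 = lambda x: np.array([-1.0, 0.0])

x = np.array([3.0, 2.0])
for _ in range(50):                        # cyclic sweeps over both sets
    x = subgradient_projection_step(x, f1, g1, lam=1.5)
    x = subgradient_projection_step(x, f2, g2, lam=1.5)
print(x)                                   # approaches a point of the intersection
```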

Parallel Space Decomposition of the Mesh Adaptive Direct Search algorithm

This paper describes a parallel space decomposition (PSD) technique for the mesh adaptive direct search (MADS) algorithm. MADS extends generalized pattern search to constrained nonsmooth optimization problems. The objective of the present work is to obtain good solutions to larger problems than those typically solved by MADS. The new PSD-MADS method is an … Read more

OrthoMADS: A deterministic MADS instance with orthogonal directions

The purpose of this paper is to introduce a new way of choosing directions for the mesh adaptive direct search (Mads) class of algorithms. The advantages of this new OrthoMads instantiation of Mads are that the polling directions are chosen deterministically, ensuring that the results of a given run are repeatable, and that they are … Read more
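
As a small illustration of how a single seed direction can be turned into an orthogonal positive spanning set of poll directions (a sketch in the spirit of the idea described above, not the paper's exact deterministic Halton-based construction):

```python
import numpy as np

def orthogonal_poll_set(v):
    """Build 2n orthogonal polling directions from a nonzero seed direction v.

    The Householder matrix H = I - 2 q q^T (q = v / ||v||) is orthogonal, so its
    columns are mutually orthogonal; the columns together with their negatives
    form a maximal positive basis of R^n usable as a poll set.
    """
    q = v / np.linalg.norm(v)
    H = np.eye(len(v)) - 2.0 * np.outer(q, q)
    return np.hstack([H, -H])              # n x 2n array of polling directions

D = orthogonal_poll_set(np.array([1.0, 2.0, -1.0]))
# The first three columns are pairwise orthogonal, and D positively spans R^3.
```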

First-order algorithm with $O(\ln(1/\epsilon))$ convergence for $\epsilon$-equilibrium in two-person zero-sum games

We propose an iterated version of Nesterov’s first-order smoothing method for the two-person zero-sum game equilibrium problem $$\min_{x\in Q_1} \max_{y\in Q_2} \langle x, Ay\rangle = \max_{y\in Q_2} \min_{x\in Q_1} \langle x, Ay\rangle.$$ This formulation applies to matrix games as well as sequential games. Our new algorithmic scheme computes an $\epsilon$-equilibrium to this min-max problem in $O(\kappa(A) \ln(1/\epsilon))$ first-order iterations, … Read more
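
For the matrix-game case, where $Q_1$ and $Q_2$ are probability simplices, a candidate pair is an $\epsilon$-equilibrium exactly when its saddle-point gap is at most $\epsilon$; a minimal check of that gap (the payoff matrix and strategies below are hypothetical):

```python
import numpy as np

def equilibrium_gap(A, x, y):
    """Saddle-point gap of mixed strategies (x, y) for min_x max_y <x, A y> over
    probability simplices; (x, y) is an epsilon-equilibrium iff the gap <= epsilon."""
    best_reply_to_x = np.max(A.T @ x)      # y's best-response value against x
    best_reply_to_y = np.min(A @ y)        # x's best-response value against y
    return best_reply_to_x - best_reply_to_y

A = np.array([[0.0, 1.0], [1.0, 0.0]])     # matching-pennies-like payoff matrix
x = y = np.array([0.5, 0.5])               # uniform mixed strategies
print(equilibrium_gap(A, x, y))            # 0.0: this pair is an exact equilibrium
```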

Smoothing techniques for computing Nash equilibria of sequential games

We develop first-order smoothing techniques for saddle-point problems that arise in the Nash equilibria computation of sequential games. The crux of our work is a construction of suitable prox-functions for a certain class of polytopes that encode the sequential nature of the games. An implementation based on our smoothing techniques computes approximate Nash equilibria for … Read more
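
As a point of reference (the simplex building block from matrix games, which the polytopes above generalize; this is not the paper's construction), the standard entropy prox-function and the smooth approximation of the max it induces are $$d(x) = \ln n + \sum_{i=1}^n x_i \ln x_i, \qquad \max_{x\in\Delta_n}\big\{\langle g, x\rangle - \mu\, d(x)\big\} = \mu \ln\Big(\tfrac{1}{n}\sum_{i=1}^n e^{g_i/\mu}\Big),$$ so the smoothed best response is a softmax with smoothing parameter $\mu$.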

A Newton-CG Augmented Lagrangian Method for Semidefinite Programming

We consider a Newton-CG augmented Lagrangian method for solving semidefinite programming (SDP) problems from the perspective of approximate semismooth Newton methods. In order to analyze the rate of convergence of our proposed method, we characterize the Lipschitz continuity of the corresponding solution mapping at the origin. For the inner problems, we show that the positive … Read more
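
For orientation (a generic statement, not necessarily the exact formulation analyzed in the paper), the augmented Lagrangian attached to the standard-form SDP $\min\{\langle C, X\rangle : \mathcal{A}(X) = b,\ X \succeq 0\}$ is $$L_\sigma(X, y) = \langle C, X\rangle - \langle y, \mathcal{A}(X) - b\rangle + \tfrac{\sigma}{2}\,\|\mathcal{A}(X) - b\|^2,$$ with each outer iteration approximately minimizing $L_\sigma(\cdot, y^k)$ over $X \succeq 0$ (the inner problem targeted by a semismooth Newton-CG method), followed by the multiplier update $y^{k+1} = y^k - \sigma(\mathcal{A}(X^{k+1}) - b)$.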

Duality of ellipsoidal approximations via semi-infinite programming

In this work, we develop duality of the minimum volume circumscribed ellipsoid and the maximum volume inscribed ellipsoid problems. We present a unified treatment of both problems using convex semi–infinite programming. We establish the known duality relationship between the minimum volume circumscribed ellipsoid problem and the optimal experimental design problem in statistics. The duality results … Read more
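
For concreteness, one common way to write the pair of problems whose duality is referred to above (normalizations and constants vary across references) is the circumscribed-ellipsoid problem $$\min_{c\in\mathbb{R}^n,\ H\succ 0}\ -\ln\det H \quad \text{s.t.}\quad (a^i - c)^\top H\,(a^i - c) \le 1,\ \ i=1,\dots,m,$$ whose dual, once the center is eliminated, is the D-optimal experimental design problem $$\max_{u\ge 0,\ \sum_i u_i = 1}\ \ln\det\Big(\sum_{i=1}^m u_i\, a^i (a^i)^\top - \Big(\sum_{i=1}^m u_i\, a^i\Big)\Big(\sum_{i=1}^m u_i\, a^i\Big)^{\!\top}\Big).$$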

An Algorithm and a Core Set Result for the Weighted Euclidean One-Center Problem

Given ${\cal A} := \{a^1,\ldots,a^m\} \subset \mathbb{R}^n$ with corresponding positive weights ${\cal W} := \{\omega_1,\ldots,\omega_m\}$, the weighted Euclidean one-center problem, which is a generalization of the minimum enclosing ball problem, involves the computation of a point $c_{\cal A} \in \mathbb{R}^n$ that minimizes the maximum weighted Euclidean distance from $c_{\cal A}$ to each point in ${\cal … Read more
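
A brute-force baseline for this problem (not the core-set algorithm of the paper) simply evaluates the nonsmooth objective $\max_i \omega_i \|c - a^i\|$ and minimizes it with a generic derivative-free solver; the small instance below is hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])   # the points a^i
w = np.array([1.0, 2.0, 1.5])                         # their positive weights

def objective(c):
    """Maximum weighted Euclidean distance from the candidate center c."""
    return np.max(w * np.linalg.norm(A - c, axis=1))

# Generic nonsmooth minimization as a sanity check, not the paper's method.
res = minimize(objective, x0=A.mean(axis=0), method="Nelder-Mead")
print(res.x, objective(res.x))
```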

Kernel Support Vector Regression with imprecise output

We consider a regression problem where uncertainty affects the dependent variable of the elements of the database. A model based on the standard $\epsilon$-Support Vector Regression approach is given, in which two hyperplanes need to be constructed to predict the interval-valued dependent variable. By using the Hausdorff distance to measure the error between predicted and … Read more
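
For real intervals the Hausdorff distance mentioned above has a simple closed form, which is what makes it convenient as an error measure; a minimal helper (the function name is ours):

```python
import numpy as np

def interval_hausdorff(lo_pred, up_pred, lo_true, up_true):
    """Hausdorff distance between the intervals [lo_pred, up_pred] and [lo_true, up_true].

    For intervals on the real line it reduces to the larger of the two endpoint
    errors, so the lower and upper predictors can be penalized separately.
    """
    return np.maximum(np.abs(lo_pred - lo_true), np.abs(up_pred - up_true))

print(interval_hausdorff(1.0, 3.0, 1.5, 2.0))   # max(0.5, 1.0) = 1.0
```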

Probing the Pareto frontier for basis pursuit solutions

The basis pursuit problem seeks a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise (BPDN) fits the least-squares problem only approximately, and a single parameter determines a curve that traces the optimal trade-off between the least-squares fit and the one-norm of the solution. We prove that this curve is convex and continuously … Read more
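
A crude way to probe the curve described above is simply to sample it: for each radius $\tau$, solve the LASSO subproblem $\min\{\|Ax - b\|_2 : \|x\|_1 \le \tau\}$ and record the optimal misfit. The sketch below does this with projected gradient descent; it is an illustrative probe under our own naming, not the root-finding scheme of the paper.

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto the l1-ball of radius tau > 0."""
    if np.abs(v).sum() <= tau:
        return v
    u = np.sort(np.abs(v))[::-1]                      # sorted magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - tau))[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def pareto_point(A, b, tau, steps=500):
    """Approximate phi(tau) = min { ||Ax - b||_2 : ||x||_1 <= tau } by projected gradient.

    Sampling phi over a grid of tau values traces an approximation of the
    trade-off curve; this is a brute-force probe, not an efficient method.
    """
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant of the gradient
    for _ in range(steps):
        x = project_l1_ball(x - step * A.T @ (A @ x - b), tau)
    return np.linalg.norm(A @ x - b)

A = np.random.default_rng(0).standard_normal((20, 50))
b = A[:, :3] @ np.array([1.0, -2.0, 0.5])             # sparse ground truth
curve = [(tau, pareto_point(A, b, tau)) for tau in np.linspace(0.5, 4.0, 8)]
```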