Minimal Residual Methods for Complex Symmetric, Skew Symmetric, and Skew Hermitian Systems

While there is no lack of efficient Krylov subspace solvers for Hermitian systems, there are few for complex symmetric, skew symmetric, or skew Hermitian systems, which are increasingly important in modern applications including quantum dynamics, electromagnetics, and power systems. For a large consistent complex symmetric system, one may apply a non-Hermitian Krylov subspace method disregarding … Read more
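
As a hedged illustration of the status quo the abstract describes, the sketch below solves a synthetic complex symmetric system with a general non-Hermitian Krylov solver (SciPy's GMRES), disregarding the symmetry A^T = A; structure-exploiting minimal residual methods of the kind the paper develops avoid exactly this waste.

```python
# Illustrative sketch, not the paper's method: solve a complex symmetric
# system A x = b with a general non-Hermitian Krylov solver, ignoring the
# structure A^T = A (note A^H != A, so A is not Hermitian).
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(0)
n = 200
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M + M.T + 10 * np.eye(n)        # complex symmetric, shifted for conditioning
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

x, info = gmres(A, b)               # general-purpose non-Hermitian solver
print(info, np.linalg.norm(A @ x - b))
```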

An interior proximal point method with phi-divergence for Equilibrium Problems

In this paper, we consider the general equilibrium problem on a closed convex set in a finite-dimensional space. To solve this problem, we develop an interior proximal point algorithm with phi-divergence. Under reasonable assumptions, we prove that the sequence generated by the algorithm converges to a solution of the equilibrium problem when the regularization … Read more
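
A minimal sketch of the flavor of method described, assuming the optimization special case of an equilibrium problem (minimizing a convex f over the positive orthant) and using the Kullback-Leibler divergence as the phi-divergence; the objective, names, and parameter values are illustrative, not the paper's.

```python
# Hedged sketch: interior proximal point iteration with a phi-divergence
# (here KL) regularizer, which keeps iterates strictly inside the orthant.
import numpy as np
from scipy.optimize import minimize

def f(x):                           # convex objective with interior minimizer
    return np.sum((x - np.array([2.0, 0.5]))**2)

def kl(x, y):                       # phi-divergence: Kullback-Leibler
    return np.sum(x * np.log(x / y) - x + y)

x = np.array([1.0, 1.0])            # strictly interior starting point
mu = 1.0                            # regularization parameter
for k in range(30):
    xk = x
    res = minimize(lambda z: f(z) + mu * kl(z, xk), x,
                   bounds=[(1e-10, None)] * 2)
    x = res.x
print(x)                            # approaches the minimizer (2, 0.5)
```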

Optimal Primal-Dual Methods for a Class of Saddle Point Problems

We present a novel accelerated primal-dual (APD) method for solving a class of deterministic and stochastic saddle point problems (SPP). The basic idea of this algorithm is to incorporate a multi-step acceleration scheme into the primal-dual method without smoothing the objective function. For deterministic SPP, the APD method achieves the same optimal rate of convergence … Read more
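
A hedged sketch of the kind of update scheme involved, for a concrete bilinear instance min_x max_{||y||<=1} <Kx - b, y> + (lam/2)||x||^2, i.e. min_x ||Kx - b|| + (lam/2)||x||^2. This shows a basic, non-accelerated primal-dual iteration; the paper's APD method layers a multi-step acceleration scheme on updates of this general shape.

```python
# Hedged sketch: basic primal-dual iteration for a bilinear saddle point
# problem (not the paper's accelerated APD variant).
import numpy as np

rng = np.random.default_rng(1)
m, n, lam = 30, 20, 0.1
K = rng.standard_normal((m, n))
b = rng.standard_normal(m)

L = np.linalg.norm(K, 2)            # operator norm of K
tau = sigma = 0.9 / L               # step sizes with tau * sigma * L^2 < 1
x = np.zeros(n); xbar = x.copy(); y = np.zeros(m)
for k in range(2000):
    y = y + sigma * (K @ xbar - b)
    y /= max(1.0, np.linalg.norm(y))                  # project onto unit ball
    x_new = (x - tau * (K.T @ y)) / (1 + tau * lam)   # prox of (lam/2)||x||^2
    xbar = 2 * x_new - x                              # extrapolation step
    x = x_new
print(np.linalg.norm(K @ x - b))
```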

Faster, but Weaker, Relaxations for Quadratically Constrained Quadratic Programs

We introduce a new relaxation framework for nonconvex quadratically constrained quadratic programs (QCQPs). In contrast to existing relaxations based on semidefinite programming (SDP), our relaxations incorporate features of both SDP and second order cone programming (SOCP) and, as a result, solve more quickly than SDP. A downside is that the calculated bounds are weaker than … Read more
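
To make the comparison concrete, here is the standard Shor SDP relaxation of a small synthetic QCQP, the baseline whose bounds the paper's faster SDP/SOCP hybrids are weaker than; a hedged sketch written with cvxpy, assuming an SDP-capable solver is installed.

```python
# Hedged sketch: Shor SDP relaxation of min x^T Q0 x s.t. ||x||^2 <= 1
# with indefinite Q0 (a nonconvex QCQP). X stands in for x x^T; the PSD
# block condition below relaxes the rank-one equality X = x x^T.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n = 5
G = rng.standard_normal((n, n))
Q0 = (G + G.T) / 2                  # symmetric, generically indefinite

X = cp.Variable((n, n), symmetric=True)
x = cp.Variable((n, 1))
constraints = [cp.bmat([[np.ones((1, 1)), x.T],
                        [x, X]]) >> 0,
               cp.trace(X) <= 1]    # relaxes x^T x <= 1
prob = cp.Problem(cp.Minimize(cp.trace(Q0 @ X)), constraints)
prob.solve()
print(prob.value)                   # lower bound on the nonconvex optimum
```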

Tail bounds for stochastic approximation

Stochastic-approximation gradient methods are attractive for large-scale convex optimization because they offer inexpensive iterations. They are especially popular in data-fitting and machine-learning applications where the data arrives in a continuous stream, or it is necessary to minimize large sums of functions. It is known that by appropriately decreasing the variance of the error at each … Read more
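
A hedged numerical sketch of the setting: stochastic gradient descent on a least squares sum, sampling one term per iteration as if the data arrived in a stream, with a classic O(1/k) decreasing step size; the constants are chosen for this synthetic instance, not taken from the paper.

```python
# Hedged sketch: stochastic approximation for f(x) = (1/m) sum_i (a_i^T x - b_i)^2.
import numpy as np

rng = np.random.default_rng(3)
m, n = 1000, 10
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.1 * rng.standard_normal(m)

x = np.zeros(n)
for k in range(1, 50001):
    i = rng.integers(m)                  # draw one data point from the "stream"
    g = 2 * (A[i] @ x - b[i]) * A[i]     # stochastic gradient of sampled term
    x -= g / (20.0 + k)                  # decreasing step size, O(1/k)
print(np.linalg.norm(x - x_true))        # error shrinks as the steps decay
```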

On the use of semi-closed sets and functions in convex analysis

The main aim of this short note is to show that the subdifferentiability and duality results established by Laghdir (2005), Laghdir and Benabbou (2007), and Alimohammady et al. (2011), stated in Fréchet spaces, are consequences of the corresponding known results using Moreau–Rockafellar type conditions.

A splitting minimization method on geodesic spaces

We present in this paper the alternating minimization method on CAT(0) spaces for solving unconstrained convex optimization problems. Under the assumption that the function being minimized is convex, we prove that the sequence generated by our algorithm converges to a minimizer. The results presented in this paper are the first ones of this type … Read more
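
A minimal sketch of alternating (splitting) minimization in Euclidean space, the simplest CAT(0) space, where each block update has a closed form; in a general CAT(0) space these averages become geodesic midpoints. The concrete objective is illustrative only.

```python
# Hedged sketch: alternate exact minimization over the two blocks of
# f(x, y) = ||x - a||^2 + ||x - y||^2 + ||y - b||^2.
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([0.0, 3.0])
x = np.zeros(2); y = np.zeros(2)
for k in range(50):
    x = (a + y) / 2.0               # argmin over x with y fixed
    y = (x + b) / 2.0               # argmin over y with x fixed
print(x, y)                         # converges to ((2a+b)/3, (a+2b)/3)
```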

Projection: A Unified Approach to Semi-Infinite Linear Programs and Duality in Convex Programming

Fourier-Motzkin elimination is a projection algorithm for solving finite linear programs. We extend Fourier-Motzkin elimination to semi-infinite linear programs, which are linear programs with finitely many variables and infinitely many constraints. Applying projection leads to new characterizations of important properties for primal-dual pairs of semi-infinite programs such as zero duality gap, feasibility, boundedness, and solvability. … Read more
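
The finite core of the approach fits in a few lines: a hedged sketch of one Fourier-Motzkin elimination step for a finite system A x <= c (the paper's semi-infinite extension handles infinitely many constraints; the function and variable names here are invented for illustration).

```python
# Hedged sketch: project {x : A x <= c} onto the coordinates other than x_j
# by pairing every upper bound on x_j with every lower bound.
import numpy as np

def fm_eliminate(A, c, j):
    pos = [i for i in range(len(c)) if A[i, j] > 0]   # upper bounds on x_j
    neg = [i for i in range(len(c)) if A[i, j] < 0]   # lower bounds on x_j
    zero = [i for i in range(len(c)) if A[i, j] == 0]
    rows, rhs = [A[i] for i in zero], [c[i] for i in zero]
    for p in pos:
        for q in neg:
            # Normalize the x_j coefficients to +1 and add the inequalities;
            # the x_j terms cancel (dividing by A[q, j] < 0 flips its sense).
            rows.append(A[p] / A[p, j] - A[q] / A[q, j])
            rhs.append(c[p] / A[p, j] - c[q] / A[q, j])
    return np.delete(np.array(rows), j, axis=1), np.array(rhs)

# x + y <= 4,  -x <= 0,  x - y <= 2  -->  eliminate x
A = np.array([[1.0, 1.0], [-1.0, 0.0], [1.0, -1.0]])
c = np.array([4.0, 0.0, 2.0])
print(fm_eliminate(A, c, 0))        # y <= 4 and -y <= 2, i.e. -2 <= y <= 4
```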

A Perturbed Sums of Squares Theorem for Polynomial Optimization and its Applications

We consider a property of positive polynomials on a compact set with a small perturbation. When applied to a Polynomial Optimization Problem (POP), the property implies that the optimal value of the corresponding SemiDefinite Programming (SDP) relaxation with sufficiently large relaxation order is bounded from below by $(f^\ast - \epsilon)$ and from above by $f^\ast … Read more
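
For reference, a hedged sketch of the unperturbed object the theorem concerns: the sum-of-squares SDP lower bound for a tiny univariate POP, written with cvxpy (an assumed dependency). In the univariate case the SOS bound already equals $f^\ast$; the abstract's perturbation result addresses the harder multivariate, constrained setting.

```python
# Hedged sketch: maximize gamma such that f - gamma is a sum of squares,
# for f(x) = x^4 - 3 x^2 + 1, by matching coefficients against
# [1, x, x^2] Q [1, x, x^2]^T with Q PSD.
import cvxpy as cp

Q = cp.Variable((3, 3), PSD=True)
gamma = cp.Variable()
constraints = [Q[0, 0] == 1 - gamma,          # constant term
               2 * Q[0, 1] == 0,              # x
               2 * Q[0, 2] + Q[1, 1] == -3,   # x^2
               2 * Q[1, 2] == 0,              # x^3
               Q[2, 2] == 1]                  # x^4
prob = cp.Problem(cp.Maximize(gamma), constraints)
prob.solve()
print(gamma.value)   # about -1.25 = min_x f(x), since f + 1.25 = (x^2 - 1.5)^2
```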

Gradient methods for convex minimization: better rates under weaker conditions

The convergence behavior of gradient methods for minimizing convex differentiable functions is one of the core questions in convex optimization. This paper shows that their well-known complexities can be achieved under conditions weaker than the commonly assumed ones. We relax the common gradient Lipschitz-continuity condition and strong convexity condition to ones that hold only over … Read more
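
A hedged sketch of the scheme whose rates are at stake: plain gradient descent with step size 1/L on a smooth convex quadratic. The paper's point is that familiar guarantees for this iteration survive when gradient Lipschitz continuity and strong convexity are only required over a restricted region rather than globally; the iteration itself is unchanged, and this instance is synthetic.

```python
# Hedged sketch: gradient descent with the standard 1/L step size on
# f(x) = 0.5 * ||A x - b||^2, whose gradient Lipschitz constant is ||A||^2.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
f = lambda x: 0.5 * np.linalg.norm(A @ x - b)**2
grad = lambda x: A.T @ (A @ x - b)

L = np.linalg.norm(A, 2)**2         # gradient Lipschitz constant
x = np.zeros(20)
for k in range(500):
    x -= grad(x) / L                # step size 1/L
print(f(x))                         # approaches the least squares optimum
```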