On String-Averaging for Sparse Problems and On the Split Common Fixed Point Problem

We review the common fixed point problem for the class of directed operators. This class is important because many commonly used nonlinear operators in convex optimization belong to it. We present our recent definition of sparseness of a family of operators and discuss a string-averaging algorithmic scheme that favorably handles the common fixed point problem …
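
As a rough illustration only, and not the authors' exact scheme: a string-averaging iteration partitions the operator indices into "strings", applies the operators of each string sequentially, and then averages the string end-points. A minimal sketch in Python, with hypothetical half-space projections standing in for the directed operators:

    import numpy as np

    def string_averaging_step(x, operators, strings, weights):
        """One string-averaging iteration: apply the operators of each string
        sequentially to x, then take a convex combination of the end-points.
        `operators` is a list of callables, `strings` a list of index lists,
        `weights` are nonnegative and sum to one (illustrative choices)."""
        endpoints = []
        for s in strings:
            y = x
            for i in s:                     # sequential sweep along the string
                y = operators[i](y)
            endpoints.append(y)
        return sum(w * y for w, y in zip(weights, endpoints))

    # Hypothetical example: projections onto two half-spaces in R^2
    P1 = lambda x: x - max(0.0, x[0] - 1.0) * np.array([1.0, 0.0])  # {x1 <= 1}
    P2 = lambda x: x - max(0.0, x[1] - 1.0) * np.array([0.0, 1.0])  # {x2 <= 1}
    x = np.array([3.0, 2.0])
    for _ in range(20):
        x = string_averaging_step(x, [P1, P2], strings=[[0, 1], [1, 0]],
                                  weights=[0.5, 0.5])
    print(x)  # approaches the common fixed point set {x1 <= 1, x2 <= 1}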

Seminorm-induced oblique projections for sparse nonlinear convex feasibility problems

Simultaneous subgradient projection algorithms for the convex feasibility problem use subgradient calculations and converge sometimes even in the inconsistent case. We devise an algorithm that uses seminorm-induced oblique projections onto super half-spaces of the convex sets, which is advantageous when the subgradient-Jacobian is a sparse matrix at many iteration points of the algorithm. Using generalized …
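
For orientation, and not the seminorm-induced oblique projections of the paper itself: a plain simultaneous subgradient projection step for a feasibility problem c_i(x) <= 0 replaces each set by the half-space obtained from a subgradient linearization, projects onto it, and averages the results. A hypothetical sketch:

    import numpy as np

    def subgradient_halfspace_projection(x, c_val, g):
        """Project x onto the half-space {y : c_val + g.(y - x) <= 0},
        which contains the level set {c <= 0} when c is convex."""
        if c_val <= 0.0:
            return x                       # already feasible for this constraint
        return x - (c_val / np.dot(g, g)) * g

    def simultaneous_step(x, constraints, weights):
        """One simultaneous (averaged) subgradient projection step.
        Each entry of `constraints` returns (value, subgradient) at x."""
        projs = []
        for c in constraints:
            val, grad = c(x)
            projs.append(subgradient_halfspace_projection(x, val, grad))
        return sum(w * p for w, p in zip(weights, projs))

    # Hypothetical example: two convex constraints in R^2
    c1 = lambda x: (x[0] ** 2 + x[1] ** 2 - 1.0, 2.0 * x)       # unit disc
    c2 = lambda x: (x[0] + x[1] - 1.0, np.array([1.0, 1.0]))    # half-plane
    x = np.array([2.0, 2.0])
    for _ in range(50):
        x = simultaneous_step(x, [c1, c2], weights=[0.5, 0.5])
    print(x)  # approaches the intersection of the disc and the half-plane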

An Augmented Lagrangian Approach for Sparse Principal Component Analysis

Principal component analysis (PCA) is a widely used technique for data analysis and dimension reduction with numerous applications in science and engineering. However, the standard PCA suffers from the fact that the principal components (PCs) are usually linear combinations of all the original variables, and it is thus often difficult to interpret the PCs. To …
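
The excerpt breaks off before the method itself. Purely for context, and not necessarily the formulation solved in the paper, a common way to make the leading component sparse is to add an l1 penalty to the variance-maximization problem:

    \max_{v \in \mathbb{R}^p} \; v^\top \Sigma v - \rho \,\|v\|_1 \quad \text{subject to} \quad \|v\|_2 \le 1,

where \Sigma is the sample covariance matrix and \rho \ge 0 trades explained variance against the number of nonzero loadings in v.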

Sample Average Approximation for Stochastic Dominance Constrained Programs

In this paper we study optimization problems with second-order stochastic dominance constraints. This class of problems has been receiving increasing attention in the literature as it allows for the modeling of optimization problems where a risk-averse decision maker wants to ensure that the solution produced by the model dominates certain benchmarks. Here we deal with …
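
For readers meeting the constraint class for the first time: a random outcome X dominates a benchmark Y in the second order exactly when

    \mathbb{E}\,[(\eta - X)_+] \le \mathbb{E}\,[(\eta - Y)_+] \quad \text{for all } \eta \in \mathbb{R},

and the models discussed here impose this relation between the decision-dependent outcome of the optimization problem and a given benchmark.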

A Combined Class of Self-Scaling and Modified Quasi-Newton Methods

Techniques for obtaining safely positive definite Hessian approximations with self-scaling and modified quasi-Newton updates are combined to obtain ‘better’ curvature approximations in line search methods for unconstrained optimization. It is shown that this class of methods, like the BFGS method, has global and superlinear convergence for convex functions. Numerical experiments with this class, using the …
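
As background only (these are the standard self-scaling formulas, not the specific combined update proposed in the paper): with step s_k = x_{k+1} - x_k and gradient difference y_k = g_{k+1} - g_k, a self-scaling BFGS update rescales the current Hessian approximation before updating,

    B_{k+1} = \tau_k \Big( B_k - \frac{B_k s_k s_k^\top B_k}{s_k^\top B_k s_k} \Big) + \frac{y_k y_k^\top}{y_k^\top s_k}, \qquad \tau_k = \frac{y_k^\top s_k}{s_k^\top B_k s_k},

while modified updates replace y_k by a vector \tilde{y}_k satisfying \tilde{y}_k^\top s_k > 0, the curvature condition that keeps B_{k+1} safely positive definite along the line search.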

On the global convergence of interior-point nonlinear programming algorithms

Carathéodory’s lemma states that if we have a linear combination of vectors in R^n, we can rewrite this combination using a linearly independent subset. This result has been successfully applied in nonlinear optimization in many contexts. In this work we present a new version of this celebrated theorem, in which we obtain new bounds for …
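
One common statement of the lemma, for reference (the paper proves a refined version with new bounds): if x = \sum_{i=1}^{m} \alpha_i v_i with \alpha_i \ge 0 and v_i \in \mathbb{R}^n, then x can be rewritten as

    x = \sum_{i \in I} \bar{\alpha}_i v_i, \qquad \bar{\alpha}_i \ge 0, \qquad \{v_i\}_{i \in I} \ \text{linearly independent},

so at most n vectors are needed. In nonlinear optimization it is typically applied to the gradient sums appearing in KKT-type conditions to control the number of active terms or the size of the multipliers.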

On sequential optimality conditions for smooth constrained optimization

Sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. Approximate KKT and Approximate Gradient Projection conditions are analyzed in this work. These conditions are not necessarily equivalent. Implications between different conditions and counterexamples will be shown. Algorithmic consequences will be discussed.
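
For concreteness, one standard form of the Approximate KKT condition for minimizing f(x) subject to g_i(x) \le 0, i = 1, \dots, m (equality constraints omitted for brevity) requires sequences x^k \to x^* and multipliers \lambda^k \ge 0 such that

    \Big\| \nabla f(x^k) + \sum_{i=1}^{m} \lambda_i^k \nabla g_i(x^k) \Big\| \to 0 \quad \text{and} \quad \min\{ -g_i(x^k), \, \lambda_i^k \} \to 0 \ \text{ for every } i.

Every local minimizer satisfies this without any constraint qualification, which is why such conditions justify practical stopping criteria; the paper's exact definitions may differ in details.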

Real-Time Optimization as a Generalized Equation

We establish results for the problem of tracking a time-dependent manifold arising in online nonlinear programming by casting it as a generalized equation. We demonstrate that if points along a solution manifold are consistently strongly regular, it is possible to track the manifold approximately by solving a linear complementarity problem (LCP) at each time step. …
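
For reference, the linear complementarity problem LCP(q, M) mentioned above asks for a vector z such that

    z \ge 0, \qquad Mz + q \ge 0, \qquad z^\top (Mz + q) = 0,

so tracking the manifold amounts to solving one such problem per time step in the regime described by the abstract.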

Starting-Point Strategies for an Infeasible Potential Reduction Method

We present two strategies for choosing a “hot” starting-point in the context of an infeasible Potential Reduction (PR) method for convex Quadratic Programming. The basic idea of both strategies is to select a preliminary point and to suitably scale it in order to obtain a starting point such that its nonnegative entries are sufficiently bounded …

An inexact parallel splitting augmented Lagrangian method for large system of linear equations

Parallel iterative methods are powerful tools for solving large systems of linear equations (LQs). Existing parallel computing research has almost entirely concentrated on sparse systems or systems with other particular structure, and is almost entirely based on implementing the classical relaxation methods, such as Gauss-Seidel, SOR, and AOR, efficiently on multiprocessor systems. In this …
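
As a point of reference for the relaxation methods named above (this is the classical sequential scheme, not the parallel splitting augmented Lagrangian method of the paper), one SOR sweep for A x = b can be sketched as follows; the relaxation parameter and the small dense example are illustrative only:

    import numpy as np

    def sor_sweep(A, b, x, omega=1.2):
        """One successive over-relaxation (SOR) sweep for A x = b.
        omega = 1 recovers Gauss-Seidel; the loop is inherently sequential,
        which is why parallel variants reorder or split the unknowns."""
        n = len(b)
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        return x

    # Hypothetical example: a small diagonally dominant system
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 4.0, 1.0],
                  [0.0, 1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])
    x = np.zeros(3)
    for _ in range(50):
        x = sor_sweep(A, b, x)
    print(x, A @ x - b)   # the residual should be close to zero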