Self-concordant Tree and Decomposition Based Interior Point Methods for Stochastic Convex Optimization Problem

We consider barrier problems associated with two-stage and multistage stochastic convex optimization problems. We show that the barrier recourse functions at any stage form a self-concordant family with respect to the barrier parameter. We also show that the complexity value of the first-stage problem increases additively with the number of stages and scenarios. We …
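
For orientation, the standard Nesterov-Nemirovskii self-concordance inequality referred to here is sketched below; the paper works with the parametrized *family* version of this property, which additionally imposes compatibility conditions in the barrier parameter.

```latex
% Standard self-concordance inequality for a convex, three-times
% differentiable function f; shown only as background for the property
% claimed of the barrier recourse functions.
\[
\bigl| \nabla^3 f(x)[h,h,h] \bigr| \;\le\; 2\,\bigl(\nabla^2 f(x)[h,h]\bigr)^{3/2}
\qquad \text{for all } x \in \operatorname{int}(\operatorname{dom} f),\ h \in \mathbb{R}^n .
\]
```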

A Coordinate Gradient Descent Method for Linearly Constrained Smooth Optimization and Support Vector Machines Training

Support vector machine (SVM) training may be posed as a large quadratic program (QP) with bound constraints and a single linear equality constraint. We propose a (block) coordinate gradient descent method for solving this problem and, more generally, linearly constrained smooth optimization. Our method is closely related to decomposition methods currently popular for SVM training. …
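
As a rough illustration of the structure such decomposition methods exploit, the sketch below takes two-coordinate steps on a box-constrained QP with a single equality constraint sum(x) = const (directions of the form e_i - e_j preserve the equality). This is a generic SMO-style update, not the authors' method; the pair-selection rule and the random test problem are assumptions made for the example.

```python
# Minimal two-coordinate descent sketch for a QP with box constraints and one
# equality constraint sum(x) = const, in the spirit of SVM decomposition methods.
import numpy as np

def coordinate_descent_qp(Q, c, l, u, x, max_iter=1000, tol=1e-8):
    """Minimize 0.5 x'Qx + c'x  s.t.  l <= x <= u,  sum(x) fixed at sum(x0)."""
    g = Q @ x + c                          # gradient of the quadratic objective
    for _ in range(max_iter):
        # Maximal-violating-pair selection (as in SMO-type solvers): this rule
        # is an assumption for the sketch, not the paper's working-set rule.
        up = np.where(x < u - 1e-12)[0]    # coordinates that can still increase
        lo = np.where(x > l + 1e-12)[0]    # coordinates that can still decrease
        if len(up) == 0 or len(lo) == 0:
            break
        i = up[np.argmin(g[up])]
        j = lo[np.argmax(g[lo])]
        if g[j] - g[i] <= tol:             # KKT violation small -> stop
            break
        curv = Q[i, i] + Q[j, j] - 2.0 * Q[i, j]
        t = (g[j] - g[i]) / max(curv, 1e-12)     # minimizer along e_i - e_j
        t = min(t, u[i] - x[i], x[j] - l[j])     # keep both coordinates in bounds
        x[i] += t
        x[j] -= t
        g += t * (Q[:, i] - Q[:, j])             # cheap gradient update
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 20
    A = rng.standard_normal((n, n))
    Q = A.T @ A + np.eye(n)                      # convex quadratic
    c = rng.standard_normal(n)
    l, u = np.zeros(n), np.ones(n)
    x0 = np.full(n, 0.5)                         # feasible start
    x = coordinate_descent_qp(Q, c, l, u, x0.copy())
    print("objective:", 0.5 * x @ Q @ x + c @ x, "  sum preserved:", x.sum())
```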

Nonlinear programming without a penalty function or a filter

A new method is introduced for solving equality constrained nonlinear optimization problems. This method does not use a penalty function, nor a barrier or a filter, and yet can be proved to be globally convergent to first-order stationary points. It uses different trust regions to cope with the nonlinearities of the objective function and the constraints, …
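
The toy loop below is meant only to illustrate the idea of keeping separate trust-region radii for feasibility and for the objective, with composite (normal plus tangential) steps. It is not the authors' algorithm; the acceptance rule, radius updates, and the simple linearly constrained test problem are assumptions made for this sketch.

```python
# Toy composite-step trust-region loop with separate radii for the constraints
# and the objective (illustrative only).
import numpy as np

def f(x):  return x[0] ** 2 + 2.0 * x[1] ** 2          # objective
def gf(x): return np.array([2.0 * x[0], 4.0 * x[1]])   # its gradient
def c(x):  return np.array([x[0] + x[1] - 1.0])        # equality constraint c(x) = 0
def J(x):  return np.array([[1.0, 1.0]])               # constraint Jacobian

x = np.array([2.0, 2.0])
rad_c, rad_f = 1.0, 1.0                                 # separate trust-region radii
for k in range(50):
    # Normal step: Gauss-Newton step toward feasibility, clipped to rad_c.
    n_step = -J(x).T @ np.linalg.solve(J(x) @ J(x).T, c(x))
    if np.linalg.norm(n_step) > rad_c:
        n_step *= rad_c / np.linalg.norm(n_step)
    # Tangential step: projected steepest descent on f, clipped to rad_f.
    P = np.eye(2) - J(x).T @ np.linalg.solve(J(x) @ J(x).T, J(x))
    t_step = -P @ gf(x + n_step)
    if np.linalg.norm(t_step) > rad_f:
        t_step *= rad_f / np.linalg.norm(t_step)
    trial = x + n_step + t_step
    feas_old, feas_new = np.linalg.norm(c(x)), np.linalg.norm(c(trial))
    # Accept if feasibility does not worsen and either the objective decreases
    # or feasibility strictly improves; otherwise shrink both radii.
    if feas_new <= feas_old + 1e-12 and (f(trial) <= f(x) + 1e-12 or feas_new < feas_old - 1e-10):
        x = trial
        rad_c, rad_f = min(2.0 * rad_c, 10.0), min(2.0 * rad_f, 10.0)
    else:
        rad_c, rad_f = 0.5 * rad_c, 0.5 * rad_f
print("x =", x, " c(x) =", c(x), " f(x) =", f(x))
```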

A null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties

We present a null-space primal-dual interior-point algorithm for solving nonlinear optimization problems with general inequality and equality constraints. The algorithm approximately solves a sequence of equality constrained barrier subproblems by computing a predictor step and a null space step in every iteration. The $\ell_2$ penalty function is taken as the merit function. Under very mild …
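
For reference, the equality-constrained barrier subproblems mentioned here typically take the following standard slack-variable form (the paper's exact subproblem may differ in detail):

```latex
% Standard log-barrier subproblem with slacks; mu is the barrier parameter,
% driven toward zero over the sequence of subproblems.
\[
\min_{x,\,s}\;\; f(x) \;-\; \mu \sum_{i \in \mathcal{I}} \ln s_i
\qquad \text{s.t.} \qquad c_{\mathcal{E}}(x) = 0, \quad c_{\mathcal{I}}(x) - s = 0, \quad s > 0 .
\]
```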

Approximate Primal Solutions and Rate Analysis in Dual Subgradient Methods

We study primal solutions obtained as a by-product of subgradient methods when solving the Lagrangian dual of a primal convex constrained optimization problem (possibly nonsmooth). The existing literature on the use of subgradient methods for generating primal optimal solutions is limited to methods that produce such solutions only asymptotically (i.e., in the limit as the …
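
The sketch below shows the basic mechanism: run a projected subgradient method on the dual and keep a running average of the primal minimizers generated along the way. The problem instance, constant step size, and box set are assumptions made for illustration; the paper's rate analysis covers a more general setting.

```python
# Dual subgradient method with ergodic averaging of the primal iterates.
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 10
A = rng.uniform(0.0, 1.0, (m, n))
b = 0.5 * A.sum(axis=1)                 # chosen so a strictly feasible point exists
c = -rng.uniform(0.0, 1.0, n)           # minimize c'x  s.t.  A x <= b,  x in [0,1]^n

mu = np.zeros(m)                        # dual multipliers for A x <= b
alpha = 0.05                            # constant step size (assumption)
x_avg = np.zeros(n)
for k in range(1, 2001):
    # Primal minimizer of the Lagrangian c'x + mu'(Ax - b) over the box [0,1]^n.
    x_k = (c + A.T @ mu < 0).astype(float)
    # Projected subgradient ascent step on the dual function.
    mu = np.maximum(0.0, mu + alpha * (A @ x_k - b))
    # Ergodic (running) average of the primal iterates.
    x_avg += (x_k - x_avg) / k

print("max constraint violation of average:", np.max(A @ x_avg - b))
print("objective at average:", c @ x_avg)
```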

Finding a point in the relative interior of a polyhedron

A new initialization or 'Phase I' strategy for feasible interior point methods for linear programming is proposed that computes a point on the primal-dual central path associated with the linear program. Provided there exist primal-dual strictly feasible points — an all-pervasive assumption in interior point method theory that implies the existence of the central path …
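
For context, in the standard-form LP setting the primal-dual central path consists of the solutions of the following perturbed optimality conditions, one point for each value of the parameter \(\mu > 0\):

```latex
\[
Ax = b, \qquad A^{\top} y + s = c, \qquad x_i s_i = \mu \ \ (i = 1,\dots,n), \qquad x > 0,\ s > 0 .
\]
```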

Global convergence of slanting filter methods for nonlinear programming

In this paper we present a general algorithm for nonlinear programming which uses a slanting filter criterion for accepting the new iterates. Independently of how these iterates are computed, we prove that all accumulation points of the sequence generated by the algorithm are feasible. Computing the new iterates by the inexact restoration method, we prove …
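
The acceptance test below sketches one common form of a slanting filter criterion, where the sufficient-decrease margin on the objective is proportional to the *new* infeasibility (hence the slanted envelope in the (h, f) plane). The margin constant and the exact envelope are assumptions for the illustration and may differ from the paper's criterion.

```python
# Slanting-filter acceptance check (illustrative variant).
def acceptable(f_new, h_new, filter_entries, alpha=1e-4):
    """Return True if the trial point (f_new, h_new) is acceptable to every
    filter entry (f_j, h_j), where h measures constraint violation."""
    for f_j, h_j in filter_entries:
        # Slanting criterion: objective margin scales with the new infeasibility.
        if not (f_new <= f_j - alpha * h_new or h_new <= (1.0 - alpha) * h_j):
            return False
    return True

# Example: a filter with two entries and two candidate iterates.
F = [(10.0, 2.0), (8.0, 3.0)]
print(acceptable(7.5, 1.0, F))   # True: sufficient objective decrease w.r.t. both entries
print(acceptable(11.0, 2.5, F))  # False: dominated by the entry (10.0, 2.0)
```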

Spectral Stochastic Finite-Element Methods for Parametric Constrained Optimization Problems

We present a method to approximate the solution mapping of parametric constrained optimization problems. The approximation, which is of the spectral stochastic finite element type, is represented as a linear combination of orthogonal polynomials. Its coefficients are determined by solving an appropriate finite-dimensional constrained optimization problem. We show that, under certain conditions, the latter problem …
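
The sketch below conveys the flavor of such a spectral surrogate: the solution mapping of a one-parameter problem is approximated by a Legendre expansion. Unlike the paper, which determines the coefficients by solving a coupled constrained optimization problem, this sketch simply fits the polynomials to solutions computed at sample parameter values; the toy test problem is an assumption for the demo.

```python
# Spectral (orthogonal-polynomial) surrogate of a parametric solution mapping.
import numpy as np
from numpy.polynomial import legendre

# Parametric problem:  x*(t) = argmin_{x >= 0} (x - t)^2,  so  x*(t) = max(t, 0).
def solve_parametric(t):
    return max(t, 0.0)

ts = np.linspace(-1.0, 1.0, 41)                       # sample parameter values
xs = np.array([solve_parametric(t) for t in ts])      # corresponding solutions
coeffs = legendre.legfit(ts, xs, deg=8)               # spectral coefficients

# Evaluate the surrogate against the true solution mapping.
t_test = np.linspace(-1.0, 1.0, 201)
err = np.max(np.abs(legendre.legval(t_test, coeffs) - np.maximum(t_test, 0.0)))
print("max error of degree-8 Legendre surrogate:", err)
```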

On Second-Order Optimality Conditions for Nonlinear Programming

Necessary optimality conditions for nonlinear programming are discussed in the present research. A new second-order condition is given, which depends on a weak constant-rank constraint requirement. We show that practical and publicly available algorithms (www.ime.usp.br/~egbirgin/tango) of Augmented Lagrangian type converge, after slight modifications, to stationary points defined by the new condition. …
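
For orientation, the classical second-order necessary condition is recalled below, stated loosely for an equality-constrained problem; the paper's contribution is a condition of this type that holds under a weaker, constant-rank based constraint requirement.

```latex
% Classical second-order necessary condition at a local minimizer x* with
% Lagrange multipliers lambda*.
\[
d^{\top} \nabla^2_{xx} L(x^{\ast}, \lambda^{\ast})\, d \;\ge\; 0
\qquad \text{for all } d \text{ in the critical cone at } x^{\ast},
\qquad \text{where } L(x,\lambda) = f(x) + \textstyle\sum_i \lambda_i\, c_i(x).
\]
```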

An Inexact SQP Method for Equality Constrained Optimization

We present an algorithm for large-scale equality constrained optimization. The method is based on a characterization of inexact sequential quadratic programming (SQP) steps that can ensure global convergence. Inexact SQP methods are needed for large-scale applications for which the iteration matrix cannot be explicitly formed or factored and the arising linear systems must be solved …
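
The sketch below illustrates the basic inexact-SQP mechanism: at every iteration the KKT system is solved only approximately by a truncated iterative method, yet the outer loop still drives the KKT residual down. The equality-constrained quadratic test problem, the use of CG on the squared system, and the truncation level are assumptions made for this illustration, not the paper's step characterization.

```python
# Inexact SQP sketch: approximate KKT solves via a few CG iterations.
import numpy as np

def truncated_cg(M, rhs, iters):
    """A few conjugate-gradient iterations for the SPD system M z = rhs."""
    z = np.zeros_like(rhs)
    r = rhs.copy()
    p = r.copy()
    rr = r @ r
    for _ in range(iters):
        if rr == 0.0:
            break
        Mp = M @ p
        alpha = rr / (p @ Mp)
        z += alpha * p
        r -= alpha * Mp
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return z

rng = np.random.default_rng(2)
n, m = 8, 3
B = rng.standard_normal((n, n)); Q = B.T @ B + np.eye(n)    # objective Hessian
c = rng.standard_normal(n)
A = rng.standard_normal((m, n)); b = rng.standard_normal(m) # constraints A x = b

x, lam = np.zeros(n), np.zeros(m)
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])              # KKT matrix
for it in range(25):
    rhs = -np.concatenate([Q @ x + c + A.T @ lam, A @ x - b])  # negative KKT residual
    print(f"iter {it:2d}  KKT residual = {np.linalg.norm(rhs):.2e}")
    if np.linalg.norm(rhs) < 1e-8:
        break
    # Inexact step: a few CG iterations on the SPD squared system K'K z = K'rhs.
    z = truncated_cg(K.T @ K, K.T @ rhs, iters=5)
    x += z[:n]
    lam += z[n:]
```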