Non-anticipative risk-averse analysis with effective scenarios applied to long-term hydrothermal scheduling

In this paper, we deal with long-term operation planning problems of hydrothermal power systems by considering scenario analysis and risk aversion. This is a stochastic sequential decision problem whose solution must be non-anticipative, in the sense that a decision at a given stage cannot use perfect knowledge of the future. We propose strategies to reduce …
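
As background for the non-anticipativity requirement, a standard scenario-based way to express it (a textbook formulation, not necessarily the one adopted in the paper) constrains decisions to coincide across scenarios that are indistinguishable up to the current stage:

\[
x_t^{s} = x_t^{s'} \quad \text{whenever } (\xi_1^{s},\dots,\xi_t^{s}) = (\xi_1^{s'},\dots,\xi_t^{s'}),
\]

so the stage-$t$ decision $x_t^{s}$ may depend only on the data $\xi_1,\dots,\xi_t$ revealed up to that stage.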

On the steepest descent algorithm for quadratic functions

The steepest descent algorithm with exact line searches (Cauchy algorithm) is inefficient, generating oscillating step lengths and a sequence of points converging to the span of the eigenvectors associated with the extreme eigenvalues. The performance becomes very good if a short step is taken once every (say) 10 iterations. We present a new method for …
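
As a minimal sketch of the behavior described above (not the paper's method), the following runs exact-line-search steepest descent on a convex quadratic $f(x) = \frac{1}{2}x^\top Ax - b^\top x$, with an optional short step every 10th iteration; the function names and the shortening factor are illustrative choices.

```python
import numpy as np

def cauchy_steepest_descent(A, b, x0, iters=200, shorten_every=10, factor=0.1):
    """Steepest descent for f(x) = 0.5 x'Ax - b'x with A symmetric positive
    definite. The exact (Cauchy) step is optionally shortened periodically."""
    x = x0.copy()
    for k in range(1, iters + 1):
        g = A @ x - b                      # gradient of the quadratic
        if g @ g == 0.0:                   # already at the minimizer
            break
        alpha = (g @ g) / (g @ (A @ g))    # exact line-search (Cauchy) step
        if shorten_every and k % shorten_every == 0:
            alpha *= factor                # occasional short step
        x = x - alpha * g
    return x
```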

On the worst case performance of the steepest descent algorithm for quadratic functions

The existing choices for the step lengths used by the classical steepest descent algorithm for minimizing a convex quadratic function require in the worst case $O(C\log(1/\varepsilon))$ iterations to achieve a precision $\varepsilon$, where $C$ is the Hessian condition number. We show how to construct a sequence of step …
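
The $O(C\log(1/\varepsilon))$ figure follows from the classical worst-case bound for exact-line-search steepest descent on a convex quadratic (a textbook estimate, not a contribution of this paper):

\[
f(x_k) - f^{*} \;\le\; \left(\frac{C-1}{C+1}\right)^{2k}\bigl(f(x_0) - f^{*}\bigr),
\]

and since $\log\frac{C+1}{C-1} \approx \frac{2}{C}$ for large $C$, driving the gap below $\varepsilon$ takes on the order of $C\log(1/\varepsilon)$ iterations.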

On the iterate convergence of descent methods for convex optimization

We study the iterate convergence of strong descent algorithms applied to convex functions. We assume that the function satisfies a very simple growth condition around its minimizers, and then show that the trajectory described by the iterates generated by any such method has finite length, which proves that the sequence of iterates converges. Citation: Federal …
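
To give the flavor of such an assumption, a growth condition of this type could read as follows (our illustrative example; the paper's exact condition may differ): for some $c > 0$ and $p \ge 1$, and all $x$ in a neighborhood of the solution set $X^{*}$,

\[
f(x) - f^{*} \;\ge\; c \,\operatorname{dist}(x, X^{*})^{p}.
\]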

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

We consider the minimization of a convex function on a compact polyhedron defined by linear equality constraints and nonnegative variables. We define the Levenberg-Marquardt (L-M) and central trajectories starting at the analytic center and using the same parameter, and show that they satisfy a primal-dual relationship, being close to each other for large values of …
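
For orientation, on the feasible set $\{x : Ax = b,\ x \ge 0\}$ the two families of points are commonly parameterized as below (standard textbook definitions, which may differ in detail from the paper's setup), with $x^{0}$ the analytic center and $\mu > 0$ the shared parameter:

\[
x_{\mathrm{LM}}(\mu) = \arg\min\Bigl\{ f(x) + \tfrac{\mu}{2}\,\|x - x^{0}\|^{2} \;:\; Ax = b,\ x \ge 0 \Bigr\},
\qquad
x_{\mathrm{C}}(\mu) = \arg\min\Bigl\{ f(x) - \mu \sum_{i} \log x_{i} \;:\; Ax = b \Bigr\}.
\]

Both tend to $x^{0}$ as $\mu \to \infty$, consistent with the trajectories "starting at the analytic center."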

An optimal algorithm for constrained differentiable convex optimization

We describe three algorithms for solving differentiable convex optimization problems constrained to simple sets in $\mathbb{R}^n$, i.e., sets onto which it is easy to project an arbitrary point. The first two algorithms are optimal in the sense that they achieve an absolute precision of $\varepsilon$ in relation to the optimal value …
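
A box is the canonical "simple set": projecting onto it is a componentwise clip. The sketch below shows a generic accelerated projected-gradient iteration in that setting (an illustration of the problem class, not the paper's three algorithms); it assumes a known Lipschitz constant L for the gradient, and all names are ours.

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection onto the box [lo, hi]: an example of an easy projection."""
    return np.clip(x, lo, hi)

def accelerated_projected_gradient(grad, x0, lo, hi, L, iters=100):
    """FISTA-style accelerated projected gradient with fixed step 1/L."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = project_box(y - grad(y) / L, lo, hi)      # projected step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # momentum schedule
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # extrapolation
        x, t = x_next, t_next
    return x
```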

Optimal steepest descent algorithms for unconstrained convex problems: fine tuning Nesterov’s method

We modify the first-order algorithm for convex programming proposed by Nesterov. The resulting algorithm keeps the optimal complexity obtained by Nesterov without requiring a known Lipschitz constant for the gradient, and performs better on practically all examples in a set of test problems. Citation: Technical Report, Federal University of Santa Catarina, 2008. …
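
A common device for dispensing with a known Lipschitz constant is backtracking on a local estimate of $L$; whether this matches the paper's modification we cannot say, so the sketch below is only indicative.

```python
def estimate_lipschitz(f, grad, y, L=1.0, eta=2.0):
    """Backtracking estimate of a local Lipschitz constant for the gradient:
    increase L until the standard sufficient-decrease inequality
    f(y - g/L) <= f(y) - ||g||^2 / (2L) holds."""
    g = grad(y)
    while f(y - g / L) > f(y) - (g @ g) / (2.0 * L):
        L *= eta
    return L
```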

On the limiting properties of the affine-scaling directions

We study the limiting properties of the affine-scaling directions for linear programming problems. The worst-case angle between the affine-scaling directions and the objective function vector provides an interesting measure that has been very helpful in convergence analyses and in understanding the behaviour of various interior-point algorithms. We establish new relations between this measure and some …
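
For reference, for a linear program $\min\{c^{\top} x : Ax = b,\ x > 0\}$ the affine-scaling direction at an interior point $x$, with $X = \operatorname{diag}(x)$, takes the standard form

\[
d(x) = -X\,P_{AX}\,Xc, \qquad P_{AX} = I - XA^{\top}\,(AX^{2}A^{\top})^{-1}AX,
\]

i.e., the steepest descent direction of the scaled problem mapped back to the original variables; the measure discussed above is the worst-case angle between $d(x)$ and $-c$.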

A primal affine-scaling algorithm for constrained convex programs

The affine-scaling algorithm was initially developed for linear programming problems. Its extension to problems with a nonlinear objective performs, at each iteration, a scaling followed by a line search along the steepest descent direction. In this paper we prove that any accumulation point generated by this algorithm when applied to a convex function is an …
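
A minimal sketch of one such iteration for $\min f(x)$ over $x > 0$ (our illustrative reading of the scheme: the Armijo backtracking rule and all names are assumptions, not necessarily the paper's line search):

```python
import numpy as np

def affine_scaling_step(f, grad, x, beta=0.5, sigma=1e-4):
    """One primal affine-scaling iteration: scale by X = diag(x), take the
    steepest descent direction in the scaled variables (d = -X^2 grad f),
    and backtrack to stay interior while achieving Armijo decrease."""
    g = grad(x)
    d = -(x * x) * g                 # -X^2 g, elementwise since X is diagonal
    t = 1.0
    while np.any(x + t * d <= 0.0) or f(x + t * d) > f(x) + sigma * t * (g @ d):
        t *= beta                    # backtrack: keep x + t*d > 0 and Armijo
    return x + t * d
```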

A globally convergent filter method for nonlinear programming

In this paper we present a filter algorithm for nonlinear programming and prove its global convergence to stationary points. Each iteration is composed of a restoration phase, which reduces a measure of infeasibility, and an optimality phase, which reduces the objective function in a tangential approximation of the feasible set. These two phases are totally …
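
The defining ingredient of a filter method is its acceptance test on pairs (infeasibility $h$, objective $f$): a trial point must improve on every stored pair in at least one of the two measures. A minimal sketch with the usual small margin $\gamma$ (an illustrative formulation; the paper's exact envelope may differ):

```python
def filter_accepts(h, f, filter_entries, gamma=1e-5):
    """Accept the trial pair (h, f) if, against every filter entry (h_j, f_j),
    it improves either the infeasibility or the objective by a small margin."""
    return all(h <= (1.0 - gamma) * h_j or f <= f_j - gamma * h_j
               for (h_j, f_j) in filter_entries)
```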