Integral Global Optimality Conditions and an Algorithm for Multiobjective Problems

In this work, we propose integral global optimality conditions for multiobjective problems that are not necessarily differentiable. The integral characterization, already known for single-objective problems, is extended to multiobjective problems by weighted sum and weighted Chebyshev scalarizations. Using the latter scalarization, we propose an algorithm for obtaining an approximation of the weak Pareto front whose effectiveness … Read more
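
For reference, and in generic notation not taken from the abstract, the two scalarizations of a multiobjective problem $\min_x (f_1(x), \dots, f_m(x))$ typically take the form of the weighted sum problem $\min_x \sum_{i=1}^m w_i f_i(x)$ and the weighted Chebyshev problem $\min_x \max_{1 \le i \le m} w_i |f_i(x) - z_i|$, where $w_i \ge 0$ are weights and $z$ is a reference (ideal) point; minimizers of the latter are weakly Pareto optimal, which is what makes it suitable for approximating the weak Pareto front.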

Non-anticipative risk-averse analysis with effective scenarios applied to long-term hydrothermal scheduling

In this paper, we deal with long-term operation planning problems of hydrothermal power systems by considering scenario analysis and risk aversion. This is a stochastic sequential decision problem whose solution must be non-anticipative, in the sense that a decision at a given stage cannot use perfect knowledge of the future. We propose strategies to reduce … Read more
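
As a hedged illustration in generic notation (not the paper's): non-anticipativity requires that if two scenarios $s$ and $s'$ share the same history of uncertain data up to stage $t$, the corresponding decisions coincide, $x_t^s = x_t^{s'}$; equivalently, a stage-$t$ decision may depend only on information revealed up to stage $t$.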

Global convergence of a derivative-free inexact restoration filter algorithm for nonlinear programming

In this work we present an algorithm for solving constrained optimization problems that does not make explicit use of the derivatives of the objective function. The algorithm combines an inexact restoration framework with filter techniques, where the forbidden regions can be given by the flat or the slanting filter rule. Each iteration is decomposed into two independent phases: … Read more
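
In broad terms, and only as a generic sketch of the inexact restoration framework (not the authors' exact formulation): the feasibility (restoration) phase reduces an infeasibility measure such as $h(x) = \|c(x)\|$ without regard to the objective, the optimality phase approximately minimizes the objective from the restored point while controlling the increase in infeasibility, and the filter then decides whether the resulting trial point is acceptable.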

A globally convergent trust-region algorithm for unconstrained derivative-free optimization

In this work we give an explicit description of a derivative-free trust-region algorithm for unconstrained optimization based on the algorithm proposed by Powell (Computational Optimization and Applications 53: 527–555, 2012). The objective function is approximated by quadratic models obtained by polynomial interpolation. The number of points in the interpolation set is fixed. In each iteration only one interpolation point … Read more
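
As a generic illustration (notation assumed, not taken from the paper): the quadratic model at iteration $k$ has the form $m_k(x) = c_k + g_k^{T}(x - x_k) + \tfrac{1}{2}(x - x_k)^{T} H_k (x - x_k)$ and is required to interpolate the objective on the current sample set, $m_k(y^j) = f(y^j)$ for every point $y^j$ in the set; when the set has fewer points than are needed to determine a full quadratic, Powell-type methods fix the remaining degrees of freedom, for instance by minimizing the change in the model Hessian between iterations.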

A trust-region derivative-free algorithm for constrained optimization

We propose a trust-region algorithm for constrained optimization problems in which the derivatives of the objective function are not available. In each iteration, the objective function is approximated by a model obtained by quadratic interpolation, which is then minimized within the intersection of the feasible set with the trust region. Since the constraints are handled … Read more
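
A hedged sketch of the subproblem solved at each iteration (generic notation): given the current point $x_k$, the trust-region radius $\Delta_k$ and the interpolation model $m_k$, the trial step comes from $\min_x m_k(x)$ subject to $x \in \Omega$ and $\|x - x_k\| \le \Delta_k$, where $\Omega$ denotes the feasible set.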

Algebraic rules for quadratic regularization of Newton’s method

In this work we propose a class of quasi-Newton methods to minimize a twice differentiable function with Lipschitz continuous Hessian. These methods are based on the quadratic regularization of Newton’s method, with explicit algebraic rules for computing the regularizing parameter. The convergence properties of this class of methods are analysed. We show that if the … Read more
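
Only as an illustrative sketch of quadratic regularization of Newton’s method, with one possible algebraic rule for the regularizing parameter (the rule and the constant L below are assumptions for illustration, not the rules proposed in the paper):

import numpy as np

def regularized_newton(grad, hess, x0, L, tol=1e-8, max_iter=100):
    # Quadratically regularized Newton step: solve (H_k + lam_k I) d = -g_k.
    # Illustrative algebraic rule: lam_k = sqrt(L * ||g_k||), with L playing the
    # role of a Lipschitz-type constant for the Hessian (assumed known here).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        lam = np.sqrt(L * np.linalg.norm(g))
        d = np.linalg.solve(hess(x) + lam * np.eye(x.size), -g)
        x = x + d
    return x

# Toy usage: minimize f(x) = x_1^4 + x_1^2 + 2 x_2^2.
grad = lambda x: np.array([4 * x[0] ** 3 + 2 * x[0], 4 * x[1]])
hess = lambda x: np.array([[12 * x[0] ** 2 + 2.0, 0.0], [0.0, 4.0]])
print(regularized_newton(grad, hess, [2.0, 1.0], L=50.0))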

Global convergence of trust-region algorithms for constrained minimization without derivatives

In this work we propose a trust-region algorithm for the problem of minimizing a function within a closed convex domain. We assume that the objective function is differentiable but that no derivatives are available. The algorithm has a very simple structure and allows a great deal of freedom in the choice of the models. Under reasonable … Read more
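
A generic sketch of the acceptance test used in trust-region methods of this kind (standard notation, assumed here): the ratio of actual to predicted reduction, $\rho_k = \frac{f(x_k) - f(x_k + d_k)}{m_k(x_k) - m_k(x_k + d_k)}$, decides whether the trial step $d_k$ is accepted and whether the trust-region radius is enlarged (large $\rho_k$), kept, or reduced (small or negative $\rho_k$).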

An optimal algorithm for constrained differentiable convex optimization

We describe three algorithms for solving differentiable convex optimization problems constrained to simple sets in $\mathbb{R}^n$, i.e., sets on which it is easy to project an arbitrary point. The first two algorithms are optimal in the sense that they achieve an absolute precision of $\varepsilon$ in relation to the optimal value … Read more
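
As a hedged illustration of what a simple set means here (the example is assumed, not taken from the abstract): for the box $\Omega = \{x : l \le x \le u\}$ the projection is computed componentwise, $P_\Omega(x)_i = \min\{\max\{x_i, l_i\}, u_i\}$, so a projected-gradient-type step $x_{k+1} = P_\Omega\left(y_k - \tfrac{1}{L}\nabla f(y_k)\right)$ costs essentially one gradient evaluation plus the projection.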

Optimal steepest descent algorithms for unconstrained convex problems: fine tuning Nesterov’s method

We modify the first-order algorithm for convex programming proposed by Nesterov. The resulting algorithm keeps the optimal complexity obtained by Nesterov without requiring a known Lipschitz constant for the gradient, and performs better on practically all examples in a set of test problems. Citation: Technical Report, Federal University of Santa Catarina, 2008.
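
The following is only a generic sketch of the underlying idea (Nesterov-type acceleration combined with backtracking, so that no Lipschitz constant has to be known in advance); it is not the authors' algorithm, and the parameter choices are assumptions:

import numpy as np

def accelerated_gradient(f, grad, x0, L0=1.0, tol=1e-8, max_iter=500):
    # Nesterov-type accelerated gradient method with a backtracking
    # estimate of the Lipschitz constant of the gradient.
    x = np.asarray(x0, dtype=float)
    y, L, t = x.copy(), L0, 1.0
    for _ in range(max_iter):
        g = grad(y)
        if np.linalg.norm(g) < tol:
            break
        # Increase L until the standard sufficient-decrease inequality holds at y.
        while f(y - g / L) > f(y) - g @ g / (2.0 * L):
            L *= 2.0
        x_new = y - g / L
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Toy usage on a convex quadratic.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
print(accelerated_gradient(f, grad, np.array([5.0, -3.0])))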

Global convergence of slanting filter methods for nonlinear programming

In this paper we present a general algorithm for nonlinear programming which uses a slanting filter criterion for accepting the new iterates. Independently of how these iterates are computed, we prove that all accumulation points of the sequence generated by the algorithm are feasible. Computing the new iterates by the inexact restoration method, we prove … Read more
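
For reference, a common form of the slanting filter acceptance rule (generic notation, assumed here rather than quoted from the paper): with $h$ an infeasibility measure and $f$ the objective, a trial point $x$ is accepted when, for every pair $(f_j, h_j)$ stored in the filter, $f(x) \le f_j - \alpha h(x)$ or $h(x) \le (1 - \alpha) h_j$ holds, for a fixed $\alpha \in (0, 1)$; the slanting envelope ties the required decrease in $f$ to the infeasibility of the new point, $h(x)$, rather than to $h_j$.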