Sensitivity of trust-region algorithms to their parameters

In this paper, we examine the sensitivity of trust-region algorithms to the parameters related to the step acceptance and update of the trust region. We show, in the context of unconstrained programming, that the numerical efficiency of these algorithms can easily be improved by choosing appropriate parameters. Recommended ranges of values for these parameters are …
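
As context for the parameters in question, here is a minimal sketch of a standard step-acceptance and radius-update rule; the thresholds and scaling factors (eta1, eta2, gamma1, gamma2) are common textbook defaults chosen for illustration, not the ranges recommended in the paper.

    def trust_region_update(rho, radius, eta1=0.01, eta2=0.9, gamma1=0.5, gamma2=2.0):
        """One acceptance/radius decision in a basic trust-region loop.

        rho is the ratio of actual to predicted reduction of the model.
        Parameter values are illustrative defaults, not the paper's recommendations.
        """
        accept = rho >= eta1           # accept the trial step if model and function agree well enough
        if rho < eta1:                 # poor agreement: shrink the trust region
            radius = gamma1 * radius
        elif rho >= eta2:              # very good agreement: allow the region to grow
            radius = gamma2 * radius
        return accept, radius

How strongly the overall performance depends on these four constants is exactly the kind of question the paper examines.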

Leader-Follower Equilibria for Electric Power and NO_x Allowances Markets

This paper investigates the ability of the largest producer in an electricity market to manipulate both the electricity and emission allowances markets to its advantage. To analyze this situation, a Stackelberg game is constructed in which the largest firm plays the role of the leader, while the medium-sized firms are treated as Cournot followers with …
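
For readers unfamiliar with the setup, the sketch below works out a textbook single-market Stackelberg leader facing symmetric Cournot followers under linear inverse demand p = a - bQ and constant marginal costs; it is only an illustration of the game structure and ignores the transmission network and the NO_x allowances market treated in the paper.

    def stackelberg_cournot(a, b, c_leader, c_follower, n_followers):
        """Closed-form quantities for a leader committing first and n symmetric
        Cournot followers reacting, with p = a - b*Q (illustrative textbook model;
        assumes an interior solution with all quantities positive)."""
        n, c = n_followers, c_follower
        # Leader quantity from backward induction over the followers' best responses.
        q_leader = (a + n * c - (n + 1) * c_leader) / (2.0 * b)
        # Each follower's Cournot best response to the leader's committed quantity.
        q_follower = (a - c - b * q_leader) / (b * (n + 1))
        price = a - b * (q_leader + n * q_follower)
        return q_leader, q_follower, price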

Interior point methods for large-scale linear programming

We discuss interior point methods for large-scale linear programming, with an emphasis on methods that are useful for problems arising in telecommunications. We give the basic framework of a primal-dual interior point method, and consider the numerical issues involved in calculating the search direction in each iteration, including the use of factorization methods and/or preconditioned …
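
As a reminder of the framework referred to, the search direction in a primal-dual method for $\min\{c^Tx : Ax=b,\ x\geq 0\}$ is typically obtained from a Newton system of the following form (a standard sketch; the article's notation and residuals may differ):

\[
\begin{pmatrix} A & 0 & 0 \\ 0 & A^T & I \\ S & 0 & X \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \\ \Delta s \end{pmatrix}
=
\begin{pmatrix} b - Ax \\ c - A^Ty - s \\ \sigma\mu e - XSe \end{pmatrix},
\]

where $X=\mathrm{diag}(x)$, $S=\mathrm{diag}(s)$, $\mu = x^Ts/n$ and $\sigma\in[0,1]$. Eliminating $\Delta x$ and $\Delta s$ yields the normal equations $AD^2A^T\Delta y = r$ with $D^2 = XS^{-1}$, which is where sparse factorizations or preconditioned iterative methods enter.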

Efficiency and Fairness of System-Optimal Routing with User Constraints

We study the route-guidance system proposed by Jahn, Möhring, Schulz and Stier-Moses (2004) from a theoretical perspective. This approach computes a traffic pattern that minimizes the total travel time subject to user constraints, which ensure that routes suggested to users are not much longer than shortest paths. We show that when distances are measured with …
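
In rough terms, the traffic pattern referred to solves a constrained system-optimum problem of the following shape (a hedged paraphrase of the idea, not the authors' exact formulation):

\[
\min_{f \ge 0} \;\sum_a x_a\,\ell_a(x_a)
\quad\text{s.t.}\quad
x_a = \sum_{P \ni a} f_P, \qquad
\sum_{P \in \bar{\mathcal{P}}_k} f_P = d_k \;\;\text{for every commodity } k,
\]

where $\ell_a$ is the latency of arc $a$, $d_k$ the demand of commodity $k$, and $\bar{\mathcal{P}}_k$ contains only those paths whose length (measured in the chosen "normal" distance) is at most $\varphi$ times the length of a shortest path for commodity $k$; the parameter $\varphi \ge 1$ encodes the user constraints.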

Recursive Trust-Region Methods for Multilevel Nonlinear Optimization (Part I): Global Convergence and Complexity

A class of trust-region methods is presented for solving unconstrained nonlinear and possibly nonconvex discretized optimization problems, such as those arising in systems governed by partial differential equations. The algorithms in this class make use of the discretization level as a means of speeding up the computation of the step. This use is recursive, leading to …
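
To give an idea of how a coarser level can be exploited, one common construction (in the spirit of MG/Opt-type corrections; the paper's exact model may differ) builds, around the restricted iterate $R_i x_i$, the lower-level model

\[
h_{i-1}(x_{i-1}) \;=\; f_{i-1}(x_{i-1}) \;+\; \bigl(R_i \nabla f_i(x_i) - \nabla f_{i-1}(R_i x_i)\bigr)^T \bigl(x_{i-1} - R_i x_i\bigr),
\]

whose gradient at $R_i x_i$ matches the restricted fine-level gradient. A trust-region step is then computed on $h_{i-1}$, possibly by recursing to an even coarser level, and prolongated back to level $i$.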

Convergence Analysis of the DIRECT Algorithm

The DIRECT algorithm is a deterministic sampling method for bound constrained Lipschitz continuous optimization. We prove a subsequential convergence result for the DIRECT algorithm that quantifies some of the convergence observations in the literature. Our results apply to several variations on the original method, including one that will handle general constraints. We use techniques from …
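
As a concrete reminder of what "deterministic sampling" means here, the sketch below shows one ingredient of DIRECT-type methods: a selected hyperrectangle (its center and side lengths on the unit cube) is trisected along a longest side and the objective is sampled at the two new centers. The rule for selecting "potentially optimal" rectangles and the handling of general constraints are omitted.

    import numpy as np

    def trisect(center, sides, f, f_center):
        """Split a box along one longest side into three children, sampling f at
        the two new centers (one piece of a DIRECT-type method; illustrative only)."""
        center, sides = np.asarray(center, float), np.asarray(sides, float)
        j = int(np.argmax(sides))                    # a longest coordinate direction
        shift = np.zeros(len(center)); shift[j] = sides[j] / 3.0
        new_sides = sides.copy(); new_sides[j] /= 3.0
        left, right = center - shift, center + shift
        return [(left,          new_sides.copy(), f(left)),
                (center.copy(), new_sides.copy(), f_center),   # middle child keeps the parent's sample
                (right,         new_sides.copy(), f(right))]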

A New Self-concordant Barrier for the Hypercube

In this paper we introduce a new barrier function $\sum\limits_{i=1}^n(2x_i-1)[\ln{x_i}-\ln(1-x_i)]$ to solve the following optimization problem: $\min\,\, f(x)$ subject to: $Ax=b;\;\;0\leq x\leq e$. We show that this function is a $(3/2)n$-self-concordant barrier on the hypercube $[0,1]^n$. We prove that the central path is well defined and that under an additional assumption on the objective function, …
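
For concreteness, the proposed barrier and its gradient are straightforward to evaluate; the sketch below is only a numerical illustration (the gradient formula follows by direct differentiation and can be checked against finite differences).

    import numpy as np

    def hypercube_barrier(x):
        """Value and gradient of F(x) = sum_i (2 x_i - 1)(ln x_i - ln(1 - x_i)),
        defined for 0 < x_i < 1 (numerical illustration of the paper's barrier)."""
        x = np.asarray(x, dtype=float)
        t = np.log(x) - np.log1p(-x)                 # ln(x_i) - ln(1 - x_i)
        value = float(np.dot(2.0 * x - 1.0, t))
        grad = 2.0 * t + (2.0 * x - 1.0) * (1.0 / x + 1.0 / (1.0 - x))
        return value, grad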

Interior Point Trajectories and a Homogeneous Model for Nonlinear Complementarity Problems over Symmetric Cones

We study the continuous trajectories for solving monotone nonlinear mixed complementarity problems over symmetric cones. While the analysis in Faybusovich (1997) depends on the optimization theory of convex log-barrier functions, our approach is based on the paper of Monteiro and Pang (1998), where a vast set of conclusions concerning continuous trajectories is shown for monotone …

Second-order Cone Programming Methods for Total Variation-based Image Restoration

In this paper we present optimization algorithms for image restoration based on the total variation (TV) minimization framework of L. Rudin, S. Osher and E. Fatemi (ROF). Our approach formulates TV minimization as a second-order cone program which is then solved by interior-point algorithms that are efficient both in practice (using nested dissection and domain …
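
To indicate the flavor of the reformulation, a discrete ROF-type problem can be written with one second-order cone constraint per pixel, along the following lines (a generic sketch; the paper's exact formulation, scaling and boundary handling may differ):

\[
\min_{u,\,t} \;\sum_{i,j} t_{ij}
\quad\text{s.t.}\quad
\bigl\|\bigl((D_x u)_{ij},\,(D_y u)_{ij}\bigr)\bigr\|_2 \le t_{ij} \;\;\text{for all } i,j,
\qquad
\|u - f\|_2 \le \sigma,
\]

where $f$ is the observed image, $D_x,D_y$ are discrete difference operators, and $\sigma$ bounds the data misfit; both constraint families are second-order cone constraints, so the problem is an SOCP.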

Sensitivity analysis in linear optimization: Invariant support set intervals

Sensitivity analysis is one of the most interesting and challenging areas in optimization. Many attempts have been made to investigate a problem’s behavior when the input data change. Usually, variation occurs in the right-hand side of the constraints and/or the objective function coefficients. Degeneracy of optimal solutions causes considerable difficulties in sensitivity analysis. In this …
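
For context only, the classical textbook computation fixes an optimal basis B and asks for which ε the perturbed right-hand side b + εΔb keeps the basic solution feasible; the sketch below implements that ranging step. The invariant support set intervals studied in the paper are aimed precisely at the degenerate situations where this basis-based answer can be misleading.

    import numpy as np

    def rhs_ranging(B, b, delta_b):
        """Range of epsilon for which B^{-1}(b + eps*delta_b) stays nonnegative
        (classical basis-based RHS ranging, shown only as background)."""
        xB = np.linalg.solve(B, b)           # current basic variable values
        d = np.linalg.solve(B, delta_b)      # rate of change of the basic values
        lower = max((-xB[i] / d[i] for i in range(len(d)) if d[i] > 0), default=-np.inf)
        upper = min((-xB[i] / d[i] for i in range(len(d)) if d[i] < 0), default=np.inf)
        return lower, upper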