Regularized monotonic regression

Monotonic (isotonic) regression (MR) is a powerful tool for solving a wide range of important applied problems. One of its features, which limits its use in some areas, is that it produces a piecewise constant fitted response. To smooth the fitted response, we introduce a regularization term in the MR formulated …
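
A plausible form of such a regularized MR problem is the following sketch; the abstract truncates before the exact formulation, so the quadratic penalty on successive differences below is an assumption, not necessarily the paper's regularizer:

    \min_{x \in \mathbb{R}^n} \; \sum_{i=1}^{n} w_i (x_i - a_i)^2 + \mu \sum_{i=2}^{n} (x_i - x_{i-1})^2
    \quad \text{s.t.} \quad x_1 \le x_2 \le \cdots \le x_n,

where a is the vector of observed responses, w are observation weights, and \mu > 0 trades off fidelity against smoothness of the monotone fit.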

On geometrical properties of preconditioners in IPMs for classes of block-angular problems

One of the most efficient interior-point methods for some classes of block-angular structured problems solves the normal equations by a combination of Cholesky factorizations and the preconditioned conjugate gradient method for, respectively, the block and linking constraints. In this work we show that the choice of a good preconditioner depends on geometrical properties of the constraint structure. …
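
For reference, the preconditioned conjugate gradient iteration at the core of such a solver looks as follows; this is a generic numpy sketch for a symmetric positive definite system, with the matrix A and the preconditioner solve M_inv standing in for the paper's normal-equations blocks:

    import numpy as np

    def pcg(A, b, M_inv, tol=1e-8, max_iter=200):
        """Preconditioned conjugate gradient for A x = b (A symmetric
        positive definite). M_inv(r) applies the preconditioner's inverse."""
        x = np.zeros_like(b, dtype=float)
        r = b - A @ x
        z = M_inv(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

A good preconditioner makes M_inv cheap to apply while clustering the eigenvalues of the preconditioned system, which is exactly where the geometrical properties studied here enter.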

Local Nonglobal Minima for Solving Large Scale Extended Trust Region Subproblems

We study large-scale extended trust region subproblems (eTRS), i.e., the minimization of a general quadratic function subject to a norm constraint, known as the trust region subproblem (TRS), but with an additional linear inequality constraint. It is well known that strong duality holds for the TRS and that there are efficient algorithms for solving …
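
In standard notation, the eTRS can be written as the following sketch (symbols generic):

    \min_{x \in \mathbb{R}^n} \; \tfrac{1}{2} x^T Q x + c^T x
    \quad \text{s.t.} \quad \|x\| \le \Delta, \quad a^T x \le b,

where dropping the linear inequality a^T x \le b recovers the classical TRS.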

ADMM for the SDP relaxation of the QAP

The semidefinite programming (SDP) relaxation has proven to be extremely strong for many hard discrete optimization problems. This is particularly true for the quadratic assignment problem (QAP), arguably one of the hardest NP-hard discrete optimization problems. Several difficulties arise in efficiently solving the SDP relaxation, e.g., increased dimension; inefficiency of the …
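
For orientation, the textbook two-block scaled-form ADMM iteration for \min f(x) + g(z) s.t. Ax + Bz = c is

    x^{k+1} = \arg\min_x \, f(x) + \tfrac{\rho}{2}\|Ax + Bz^k - c + u^k\|^2,
    z^{k+1} = \arg\min_z \, g(z) + \tfrac{\rho}{2}\|Ax^{k+1} + Bz - c + u^k\|^2,
    u^{k+1} = u^k + Ax^{k+1} + Bz^{k+1} - c;

the splitting actually applied to the SDP relaxation of the QAP is a specialization of this template, not reproduced here.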

On the Convergence of Multi-Block Alternating Direction Method of Multipliers and Block Coordinate Descent Method

The paper answers several open questions about the alternating direction method of multipliers (ADMM) and the block coordinate descent (BCD) method, which are now widely used to solve large-scale convex optimization problems in many fields. For ADMM, theoretical understanding of the algorithm is still lacking when the objective function is not separable …
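
As a reminder of the second method discussed, here is a minimal cyclic BCD loop for a smooth convex quadratic; this is a generic sketch (assuming Q symmetric positive definite), not the paper's setting:

    import numpy as np

    def bcd_quadratic(Q, c, blocks, x0, n_iter=100):
        """Cyclic block coordinate descent for min 0.5 x'Qx + c'x.
        Each pass solves every block's subproblem exactly."""
        x = x0.astype(float)
        n = len(x)
        for _ in range(n_iter):
            for idx in blocks:  # idx: integer index array for one block
                rest = np.setdiff1d(np.arange(n), idx)
                # Block optimality: Q[idx,idx] x_idx = -(c[idx] + Q[idx,rest] x_rest)
                rhs = -(c[idx] + Q[np.ix_(idx, rest)] @ x[rest])
                x[idx] = np.linalg.solve(Q[np.ix_(idx, idx)], rhs)
        return x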

A Two-Level Approach to Large Mixed-Integer Programs with Application to Cogeneration in Energy-Efficient Buildings

We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model (coarsened with respect to variables) and a coarse model (coarsened with respect to both variables and constraints). We coarsen binary variables by selecting a small number …
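
One common way to coarsen binary variables, consistent with the description here though the paper's exact selection rule is truncated above, is to tie each fine binary to a representative coarse one:

    x_i = y_{g(i)}, \quad i = 1, \dots, N,

where g maps the N fine variables onto m \ll N coarse variables y \in \{0,1\}^m, shrinking the binary search space from 2^N to 2^m.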

A Filter SQP Method: Local Convergence and Numerical Results

The work by Gould, Loh, and Robinson [“A filter method with unified step computation for nonlinear optimization”, SIAM J. Optim., 24 (2014), pp. 175–209] established global convergence of a new filter line-search method for finding local first-order solutions to nonlinear and nonconvex constrained optimization problems. A key contribution of that work was that the …
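
The basic filter mechanism such methods build on can be sketched in a few lines; the margins beta and gamma below are generic, and this is the standard acceptance test rather than the unified step computation of Gould, Loh, and Robinson:

    def acceptable_to_filter(h, f, filter_entries, beta=0.99, gamma=1e-4):
        """A trial pair (h, f) of constraint violation and objective value
        is acceptable if no stored pair dominates it (with margins)."""
        return all(h <= beta * h_j or f <= f_j - gamma * h_j
                   for (h_j, f_j) in filter_entries)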

Globally Convergent Primal-Dual Active-Set Methods with Inexact Subproblem Solves

We propose primal-dual active-set (PDAS) methods for solving large-scale instances of an important class of convex quadratic optimization problems (QPs). The iterates of the algorithms are partitions of the index set of variables, where corresponding to each partition there exist unique primal-dual variables that can be obtained by solving a (reduced) linear system. Algorithms of …
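
A minimal PDAS iteration for the simplest member of this problem class, the nonnegativity-constrained QP min 0.5 x'Qx + c'x s.t. x >= 0, runs as follows; this is a generic exact-solve sketch (assuming Q symmetric positive definite), whereas the paper's contribution is precisely to allow inexact subproblem solves:

    import numpy as np

    def pdas_nonneg_qp(Q, c, rho=1.0, max_iter=50):
        """Primal-dual active-set iteration: guess which variables sit at
        their bound, solve the reduced linear system on the rest, repeat."""
        n = len(c)
        x = np.zeros(n)
        mu = Q @ x + c
        for _ in range(max_iter):
            active = mu - rho * x > 0            # predicted at the bound x_i = 0
            inactive = ~active
            x_new = np.zeros(n)
            if inactive.any():
                x_new[inactive] = np.linalg.solve(
                    Q[np.ix_(inactive, inactive)], -c[inactive])
            mu_new = Q @ x_new + c
            mu_new[inactive] = 0.0
            if np.array_equal(active, mu_new - rho * x_new > 0):
                return x_new, mu_new             # partition stabilized: KKT point
            x, mu = x_new, mu_new
        return x, mu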

An Inertia-Free Filter Line-Search Algorithm for Large-Scale Nonlinear Programming

We present a filter line-search algorithm that does not require inertia information about the linear system to ensure global convergence. The proposed approach performs curvature tests along the search step to ensure descent. This feature permits more modularity in the linear algebra, enabling the use of a wider range of iterative and decomposition strategies. We …
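
A curvature test of the kind described can be as simple as the following check (a generic condition in the spirit of the abstract; the exact test and the response when it fails are the paper's own):

    import numpy as np

    def curvature_test(W, d, theta=1e-8):
        """Accept the step direction d only if the (possibly indefinite)
        matrix W exhibits sufficient positive curvature along d."""
        return float(d @ (W @ d)) >= theta * float(d @ d)

    W = np.array([[2.0, 0.0], [0.0, -1.0]])          # indefinite Hessian
    print(curvature_test(W, np.array([1.0, 0.0])))   # True: positive curvature
    print(curvature_test(W, np.array([0.0, 1.0])))   # False: negative curvature

No inertia computation is needed: only a matrix-vector product along the actual search step, which is what permits the wider range of linear algebra strategies mentioned.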

Adaptive Augmented Lagrangian Methods: Algorithms and Practical Numerical Experience

In this paper, we consider augmented Lagrangian (AL) algorithms for solving large-scale nonlinear optimization problems that execute adaptive strategies for updating the penalty parameter. Our work is motivated by the recently proposed adaptive AL trust region method by Curtis et al. [An adaptive augmented Lagrangian method for large-scale constrained optimization, Math. Program. 152 (2015), pp. 201–245]. …
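
For context, the AL function minimized at each (approximate) subproblem solve, for equality constraints c(x) = 0 and in one common sign convention, is

    L_A(x, y; \rho) = f(x) - y^T c(x) + \tfrac{\rho}{2}\|c(x)\|_2^2,

with the multiplier update y^{k+1} = y^k - \rho_k c(x^{k+1}); the adaptive rules for the penalty parameter \rho_k are the subject of the paper and are not reproduced here.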