Iteration-complexity of a Rockafellar’s proximal method of multipliers for convex programming based on second-order approximations

This paper studies the iteration-complexity of a new primal-dual algorithm based on Rockafellar’s proximal method of multipliers (PMM) for solving smooth convex programming problems with inequality constraints. In each step, either a step of Rockafellar’s PMM for a second-order model of the problem is computed or a relaxed extragradient step is performed. The resulting algorithm …
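As a point of reference for the PMM steps the abstract mentions, below is a minimal sketch of the classical proximal method of multipliers on a toy problem $\min f(x)$ s.t. $g(x) \le 0$. The problem data, the inner solver (BFGS via scipy), and the stopping rule are illustrative choices of ours; the paper’s second-order model and extragradient safeguard are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2   # smooth convex objective
g = lambda x: np.array([x[0] + x[1] - 1.0])            # inequality constraint g(x) <= 0

def pmm(x, y, lam=1.0, iters=50, tol=1e-8):
    """Classical PMM: proximal augmented-Lagrangian step, then multiplier update."""
    for _ in range(iters):
        def prox_al(z):
            # Augmented Lagrangian for inequality constraints plus the
            # primal proximal term (1/(2*lam)) * ||z - x||^2.
            shifted = np.maximum(0.0, y + lam * g(z))
            return (f(z)
                    + (np.sum(shifted ** 2) - np.sum(y ** 2)) / (2.0 * lam)
                    + np.sum((z - x) ** 2) / (2.0 * lam))
        x_new = minimize(prox_al, x, method="BFGS").x
        y_new = np.maximum(0.0, y + lam * g(x_new))    # multiplier update
        if np.linalg.norm(x_new - x) + np.linalg.norm(y_new - y) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y

x_star, y_star = pmm(np.zeros(2), np.zeros(1))
print("x* =", x_star, "y* =", y_star)                  # expect x* near (2, -1)
```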

A Dual Gradient-Projection Method for Large-Scale Strictly Convex Quadratic Problems

The details of a solver for minimizing a strictly convex quadratic objective function subject to general linear constraints are presented. The method uses a gradient projection algorithm enhanced with subspace acceleration to solve the bound-constrained dual optimization problem. Such gradient projection methods are well known, but are typically employed to solve the primal problem when only …
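For $\min \tfrac12 x^T H x + c^T x$ s.t. $Ax \ge b$ with $H \succ 0$, dualizing gives the bound-constrained concave problem $\max_{y \ge 0}\; b^T y - \tfrac12 (A^T y - c)^T H^{-1} (A^T y - c)$, so the projection is a simple clip at zero. A minimal gradient-projection sketch on illustrative data follows; the subspace acceleration described in the abstract is omitted.

```python
import numpy as np

H = np.array([[4.0, 1.0], [1.0, 3.0]])   # H symmetric positive definite
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])               # constraint  A x >= b
b = np.array([1.0])

Hinv = np.linalg.inv(H)
x_of = lambda y: Hinv @ (A.T @ y - c)    # primal point recovered from dual y

y = np.zeros(1)
L = np.linalg.norm(A @ Hinv @ A.T, 2)    # Lipschitz constant of the dual gradient
for _ in range(500):
    grad = b - A @ x_of(y)               # gradient of the dual objective
    y = np.maximum(0.0, y + grad / L)    # projected (clipped) ascent step

print("x* =", x_of(y), "y* =", y)        # expect x* = (0.2, 0.8), y* = 0.6
```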

A Reduced-Space Algorithm for Minimizing $\ell_1$-Regularized Convex Functions

We present a new method for minimizing the sum of a differentiable convex function and an $\ell_1$-norm regularizer. The main features of the new method include: $(i)$ an evolving set of indices corresponding to variables that are predicted to be nonzero at a solution (i.e., the support); $(ii)$ a reduced-space subproblem defined in terms of …
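To make feature $(i)$ concrete, here is a minimal sketch that tracks a predicted support while running plain proximal-gradient (ISTA) steps on $f(x) = \tfrac12\|Ax - b\|^2$ with regularizer $\mu\|x\|_1$. The data are random and illustrative; the paper’s reduced-space subproblem and its solver are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
mu = 0.5
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of grad f

def soft(v, t):
    # Soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(100)
for _ in range(300):
    x = soft(x - step * (A.T @ (A @ x - b)), step * mu)
support = np.flatnonzero(x)              # indices predicted to be nonzero
print("predicted support size:", support.size)
```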

Generation of Feasible Integer Solutions on a Massively Parallel Computer

We present an approach to parallelize the generation of feasible solutions of mixed integer linear programs in distributed memory high performance computing environments. The approach combines a parallel framework with the feasibility pump (FP) as the rounding heuristic. The proposed approach runs multiple FP instances with different starting solutions concurrently, while allowing them to share information. …
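For context, here is a serial sketch of the basic FP rounding loop on a toy binary program with illustrative data; the paper’s contribution, running many such loops concurrently with information sharing, is not shown.

```python
import numpy as np
from scipy.optimize import linprog

A_ub = np.array([[2.0, 1.0], [1.0, 3.0]])
b_ub = np.array([2.5, 3.5])
c = np.array([-1.0, -1.0])                        # original objective, used once

x = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 2).x
x_round = np.round(x)
rng = np.random.default_rng(0)
for _ in range(50):
    if np.all(A_ub @ x_round <= b_ub + 1e-9):
        print("feasible integer point:", x_round)
        break
    # Pump step: move to the LP point closest to the rounding. For binaries,
    # ||x - x_round||_1 is linear: minimize x_i where x_round_i = 0 and
    # (1 - x_i) where x_round_i = 1.
    dist_c = np.where(x_round > 0.5, -1.0, 1.0)
    x = linprog(dist_c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 2).x
    new_round = np.round(x)
    if np.array_equal(new_round, x_round):        # cycle detected: random flip
        i = rng.integers(x_round.size)
        new_round[i] = 1.0 - new_round[i]
    x_round = new_round
```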

Risk Averse Shortest Path Interdiction

We consider a Stackelberg game in a network, where a leader minimizes the cost of interdicting arcs and a follower seeks the shortest distance between given origin and destination nodes under uncertain arc travel costs. In particular, we consider a risk-averse leader who aims to keep the probability high that the follower’s traveling distance is longer …
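One generic chance-constrained reading of the leader’s problem (the notation here is ours, not necessarily the paper’s): let $x_a = 1$ if arc $a$ is interdicted at cost $c_a$, let $d(x,\xi)$ be the follower’s shortest origin-destination distance under cost scenario $\xi$, and fix a threshold $T$ and risk level $\epsilon$. Then
$$\min_{x \in \{0,1\}^{|A|}} \; \sum_{a \in A} c_a x_a \quad \text{s.t.} \quad \mathbb{P}\bigl(d(x,\xi) \ge T\bigr) \ge 1 - \epsilon,$$
which keeps the follower’s distance above $T$ with probability at least $1 - \epsilon$.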

A dynamic programming approach for a class of robust optimization problems

Common approaches to solving a robust optimization problem decompose it into a master problem (MP) and adversarial separation problems (APs). The MP contains the original robust constraints, but written only for a finite set of scenarios. Additional scenarios are generated on the fly by solving the APs. In this work we consider the budgeted uncertainty polytope …
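A minimal sketch of the MP/AP loop for a single robust constraint $\sum_i (a_i + \delta_i z_i) x_i \le b$ under the budgeted set $\{0 \le z \le 1,\ \sum_i z_i \le \Gamma\}$ follows. The data are illustrative, the MP is a small LP, and this toy AP is solved greedily; for the problem class in the paper, the APs are solved by dynamic programming.

```python
import numpy as np
from scipy.optimize import linprog

a = np.array([3.0, 2.0, 4.0])            # nominal row coefficients
delta = np.array([1.0, 2.0, 1.5])        # maximal deviations
b, Gamma = 6.0, 1                        # right-hand side and uncertainty budget
c = -np.ones(3)                          # maximize sum_i x_i

scenarios = [np.zeros(3)]                # start from the nominal scenario only
for _ in range(10):
    # MP: enforce the robust constraint for the scenarios found so far.
    A_ub = np.array([a + delta * z for z in scenarios])
    x = linprog(c, A_ub=A_ub, b_ub=np.full(len(scenarios), b),
                bounds=[(0, 1)] * 3).x
    # AP: the worst scenario sets the Gamma largest deviations delta_i*x_i to 1.
    z = np.zeros(3)
    z[np.argsort(-delta * x)[:Gamma]] = 1.0
    if (a + delta * z) @ x <= b + 1e-9:
        break                            # x is robust feasible: stop
    scenarios.append(z)
print("robust solution:", x)
```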

Numerical Solution of Linear-Quadratic Optimal Control Problems for Switching Systems

In this paper we present an approach to the optimal switching control problem with unknown switching points, as described in references [1, 2]. In [1], the authors studied the decomposition of linear-quadratic optimal control problems for two-step systems, and assumed that the switching point $t_1$ is fixed in the interval …

Gradient Descent only Converges to Minimizers

We show that gradient descent converges to a local minimizer, almost surely with random initialization. This is proved by applying the Stable Manifold Theorem from dynamical systems theory.
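A quick numerical illustration of the statement (not of the proof): on $f(x, y) = (x^2 - 1)^2/4 + y^2/2$, the origin is a strict saddle and $(\pm 1, 0)$ are the minimizers, and gradient descent from random initial points lands at a minimizer in every run.

```python
import numpy as np

# f(x, y) = (x^2 - 1)^2 / 4 + y^2 / 2; its gradient:
grad = lambda p: np.array([p[0] * (p[0] ** 2 - 1.0), p[1]])

rng = np.random.default_rng(1)
ends = []
for _ in range(1000):
    p = rng.uniform(-2.0, 2.0, size=2)   # random initialization
    for _ in range(2000):
        p = p - 0.1 * grad(p)            # plain gradient descent
    ends.append(p)
ends = np.array(ends)
# Every run lands near a minimizer (+-1, 0); none stalls at the saddle (0, 0).
print(np.all(np.abs(np.abs(ends[:, 0]) - 1.0) < 1e-6),
      np.all(np.abs(ends[:, 1]) < 1e-6))
```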

Facial reduction heuristics and the motivational example of mixed-integer conic optimization

Facial reduction heuristics are developed in the interest of added performance and reliability in methods for mixed-integer conic optimization. Specifically, the process of branch-and-bound is shown to spawn subproblems for which the conic relaxations are difficult to solve, and the objective bounds of linear relaxations are arbitrarily weak. While facial reduction algorithms already exist to …

On the convergence of stochastic bi-level gradient methods

We analyze the convergence of stochastic gradient methods for bi-level optimization problems. We address two specific cases: first when the outer objective function can be expressed as a finite sum of independent terms, and next when both the outer and inner objective functions can be expressed as finite sums of independent terms. We assume Lipschitz …
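To fix ideas for the second setting (both levels finite sums), here is a toy sketch of a double-loop stochastic scheme on a problem whose inner solution is available in closed form. The step sizes, sampling, and hypergradient approximation are illustrative guesses of ours, not the paper’s update rules.

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.uniform(0.5, 1.5, size=50)       # inner-sum data (m terms)
b = rng.normal(3.0, 0.1, size=80)        # outer-sum data (n terms)
# Inner:  y*(x) = argmin_y (1/m) sum_j (y - a_j x)^2  =>  y*(x) = mean(a) * x
# Outer:  min_x (1/n) sum_i (y*(x) - b_i)^2           =>  x*   = mean(b) / mean(a)

x, y = 0.0, 0.0
for _ in range(5000):
    for _ in range(5):                   # a few stochastic steps on the inner problem
        j = rng.integers(a.size)
        y -= 0.1 * 2.0 * (y - a[j] * x)
    i = rng.integers(b.size)             # stochastic outer step via the chain rule,
    j = rng.integers(a.size)             # with a sampled a_j standing in for dy*/dx
    x -= 0.01 * 2.0 * (y - b[i]) * a[j]

print("x =", x, " vs  x* =", np.mean(b) / np.mean(a))
```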