Simplified Versions of the Conditional Gradient Method

We suggest simple modifications of the conditional gradient method for smooth optimization problems that maintain its basic convergence properties but substantially reduce the implementation cost of each iteration. Namely, we propose a step-size procedure without any line search, together with inexact solution of the direction-finding subproblem. Preliminary results of computational tests confirm the efficiency of the proposed …
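
As a point of reference, the following is a minimal sketch of a conditional gradient (Frank-Wolfe) iteration with the classical predefined step size gamma_k = 2/(k+2) and no line search. The least-squares objective and the probability-simplex feasible set are illustrative assumptions, not taken from the paper.

```python
# Conditional gradient (Frank-Wolfe) with a predefined step size and no line search,
# on an assumed problem: min 0.5*||A x - b||^2 over the probability simplex.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)

def grad(x):
    # gradient of f(x) = 0.5 * ||A x - b||^2
    return A.T @ (A @ x - b)

x = np.full(10, 0.1)               # start at the barycentre of the simplex
for k in range(200):
    g = grad(x)
    s = np.zeros(10)
    s[np.argmin(g)] = 1.0          # linear minimization oracle over the simplex
    gamma = 2.0 / (k + 2)          # predefined step size, no line search
    x = (1 - gamma) * x + gamma * s

print("objective value:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```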

Tight-and-cheap conic relaxation for the AC optimal power flow problem

The classical alternating current optimal power flow problem is highly nonconvex and generally hard to solve. Convex relaxations, in particular semidefinite, second-order cone, convex quadratic, and linear relaxations, have recently attracted significant interest. The semidefinite relaxation is the strongest among them and is exact for many cases. However, the computational efficiency for solving large-scale semidefinite …
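
To make the lifting idea behind semidefinite relaxations concrete, here is a hedged sketch on a made-up quadratic problem (not the AC OPF model from the paper): the rank-one matrix x x^T is replaced by a PSD variable X and the rank constraint is dropped, exactly the mechanism by which V V^H is lifted in the OPF relaxation.

```python
# Generic semidefinite (lift-and-relax) sketch for min x^T Q x with ||x||^2 = 1;
# data Q is an illustrative assumption, not power flow data.
import cvxpy as cp
import numpy as np

Q = np.diag([1.0, 2.0, 3.0])                    # assumed objective matrix
n = Q.shape[0]

# X stands in for the rank-one matrix x x^T; keeping only X >> 0 (PSD)
# and dropping the rank-one requirement gives the semidefinite relaxation.
X = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)),
                  [cp.trace(X) == 1.0])         # relaxed version of ||x||^2 = 1
prob.solve()
print("SDP relaxation value:", prob.value)
```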

The Maximum Clique Interdiction Problem

Given a graph G and an interdiction budget k, the Maximum Clique Interdiction Problem asks to find a subset of at most k vertices to remove from G so that the size of the maximum clique in the remaining graph is minimized. This problem has applications in many areas, such as crime detection, prevention of …
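
To make the problem definition concrete, here is a brute-force sketch on a tiny assumed instance: enumerate every vertex subset of size at most k, delete it, and keep the subset minimizing the clique number of the remaining graph. This is only illustrative and nothing like the exact approach one would use on realistic instances.

```python
# Brute-force Maximum Clique Interdiction on a small graph (illustrative only).
from itertools import combinations
import networkx as nx

G = nx.karate_club_graph()     # assumed toy instance, not from the paper
k = 2                          # interdiction budget

def clique_number(H):
    return max((len(c) for c in nx.find_cliques(H)), default=0)

best_subset, best_value = (), clique_number(G)
for size in range(k + 1):
    for subset in combinations(G.nodes, size):
        value = clique_number(G.subgraph(set(G.nodes) - set(subset)))
        if value < best_value:
            best_subset, best_value = subset, value

print("vertices to remove:", best_subset, "remaining clique number:", best_value)
```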

Iterative weighted thresholding method for sparse solution of underdetermined linear equations

Recently, iterative reweighted methods have attracted much interest in compressed sensing, since they perform better than unweighted ones in most cases. Currently, weights are chosen heuristically in existing iterative reweighted methods, and finding an optimal weight is an open problem since we do not know the exact support set beforehand. In this paper, we present …
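
For context, the sketch below shows a common iterative reweighted soft-thresholding scheme for recovering a sparse solution of an underdetermined system A x = b; the weight update w_i = 1/(|x_i| + eps) is the usual heuristic the abstract alludes to, and the specific weighting proposed in the paper may differ.

```python
# Iterative reweighted soft thresholding (weighted ISTA) on assumed random data.
import numpy as np

rng = np.random.default_rng(1)
m, n, sparsity = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, sparsity, replace=False)] = rng.standard_normal(sparsity)
b = A @ x_true

L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient of 0.5*||Ax-b||^2
lam, eps = 1e-3, 1e-2
x, w = np.zeros(n), np.ones(n)
for outer in range(10):            # reweighting loop
    for _ in range(200):           # weighted proximal-gradient (ISTA) inner loop
        z = x - A.T @ (A @ x - b) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    w = 1.0 / (np.abs(x) + eps)    # heavier penalty on entries that are currently small

print("recovery error:", np.linalg.norm(x - x_true))
```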

A forward-backward penalty scheme with inertial effects for monotone inclusions. Applications to convex bilevel programming

We investigate a forward-backward splitting algorithm of penalty type with inertial effects for finding the zeros of the sum of a maximally monotone operator, a cocoercive operator, and the normal cone to the convex set of zeros of another cocoercive operator. Weak ergodic convergence is obtained for the iterates, provided that a condition expressed …
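
As a stripped-down reference, the sketch below shows only the inertial forward-backward backbone (gradient step on the smooth part, proximal step on the nonsmooth part, plus an inertial extrapolation) on an assumed lasso-type instance; the penalty term acting on the second cocoercive operator in the paper is not modelled here.

```python
# Inertial forward-backward splitting for min f(x) + g(x), with f smooth and
# g handled via its proximal map; problem data is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 50))
b = rng.standard_normal(20)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
step, alpha = 1.0 / L, 0.3             # step size and fixed inertial parameter

def prox_g(v, t):
    # proximal map of g(x) = lam * ||x||_1 (soft thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

x_prev = x = np.zeros(50)
for k in range(500):
    y = x + alpha * (x - x_prev)       # inertial extrapolation
    x_prev, x = x, prox_g(y - step * (A.T @ (A @ y - b)), step)

print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
```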

Inexact cuts in Deterministic and Stochastic Dual Dynamic Programming applied to linear optimization problems

We introduce an extension of Dual Dynamic Programming (DDP) to solve linear dynamic programming equations. We call this extension IDDP-LP; it applies to situations where some or all of the primal and dual subproblems to be solved along the iterations of the method are solved with a bounded error (inexactly). We provide convergence theorems both in the …
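
To illustrate the cutting-plane mechanism that DDP builds on, here is a toy two-stage linear problem with exact cuts (the paper's IDDP-LP replaces these by cuts computed with a bounded error, which is not modelled here); the instance and its closed-form second stage are assumptions made for the sketch.

```python
# Benders/DDP-style cuts for  min x + Q(x), 0 <= x <= 10,
# with Q(x) = min {3 y : y >= d - x, y >= 0} and d = 7.
import numpy as np
from scipy.optimize import linprog

d, c2 = 7.0, 3.0
cuts = []                                   # stored dual multipliers pi
for it in range(10):
    # master: min x + theta  s.t.  theta >= pi*(d - x) for each cut
    A_ub = [[-pi, -1.0] for pi in cuts] or None
    b_ub = [-pi * d for pi in cuts] or None
    res = linprog(c=[1.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, 10), (0, None)], method="highs")
    x, theta = res.x
    # second stage in closed form: Q(x) = 3*max(d - x, 0), optimal dual pi
    Q = c2 * max(d - x, 0.0)
    pi = c2 if x < d else 0.0
    if Q <= theta + 1e-8:                   # cut model matches the true recourse cost
        break
    cuts.append(pi)                         # add the exact cut Q(z) >= pi*(d - z)

print("x =", x, " first-stage + recourse cost =", x + Q)
```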

Convergence Analysis of Sample Average Approximation of Two-stage Stochastic Generalized Equations

A solution of two-stage stochastic generalized equations is a pair: a first-stage solution, which is independent of the realization of the random data, and a second-stage solution, which is a function of the random variables. This paper studies convergence of the sample average approximation of two-stage stochastic nonlinear generalized equations. In particular, an exponential rate …
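
As a bare-bones illustration of sample average approximation (SAA) for a stochastic generalized equation, consider the assumed toy model 0 in E[x - xi] + N_C(x) with C = [0, +inf), whose true solution is max(E[xi], 0); SAA simply replaces the expectation by a sample mean. The paper treats the far more general two-stage setting and its convergence rates.

```python
# SAA for the toy generalized equation 0 in E[x - xi] + N_C(x), C = [0, +inf).
import numpy as np

rng = np.random.default_rng(3)
mu = -0.2                                   # E[xi]; true solution is max(mu, 0) = 0
true_x = max(mu, 0.0)

for N in (10, 100, 1000, 10000):
    xi = rng.normal(loc=mu, scale=1.0, size=N)
    x_saa = max(xi.mean(), 0.0)             # solves 0 in (1/N) * sum(x - xi_i) + N_C(x)
    print(f"N = {N:5d}  SAA solution = {x_saa:.4f}  error = {abs(x_saa - true_x):.4f}")
```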

Bounding and Counting Linear Regions of Deep Neural Networks

We investigate the complexity of deep neural networks (DNN) that represent piecewise linear (PWL) functions. In particular, we study the number of linear regions, i.e., pieces, that a PWL function represented by a DNN can attain, both theoretically and empirically. We present (i) tighter upper and lower bounds for the maximum number of linear regions …
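
A simple empirical angle on this: inputs that share the same ReLU activation pattern lie in the same linear region, so counting distinct patterns over a fine input grid gives a lower bound on the number of regions. The architecture and random weights below are assumptions for illustration; the paper develops exact counting and theoretical bounds.

```python
# Empirical lower bound on the number of linear regions of a small random ReLU net.
import numpy as np

rng = np.random.default_rng(4)
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)   # layer 1: 2 -> 8
W2, b2 = rng.standard_normal((8, 8)), rng.standard_normal(8)   # layer 2: 8 -> 8

grid = np.linspace(-2, 2, 200)
patterns = set()
for x1 in grid:
    for x2 in grid:
        h1 = W1 @ np.array([x1, x2]) + b1
        h2 = W2 @ np.maximum(h1, 0.0) + b2
        patterns.add((tuple(h1 > 0), tuple(h2 > 0)))   # the activation pattern

print("distinct activation patterns (lower bound on # linear regions):", len(patterns))
```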

Large neighbourhood Benders’ search

A general enhancement of the Benders’ decomposition algorithm can be achieved through the improved use of large neighbourhood search heuristics within mixed-integer programming solvers. While mixed-integer programming solvers are endowed with an array of large neighbourhood search heuristics, their use is typically limited to finding solutions to the Benders’ decomposition master problem, which may be …
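
To make the large neighbourhood search (LNS) idea concrete outside a MIP solver, the sketch below repeatedly fixes most variables of an incumbent and re-optimizes only a small freed subset, here on an assumed toy 0-1 knapsack with brute-force repair; inside a solver, the corresponding sub-MIP (and the Benders' master) would be handled by the solver itself.

```python
# Destroy-and-repair large neighbourhood search on a toy knapsack (illustrative only).
from itertools import product
import numpy as np

rng = np.random.default_rng(5)
n = 20
value, weight, capacity = rng.integers(1, 20, n), rng.integers(1, 20, n), 60
x = np.zeros(n, dtype=int)                       # trivial incumbent: empty knapsack

def objective(sol):
    return sol @ value if sol @ weight <= capacity else -np.inf

for _ in range(30):                              # LNS iterations
    free = rng.choice(n, size=6, replace=False)  # destroy: unfix 6 variables
    best = x.copy()
    for bits in product((0, 1), repeat=len(free)):   # repair: brute-force the small subproblem
        trial = x.copy()
        trial[free] = bits
        if objective(trial) > objective(best):
            best = trial
    x = best

print("LNS incumbent value:", objective(x), "weight:", x @ weight)
```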

Convergence rates of proximal gradient methods via the convex conjugate

We give a novel proof of the $O(1/k)$ and $O(1/k^2)$ convergence rates of the proximal gradient and accelerated proximal gradient methods for composite convex minimization. The crux of the new proof is an upper bound constructed via the convex conjugate of the objective function. Citation: Technical Report, Carnegie Mellon University, January 2018.
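
For reference, these are the standard (textbook-form) statements of the rates in question, not quoted from the report: for $F = f + g$ with $f$ convex and $L$-smooth, $g$ convex with an inexpensive proximal map, and minimizer $x^*$ with optimal value $F^*$,

```latex
\[
  x_{k+1} = \operatorname{prox}_{\frac{1}{L} g}\!\Bigl(x_k - \tfrac{1}{L}\nabla f(x_k)\Bigr),
  \qquad
  F(x_k) - F^* \le \frac{L\,\|x_0 - x^*\|^2}{2k},
\]
while the accelerated variant satisfies
\[
  F(x_k) - F^* \le \frac{2L\,\|x_0 - x^*\|^2}{(k+1)^2}.
\]
```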