Relaxations and Duality for Multiobjective Integer Programming

Multiobjective integer programs (MOIPs) simultaneously optimize multiple objective functions over a set of linear constraints and integer variables. In this paper, we present continuous, convex hull and Lagrangian relaxations for MOIPs and examine the relationship among them. The convex hull relaxation is tight at supported solutions, i.e., those that can be derived via a …
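For readers unfamiliar with supported solutions, the following is a minimal, self-contained sketch; the tiny bi-objective knapsack instance and the brute-force enumeration are illustrative assumptions, not taken from the paper. It recovers supported nondominated points as optimizers of weighted-sum scalarizations of the two objectives.

```python
# Illustrative sketch (not from the paper): supported solutions of a tiny
# bi-objective 0-1 knapsack, found via weighted-sum scalarization.
from itertools import product

profits1 = [4, 3, 5]   # first objective coefficients (maximize)
profits2 = [2, 6, 1]   # second objective coefficients (maximize)
weights  = [3, 4, 2]
capacity = 6

# Enumerate all feasible 0-1 vectors and their objective values.
feasible = []
for x in product([0, 1], repeat=3):
    if sum(w * xi for w, xi in zip(weights, x)) <= capacity:
        f1 = sum(p * xi for p, xi in zip(profits1, x))
        f2 = sum(p * xi for p, xi in zip(profits2, x))
        feasible.append((x, f1, f2))

# Supported solutions: optimal for some strictly positive weighting of the objectives.
supported = set()
for k in range(1, 100):
    lam = k / 100.0
    best = max(feasible, key=lambda t: lam * t[1] + (1 - lam) * t[2])
    supported.add(best[0])

print("supported solutions:", sorted(supported))
```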

Using Taylor-Approximated Gradients to Improve the Frank-Wolfe Method for Empirical Risk Minimization

The Frank-Wolfe method has become increasingly useful in statistical and machine learning applications, due to the structure-inducing properties of the iterates, and especially in settings where linear minimization over the feasible set is more computationally efficient than projection. In the setting of Empirical Risk Minimization — one of the fundamental optimization problems in statistical …
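As background, here is a minimal vanilla Frank-Wolfe sketch for a least-squares ERM problem over an \(\ell_1\)-ball, where the linear minimization oracle simply returns a signed coordinate vertex; the toy data, radius, and open-loop step size are assumptions, and the paper's Taylor-approximated gradient variant is not reproduced here.

```python
# Illustrative sketch (assumed toy data): vanilla Frank-Wolfe for least-squares
# ERM over an l1-ball, where linear minimization returns a signed vertex.
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.standard_normal((200, 50)), rng.standard_normal(200)
radius = 5.0                      # l1-ball radius (assumption)
x = np.zeros(50)

for t in range(200):
    grad = A.T @ (A @ x - b) / len(b)          # full ERM gradient
    i = int(np.argmax(np.abs(grad)))           # LMO over the l1-ball:
    s = np.zeros(50)                           #   vertex -radius*sign(grad_i)*e_i
    s[i] = -radius * np.sign(grad[i])
    gamma = 2.0 / (t + 2)                      # standard open-loop step size
    x = (1 - gamma) * x + gamma * s

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))
```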

On the Relation Between Affinely Adjustable Robust Linear Complementarity and Mixed-Integer Linear Feasibility Problems

We consider adjustable robust linear complementarity problems and extend the results of Biefel et al. (2022) towards convex and compact uncertainty sets. Moreover, for the case of polyhedral uncertainty sets, we prove that computing an adjustable robust solution of a given linear complementarity problem is equivalent to solving a properly chosen mixed-integer linear feasibility problem.
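For context, the classical (non-robust) link between complementarity and mixed-integer feasibility can be sketched as follows; this is the textbook big-M reformulation of a bounded LCP \(w = Mz + q,\ z \ge 0,\ w \ge 0,\ z^\top w = 0\), stated only for orientation and not as the adjustable robust construction developed in the paper:
\[
w = Mz + q, \qquad 0 \le z \le \bar{M}\,u, \qquad 0 \le w \le \bar{M}\,(\mathbf{1}-u), \qquad u \in \{0,1\}^n,
\]
where \(\bar{M}\) is a valid upper bound on the entries of \(z\) and \(w\). Any feasible \((z, w, u)\) solves the LCP, because each binary \(u_i\) forces at least one member of the complementary pair \((z_i, w_i)\) to zero.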

Finite convergence of the inexact proximal gradient method to sharp minima

Attractive properties of subgradient methods, such as robust stability and linear convergence, have been emphasized when they are used to solve nonsmooth optimization problems with sharp minima [12, 13]. In this letter we extend the robustness results to composite convex models and show that the basic proximal gradient algorithm under the presence of a …
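As a reference point, a minimal proximal gradient (ISTA-style) sketch for the composite model \(\tfrac12\|Ax-b\|^2 + \lambda\|x\|_1\) is given below; the data and parameters are illustrative assumptions, and the inexactness and sharp-minimum analysis of the paper are not reproduced.

```python
# Illustrative sketch (assumed toy data): basic proximal gradient (ISTA) for the
# composite model 0.5*||Ax - b||^2 + lam*||x||_1, with a soft-thresholding prox.
import numpy as np

rng = np.random.default_rng(1)
A, b = rng.standard_normal((80, 30)), rng.standard_normal(80)
lam = 0.5
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
x = np.zeros(30)

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

for _ in range(500):
    grad = A.T @ (A @ x - b)                   # gradient of the smooth part
    x = soft_threshold(x - grad / L, lam / L)  # forward-backward (prox) step

print("nonzeros in solution:", int(np.count_nonzero(x)))
```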

On the first order optimization methods in Deep Image Prior

Deep learning methods achieve state-of-the-art performance in many image restoration tasks. Their effectiveness is mostly related to the size of the dataset used for training. Deep Image Prior (DIP) is an energy-function framework that eliminates the dependency on the training set by considering the structure of a neural network as a handcrafted prior …
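As a reminder of the DIP setup, here is a minimal PyTorch-style sketch; the tiny convolutional network, stand-in noisy image, and Adam optimizer are illustrative assumptions, and the specific first-order methods and stopping rules studied in the paper are not reproduced. A fixed random input is pushed through a randomly initialized network whose weights are fitted to a single corrupted image, with no training set involved.

```python
# Illustrative Deep Image Prior sketch (assumed toy network and data): fit network
# weights to a single noisy image, using the network structure itself as the prior.
import torch
import torch.nn as nn

torch.manual_seed(0)
noisy = torch.rand(1, 1, 32, 32)            # stand-in for a corrupted image
z = torch.randn(1, 8, 32, 32)               # fixed random input, never updated

net = nn.Sequential(                        # deliberately small, untrained network
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)   # first-order optimizer
for step in range(300):                     # early stopping acts as regularization
    opt.zero_grad()
    loss = ((net(z) - noisy) ** 2).mean()   # data-fit energy, no training set needed
    loss.backward()
    opt.step()

restored = net(z).detach()
```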

Software for data-based stochastic programming using bootstrap estimation

In this paper we describe software for stochastic programming that uses only sampled data to obtain both a consistent sample-average solution and a consistent estimate of confidence intervals for the optimality gap using bootstrap and bagging. Knowledge of the underlying distribution from which the samples are drawn is not required.
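To illustrate the general idea (not the package's API), here is a minimal sketch of bootstrapping an optimality-gap estimate around a sample-average solution, using a newsvendor-style problem whose sample-average solution is just a critical-ratio quantile; the cost parameters and demand data are assumptions.

```python
# Illustrative sketch (assumed newsvendor instance): sample-average solution plus a
# bootstrap bound on its optimality gap, using only sampled data.
import numpy as np

rng = np.random.default_rng(2)
demand = rng.exponential(scale=100.0, size=500)   # observed demand data (assumption)
cu, co = 4.0, 1.0                                 # underage / overage unit costs

def saa_order(sample):
    # SAA solution of the newsvendor problem is the critical-ratio sample quantile.
    return np.quantile(sample, cu / (cu + co))

def cost(q, sample):
    return np.mean(cu * np.maximum(sample - q, 0) + co * np.maximum(q - sample, 0))

q_hat = saa_order(demand)                         # consistent sample-average solution

# Bootstrap: resample the data, re-solve on each resample, and collect gap estimates.
boot_gaps = []
for _ in range(1000):
    resample = rng.choice(demand, size=len(demand), replace=True)
    q_star = saa_order(resample)                  # re-solved solution on the resample
    boot_gaps.append(cost(q_hat, resample) - cost(q_star, resample))

bound = np.percentile(boot_gaps, 95)
print(f"order quantity {q_hat:.1f}, 95% bootstrap bound on optimality gap: {bound:.2f}")
```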

Polynomial worst-case iteration complexity of quasi-Newton primal-dual interior point algorithms for linear programming

Quasi-Newton methods are well-known techniques for large-scale numerical optimization. They use an approximation of the Hessian in optimization problems, or of the Jacobian in systems of nonlinear equations. In the interior point context, quasi-Newton algorithms compute low-rank updates of the matrix associated with the Newton systems instead of computing it from scratch at every iteration. …
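To make the low-rank-update idea concrete outside the interior point setting, the sketch below applies Broyden's rank-one update to a generic nonlinear system: the Jacobian approximation is formed once and then updated cheaply at each iteration instead of being recomputed. The toy system is an assumption, and this is not the primal-dual algorithm analyzed in the paper.

```python
# Illustrative sketch (assumed toy system): Broyden's rank-one update maintains a
# Jacobian approximation instead of recomputing it at every iteration.
import numpy as np

def F(x):                                   # toy nonlinear system F(x) = 0
    return np.array([x[0] ** 2 + x[1] - 3.0,
                     x[0] + x[1] ** 2 - 5.0])

x = np.array([1.5, 1.5])
B = np.array([[2 * x[0], 1.0],              # exact Jacobian once, at the start;
              [1.0, 2 * x[1]]])             # afterwards only rank-one updates

for _ in range(30):
    step = np.linalg.solve(B, -F(x))        # quasi-Newton step with current B
    x_new = x + step
    y = F(x_new) - F(x)                     # secant information
    B += np.outer(y - B @ step, step) / (step @ step)   # Broyden rank-one update
    x = x_new

print("solution:", x, "residual:", F(x))
```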

Explicit convex hull description of bivariate quadratic sets with indicator variables

We consider the nonconvex set \(S_n = \{(x,X,z): X = x x^T,\; x(1-z) = 0,\; x \geq 0,\; z \in \{0,1\}^n\}\), which is closely related to the feasible region of several difficult nonconvex optimization problems such as best subset selection and constrained portfolio optimization. Utilizing ideas from convex analysis and disjunctive programming, …
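To make the bivariate case concrete, writing \(S_2\) componentwise from the definition above (a direct unpacking, not an additional result) gives
\[
S_2 = \left\{ (x, X, z) :\ X = \begin{pmatrix} x_1^2 & x_1 x_2 \\ x_1 x_2 & x_2^2 \end{pmatrix},\ x_1(1-z_1) = 0,\ x_2(1-z_2) = 0,\ x \ge 0,\ z \in \{0,1\}^2 \right\},
\]
so setting an indicator \(z_i = 0\) forces \(x_i = 0\) and zeroes out the corresponding row and column of \(X\); this on/off structure is what links the set to best subset selection and constrained portfolio optimization.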

On Optimal Universal First-Order Methods for Minimizing Heterogeneous Sums

This work considers minimizing a convex sum of functions, each with potentially different structure, ranging from nonsmooth to smooth and Lipschitz to non-Lipschitz. Nesterov’s universal fast gradient method provides an optimal black-box first-order method for minimizing a single function that takes advantage of any continuity structure present without requiring prior knowledge. In this paper, we show …
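For orientation, below is a minimal sketch of a (non-accelerated) universal gradient method with backtracking on an inexact descent condition, applied to a toy heterogeneous sum of a smooth quadratic and a nonsmooth \(\ell_1\) term treated as a single black-box function; the instance, target accuracy, and iteration count are assumptions, and the optimal method for sums developed in the paper is not reproduced.

```python
# Illustrative sketch (assumed toy objective): universal gradient method with
# backtracking on the inexact descent condition, applied to a heterogeneous sum of a
# smooth quadratic and a nonsmooth l1 term, accessed only through (sub)gradients.
import numpy as np

rng = np.random.default_rng(3)
A, b = rng.standard_normal((40, 20)), rng.standard_normal(40)

def f(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + np.sum(np.abs(x))

def subgrad(x):
    return A.T @ (A @ x - b) + np.sign(x)

eps = 1e-3                      # target accuracy driving the backtracking test
x, L = np.zeros(20), 1.0
for _ in range(500):
    g = subgrad(x)
    while True:                 # double L until the inexact descent condition holds
        y = x - g / L
        if f(y) <= f(x) + g @ (y - x) + 0.5 * L * np.sum((y - x) ** 2) + eps / 2:
            break
        L *= 2.0
    x, L = y, L / 2.0           # accept the step and optimistically shrink L

print("objective:", f(x))
```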