Counterfactual Explanations for Integer Optimization Problems

Counterfactual explanations (CEs) offer a human-understandable way to explain decisions by identifying specific changes to the input parameters of a base, or present, model that would lead to a desired change in its outcome. For optimization models, CEs have primarily been studied in limited contexts, such as linear optimization problems with continuous decision variables or … Read more
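
To make the idea concrete, here is a minimal, self-contained sketch (not the paper's method) of a counterfactual explanation for a toy 0-1 knapsack problem: we search for the smallest increase to one item's value under which the optimal solution changes to include a desired item. All problem data below are invented for illustration.

```python
from itertools import product

values, weights, capacity = [6.0, 5.0, 4.0], [3, 4, 2], 5

def solve(vals):
    """Brute-force the 0-1 knapsack; returns the optimal selection."""
    best, best_x = -1.0, None
    for x in product([0, 1], repeat=len(vals)):
        if sum(w * xi for w, xi in zip(weights, x)) <= capacity:
            obj = sum(v * xi for v, xi in zip(vals, x))
            if obj > best:
                best, best_x = obj, x
    return best_x

print("factual optimum:", solve(values))  # selects items 0 and 2

# Counterfactual question: how much must item 1's value rise before
# the optimal solution includes it?
for delta in [0.5 * k for k in range(41)]:
    perturbed = values.copy()
    perturbed[1] += delta
    if solve(perturbed)[1] == 1:
        print(f"counterfactual: raising value[1] by {delta} flips the optimum")
        break
```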

Active-set Newton-MR methods for nonconvex optimization problems with bound constraints

This paper presents active-set methods for minimizing nonconvex, twice continuously differentiable functions subject to bound constraints. Within the faces of the feasible set, we employ descent methods with an Armijo line search, using approximate Newton directions obtained through the Minimum Residual (MINRES) method. To escape the faces, we investigate the use of the Spectral Projected Gradient (SPG) … Read more
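
The abstract names the two ingredients of the inner iteration: a MINRES-approximated Newton direction restricted to the current face, and an Armijo backtracking line search. Below is a minimal sketch of one such step, assuming the reduced Hessian is positive definite on the face; the SPG mechanism for leaving a face and all safeguards are omitted, and `face_newton_step` is illustrative, not the paper's code.

```python
import numpy as np
from scipy.sparse.linalg import minres

def face_newton_step(f, grad, hess, x, lo, hi, c1=1e-4):
    """One descent step on the current face of the box [lo, hi]."""
    free = (x > lo) & (x < hi)               # variables strictly inside the box
    g = grad(x)
    H = hess(x)[np.ix_(free, free)]          # Hessian reduced to the free variables
    d_free, _ = minres(H, -g[free])          # approximate Newton direction via MINRES
    d = np.zeros_like(x)
    d[free] = d_free
    t = 1.0                                  # Armijo backtracking (assumes d is a descent direction)
    while t > 1e-12 and f(np.clip(x + t * d, lo, hi)) > f(x) + c1 * t * (g @ d):
        t *= 0.5
    return np.clip(x + t * d, lo, hi)
```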

Recursive Bound-Constrained AdaGrad with Applications to Multilevel and Domain Decomposition Minimization

Two noise-tolerant OFFO (Objective-Function-Free Optimization) algorithms are presented that handle bound constraints and inexact gradients, and that use second-order information when available. The first is a multilevel method exploiting a hierarchical description of the problem; the second is a domain-decomposition method covering the standard additive Schwarz decompositions. Both are generalizations of the first-order AdaGrad … Read more
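
As a point of reference, the first-order AdaGrad iteration that both methods generalize, combined with projection onto the bounds, looks roughly as follows (a sketch only; the hierarchical and Schwarz constructions are the paper's contribution and are not reproduced here):

```python
import numpy as np

def projected_adagrad(grad, x, lo, hi, steps=1000, eta=1.0, eps=1e-8):
    """Plain AdaGrad with projection onto the box [lo, hi]."""
    s = np.zeros_like(x)        # running sum of squared (possibly inexact) gradients
    for _ in range(steps):
        g = grad(x)
        s += g * g
        x = np.clip(x - eta * g / (np.sqrt(s) + eps), lo, hi)
    return x
```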

Swapping objectives accelerates Davis-Yin splitting

In this work, we investigate the application of Davis–Yin splitting (DYS) to convex optimization problems and demonstrate that swapping the roles of the two nonsmooth convex functions can result in a faster convergence rate. Such a swap typically yields a different sequence of iterates, but its impact on convergence behavior has been largely understudied or … Read more
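
For reference, the DYS iteration for $\min_x f(x) + g(x) + h(x)$ with $h$ smooth applies the two prox operators in a fixed order, so exchanging them is a one-line change. A minimal sketch on a one-dimensional toy problem (the functions below are illustrative, not from the paper):

```python
import numpy as np

def dys(prox_f, prox_g, grad_h, z, gamma=0.5, steps=300):
    """Davis-Yin splitting; prox_f and prox_g fix the order of the nonsmooth terms."""
    for _ in range(steps):
        xf = prox_f(z, gamma)
        xg = prox_g(2 * xf - z - gamma * grad_h(xf), gamma)
        z = z + xg - xf
    return xf

soft = lambda v, g: np.sign(v) * np.maximum(np.abs(v) - g, 0.0)  # prox of |x|
proj = lambda v, g: np.maximum(v, 0.0)                           # prox of indicator of x >= 0
grad_h = lambda v: v - 3.0                                       # h(x) = 0.5 * (x - 3)^2

z0 = np.array([0.0])
print(dys(soft, proj, grad_h, z0))   # one ordering of the nonsmooth terms
print(dys(proj, soft, grad_h, z0))   # roles swapped: a different iterate sequence
```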

Fast Stochastic Second-Order Adagrad for Nonconvex Bound-Constrained Optimization

ADAGB2, a generalization of the Adagrad algorithm for stochastic optimization, is introduced; it is also applicable to bound-constrained problems and capable of using second-order information when available. It is shown that, given $\delta \in (0,1)$ and $\epsilon \in (0,1]$, the ADAGB2 algorithm needs at most $O(\epsilon^{-2})$ iterations to ensure an $\epsilon$-approximate first-order critical point of … Read more
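
For bound-constrained problems, an $\epsilon$-approximate first-order critical point is commonly certified through the projected-gradient measure; a minimal check of that standard criterion is sketched below (the paper may use a different optimality measure, so treat this as an assumption):

```python
import numpy as np

def is_eps_critical(x, grad, lo, hi, eps):
    """Projected-gradient criticality test for min f(x) s.t. lo <= x <= hi."""
    pg = np.clip(x - grad(x), lo, hi) - x   # projected-gradient step
    return np.linalg.norm(pg) <= eps
```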

Mean and variance estimation complexity in arbitrary distributions via Wasserstein minimization

Parameter estimation is a fundamental challenge in machine learning, crucial for tasks such as neural network weight fitting and Bayesian inference. This paper focuses on the complexity of estimating the translation $\mu \in \mathbb{R}^l$ and shrinkage $\sigma \in \mathbb{R}_{++}$ parameters of a distribution of the form $(1/\sigma^l)\, f_0((x-\mu)/\sigma)$, where $f_0$ is a known density in $\mathbb{R}^l$, given $n$ samples. We … Read more
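
In one dimension the Wasserstein-minimization viewpoint is easy to visualize: for a location-scale family, minimizing the empirical 2-Wasserstein distance reduces to regressing the sorted samples on the quantiles of the base density. A minimal sketch with a standard normal $f_0$ (an assumption made purely for illustration; the paper treats general $f_0$ in $\mathbb{R}^l$):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.sort(rng.normal(loc=2.0, scale=1.5, size=1000))  # samples with mu=2, sigma=1.5
p = (np.arange(1, x.size + 1) - 0.5) / x.size
z = norm.ppf(p)                                         # quantiles of the base density f_0

sigma_hat, mu_hat = np.polyfit(z, x, 1)                 # W2 fit = least squares on quantiles
print(mu_hat, sigma_hat)                                # close to 2.0 and 1.5
```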

A general merit function-based globally convergent framework for nonlinear optimization

In this paper, we revisit the convergence theory of the inexact restoration paradigm for nonlinear optimization. The paper first identifies the basic elements of a globally convergent method based on merit functions. Then the inexact restoration method, which employs a two-phase iteration, is introduced as a special case. A specific implementation is presented that is … Read more
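
In the inexact restoration literature, a standard merit function blends the objective with an infeasibility measure; one common form (the paper's precise choice may differ) is

$$\Phi(x,\theta) = \theta\, f(x) + (1-\theta)\, h(x), \qquad \theta \in (0,1),$$

where $h(x)$ measures infeasibility, e.g. $h(x) = \|C(x)\|$ for constraints $C(x) = 0$. The two phases then split naturally: a restoration phase decreases $h$, and an optimization phase decreases $\Phi$ while controlling the loss of feasibility.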

Mixed-Integer Bilevel Optimization with Nonconvex Quadratic Lower-Level Problems: Complexity and a Solution Method

We study bilevel problems with a convex quadratic mixed-integer upper-level, integer linking variables, and a nonconvex quadratic, purely continuous lower-level problem. We prove $\Sigma_p^2$-hardness of this class of problems, derive an iterative lower- and upper-bounding scheme, and show its finiteness and correctness in the sense that it computes globally optimal points or proves infeasibility of … Read more
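
Abstracting away all problem structure, the control flow of such a bounding scheme can be sketched as follows (the oracles are placeholders; how the paper actually generates bounds from the nonconvex quadratic lower level is its contribution and is not reproduced here):

```python
def bounding_scheme(lower_bound_oracle, upper_bound_oracle, tol=1e-6, max_iter=100):
    """Generic lower/upper bounding loop; stops once the gap is provably small."""
    lb, ub = float("-inf"), float("inf")
    for k in range(max_iter):
        lb = max(lb, lower_bound_oracle(k))   # e.g., from a relaxation of the lower level
        candidate = upper_bound_oracle(k)     # e.g., a bilevel-feasible point, or None
        if candidate is not None:
            ub = min(ub, candidate)
        if ub - lb <= tol:
            break                             # globally optimal within tol
    return lb, ub                             # ub staying +inf suggests infeasibility
```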

Efficient parameter-free restarted accelerated gradient methods for convex and strongly convex optimization

This paper develops a new parameter-free restarted method, namely RPF-SFISTA, and a new parameter-free aggressive regularization method, namely A-REG, for solving strongly convex and convex composite optimization problems, respectively. RPF-SFISTA has the major advantage that it requires knowledge of neither the strong convexity parameter of the entire composite objective nor the Lipschitz constant of … Read more
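
One familiar way to obtain this kind of parameter freedom is to estimate the Lipschitz constant by backtracking and to restart the momentum adaptively; the sketch below illustrates that generic idea only (it is not the RPF-SFISTA scheme, whose restart and regularization rules are the paper's contribution):

```python
import numpy as np

def restarted_agd(f, grad, x0, steps=500, L=1.0):
    """Accelerated gradient with backtracking on L and a function-value restart."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(steps):
        g = grad(y)
        while True:                          # backtrack until the local descent model holds
            x_new = y - g / L
            d = x_new - y
            if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d):
                break
            L *= 2.0
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        if f(x_new) > f(x):                  # restart heuristic: drop momentum on increase
            y, t_new = x_new.copy(), 1.0
        x, t = x_new, t_new
    return x

A = np.diag([1.0, 100.0])                    # ill-conditioned quadratic test problem
f = lambda v: 0.5 * (v @ A @ v)
grad = lambda v: A @ v
print(restarted_agd(f, grad, np.array([1.0, 1.0])))  # approaches the origin
```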