A second-order cone representable class of nonconvex quadratic programs

We consider the problem of minimizing a sparse nonconvex quadratic function over the unit hypercube. By developing an extension of the Reformulation Linearization Technique (RLT) to continuous quadratic sets, we propose a novel second-order cone (SOC) representable relaxation for this problem. By exploiting the sparsity of the quadratic function, we establish a sufficient condition under …
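For context, the classical level-1 RLT relaxation that this work strengthens replaces each product x_i x_j with a variable X_ij and adds the products of the bound factors x_i >= 0 and 1 - x_i >= 0. A minimal sketch of that baseline LP relaxation (not the paper's SOC-representable strengthening), on an illustrative indefinite instance and assuming SciPy is available:

```python
# Sketch: level-1 RLT (LP) relaxation of  min x^T Q x  s.t. x in [0,1]^n.
# Each product x_i x_j is replaced by X_ij together with the RLT/McCormick
# inequalities obtained by multiplying the bound constraints pairwise.
import numpy as np
from scipy.optimize import linprog

def rlt_lower_bound(Q):
    n = Q.shape[0]
    # Decision vector: [x_1..x_n, X_ij for i <= j] (upper triangle).
    pairs = [(i, j) for i in range(n) for j in range(i, n)]
    idx = {p: n + k for k, p in enumerate(pairs)}
    m = n + len(pairs)
    c = np.zeros(m)
    for (i, j), k in idx.items():
        c[k] = Q[i, j] if i == j else Q[i, j] + Q[j, i]  # each X_ij counted once
    A, b = [], []
    for (i, j), k in idx.items():
        r = np.zeros(m); r[k] = -1.0                      # X_ij >= 0
        A.append(r); b.append(0.0)
        r = np.zeros(m); r[i] = 1.0; r[j] += 1.0; r[k] = -1.0
        A.append(r); b.append(1.0)                        # X_ij >= x_i + x_j - 1
        r = np.zeros(m); r[k] = 1.0; r[i] = -1.0          # X_ij <= x_i
        A.append(r); b.append(0.0)
        r = np.zeros(m); r[k] = 1.0; r[j] -= 1.0          # X_ij <= x_j
        A.append(r); b.append(0.0)
    bounds = [(0, 1)] * n + [(None, None)] * len(pairs)
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=bounds)
    return res.fun

Q = np.array([[-1.0, 2.0], [2.0, -1.0]])   # indefinite, so the QP is nonconvex
lb = rlt_lower_bound(Q)                     # valid lower bound on the QP value
```

On this instance the RLT bound happens to be tight (both the relaxation and the QP attain -1 at a vertex); in general the LP only gives a lower bound, which is what stronger SOC-representable relaxations aim to improve.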

ASPEN: An Additional Sampling Penalty Method for Finite-Sum Optimization Problems with Nonlinear Equality Constraints

We propose a novel algorithm for solving nonconvex, nonlinear equality-constrained finite-sum optimization problems. The proposed algorithm incorporates an additional sampling strategy for updating the sample size into the well-known framework of quadratic penalty methods. Thus, depending on the problem at hand, the resulting method may exhibit a sample size strategy ranging from a mini-batch on one …
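The basic ingredients can be illustrated with a toy loop (this is a generic quadratic-penalty method with a geometrically growing mini-batch, not ASPEN's sampling rule): the equality constraint c(x) = 0 is absorbed into the objective as (rho/2) c(x)^2, and both rho and the sample size increase across outer iterations. All problem data below are made up for illustration.

```python
# Sketch: quadratic penalty with a growing mini-batch for
#   min (1/N) sum_i f_i(x)  s.t.  c(x) = 0,
# where f_i(x) = 0.5 * ||x - a_i||^2 and c(x) = sum(x) - 1 (toy instance).
import numpy as np

rng = np.random.default_rng(0)
N, d = 200, 5
A = rng.normal(size=(N, d))                      # data defining f_i

def c(x):                                        # single equality constraint
    return np.sum(x) - 1.0

def penalty_grad(x, batch, rho):
    g = np.mean(x[None, :] - A[batch], axis=0)   # mini-batch gradient of f
    return g + rho * c(x) * np.ones(d)           # + gradient of (rho/2) c(x)^2

x = np.zeros(d)
rho, batch_size = 1.0, 10
for outer in range(8):
    for _ in range(500):                         # inner stochastic descent
        batch = rng.choice(N, size=batch_size, replace=False)
        x -= 0.05 / (1.0 + rho) * penalty_grad(x, batch, rho)
    rho *= 4.0                                   # tighten the penalty
    batch_size = min(N, 2 * batch_size)          # grow the sample size
```

The point of an adaptive sampling rule, as in the abstract, is to choose the batch growth from the problem itself rather than fixing a schedule like the doubling used here.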

Solving MINLPs to global optimality with FICO Xpress Global

We present the architecture and central parts of the FICO Xpress Global optimization solver. In particular, we focus on how we built a global solver for the general class of mixed-integer nonlinear optimization problems by combining and extending two existing components of the FICO Xpress Solver, namely the mixed-integer linear optimization solver and the successive …

The complete edge relaxation for binary polynomial optimization

We consider the multilinear polytope defined as the convex hull of the feasible region of a linearized binary polynomial optimization problem. We define a relaxation in an extended space for this polytope, which we refer to as the complete edge relaxation. The complete edge relaxation is stronger than several well-known relaxations of the multilinear polytope, …
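As background, the linearization referred to here introduces one variable z_e per monomial prod_{i in e} x_i; the standard (weakest) relaxation uses the constraints z_e <= x_i for i in e, z_e >= sum_{i in e} x_i - (|e| - 1), and z_e >= 0. A small enumeration check, on a made-up hypergraph, that these constraints pin z_e to the monomial value at binary points:

```python
# Sketch: at binary x, the standard-linearization constraints
#   z_e <= x_i (i in e),  z_e >= sum_{i in e} x_i - (|e| - 1),  z_e >= 0
# force z_e = prod_{i in e} x_i. Verified by brute-force enumeration.
import itertools

edges = [(0, 1), (1, 2), (0, 1, 2)]      # tiny illustrative hypergraph

def z_interval(x, e):
    """Feasible range for z_e under the standard linearization at point x."""
    lo = max(0.0, sum(x[i] for i in e) - (len(e) - 1))
    hi = min(x[i] for i in e)
    return lo, hi

exact = True
for x in itertools.product([0, 1], repeat=3):
    for e in edges:
        lo, hi = z_interval(x, e)
        prod = 1
        for i in e:
            prod *= x[i]
        exact &= (lo == hi == prod)      # interval collapses to the monomial
```

At fractional points the interval does not collapse, which is exactly the gap that stronger relaxations such as the complete edge relaxation shrink.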

A new insight on the prediction-correction framework with applications to first-order methods

The prediction-correction framework developed in [B. He, Splitting Contraction Algorithm for Convex Optimization, Science Press, 2025] is a simple yet powerful tool for analyzing the convergence of diverse first-order optimization methods, including the Augmented Lagrangian Method (ALM) and the Alternating Direction Method of Multipliers (ADMM). In this paper, we propose a generalized prediction-correction framework featuring …

Relaxations of KKT Conditions do not Strengthen Finite RLT and SDP-RLT Bounds for Nonconvex Quadratic Programs

We consider linear and semidefinite programming relaxations of nonconvex quadratic programs given by the reformulation-linearization technique (RLT relaxation), and the Shor relaxation combined with the RLT relaxation (SDP-RLT relaxation). By incorporating the first-order optimality conditions, a quadratic program can be formulated as an optimization problem with complementarity constraints. We investigate the effect of incorporating optimality …
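The first-order conditions mentioned above take a simple complementarity form for a box-constrained QP. A minimal numerical check on a made-up instance of min 0.5 x'Qx + c'x over [0,1]^n: with gradient g = Qx + c and multipliers lam, mu for the bounds x >= 0 and x <= 1, a KKT point satisfies g - lam + mu = 0 together with lam_i x_i = 0 and mu_i (1 - x_i) = 0.

```python
# Sketch: KKT conditions with complementarity for
#   min 0.5 x'Qx + c'x  s.t.  0 <= x <= 1.
import numpy as np

Q = np.array([[1.0, 0.0], [0.0, 1.0]])   # illustrative convex instance
c = np.array([-2.0, 0.5])
x = np.array([1.0, 0.0])                  # its box-constrained minimizer

g = Q @ x + c                             # gradient at x
lam = np.maximum(g, 0.0)                  # multipliers for x >= 0
mu = np.maximum(-g, 0.0)                  # multipliers for x <= 1

stationary = np.allclose(g - lam + mu, 0.0)
complementary = np.allclose(lam * x, 0.0) and np.allclose(mu * (1.0 - x), 0.0)
```

Appending these conditions (or relaxations of them) to the RLT or SDP-RLT relaxation is the construction whose effect on the bounds the abstract investigates.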

A relax-fix-and-exclude algorithm for an MINLP problem with multilinear interpolations

This paper introduces a novel algorithm for Mixed-Integer Nonlinear Programming (MINLP) problems with multilinear interpolations of look-up tables. These problems arise when objectives or constraints contain black-box functions known only at a finite set of evaluations on a predefined grid. We derive a piecewise-linear relaxation for the multilinear interpolants, which require an MINLP formulation. Supported …
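To make the object concrete: in two dimensions the multilinear interpolant of a look-up table is ordinary bilinear interpolation. A small self-contained sketch with a made-up grid and table (the table below samples f(x, y) = 2x + y, for which bilinear interpolation is exact):

```python
# Sketch: bilinear (2-D multilinear) interpolation of a look-up table,
# the kind of grid-sampled black-box function the abstract refers to.
import numpy as np

xs = np.array([0.0, 1.0, 2.0])           # grid coordinates
ys = np.array([0.0, 1.0])
table = np.array([[0.0, 1.0],            # f(x, y) = 2x + y sampled on the grid
                  [2.0, 3.0],
                  [4.0, 5.0]])

def bilerp(x, y):
    # Locate the grid cell containing (x, y).
    i = np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2)
    j = np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])   # local coordinates in [0, 1]
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i, j] + tx * (1 - ty) * table[i + 1, j]
            + (1 - tx) * ty * table[i, j + 1] + tx * ty * table[i + 1, j + 1])
```

The tx * ty cross term is the bilinear (nonconvex) part; encoding it exactly requires products of variables, which is why the interpolant leads to an MINLP and motivates the piecewise-linear relaxation.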

Two-way Cutting-plane Algorithm for Best Subset Selection Considering Multicollinearity

When linear dependence exists between some explanatory variables in a regression model, the estimates of regression coefficients become unstable, thereby making the interpretation of the estimation results unreliable. To eliminate such multicollinearity, we propose a high-performance method for selecting the best subset of explanatory variables for linear and logistic regression models. Specifically, we first derive …
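The underlying task can be illustrated by brute force (this is not the cutting-plane algorithm of the paper, just the problem it solves at scale): pick the subset of at most k regressors minimizing the residual sum of squares, while excluding subsets containing a nearly collinear pair. All data below are synthetic, and the pairwise-correlation cutoff is a crude stand-in for a proper multicollinearity criterion.

```python
# Sketch: exhaustive best-subset selection with a collinearity guard.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 60, 5, 2
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)   # columns 0, 1 nearly collinear
y = X[:, 0] + 2.0 * X[:, 2] + 0.1 * rng.normal(size=n)

def rss(cols):
    beta, res, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    return float(res[0]) if res.size else float(np.sum((y - X[:, cols] @ beta) ** 2))

corr = np.corrcoef(X, rowvar=False)
best, best_rss = None, np.inf
for size in range(1, k + 1):
    for cols in itertools.combinations(range(p), size):
        if any(abs(corr[i, j]) > 0.9 for i, j in itertools.combinations(cols, 2)):
            continue                             # exclude collinear subsets
        r = rss(list(cols))
        if r < best_rss:
            best, best_rss = cols, r
```

Enumeration is exponential in p; the point of a cutting-plane formulation is to reach the same exact subset without enumerating.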

A Framework for Explainable Knowledge Generation with Expensive Sample Evaluations

Real-world problems often require complex modeling and computation efforts to be addressed effectively. Relying solely on data-driven approaches without integrating physics-based models can result in limited predictive capabilities. Even advanced techniques such as deep learning may be impractical for decision-makers due to a lack of data and the challenges of justifying and explaining results. In …

A Graphical Global Optimization Framework for Parameter Estimation of Statistical Models with Nonconvex Regularization Functions

Optimization problems with norm-bounding constraints appear in various applications, from portfolio optimization to machine learning, feature selection, and beyond. A widely used variant of these problems relaxes the norm-bounding constraint through Lagrangian relaxation and moves it to the objective function as a form of penalty or regularization term. A challenging class of these models uses …
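The move from constraint to penalty described here is easiest to see in the convex baseline case (the paper's focus is on nonconvex regularizers, where this step is harder): relaxing min f(x) s.t. ||x||_1 <= t yields min f(x) + lam ||x||_1, and for f(x) = 0.5 ||x - v||^2 the penalized problem is solved in closed form by soft-thresholding.

```python
# Sketch: Lagrangian relaxation of an l1 norm bound into a penalty term.
# For f(x) = 0.5 * ||x - v||^2, the penalized problem
#   min_x 0.5 * ||x - v||^2 + lam * ||x||_1
# has the closed-form solution below (the l1 proximal operator).
import numpy as np

def soft_threshold(v, lam):
    """Componentwise shrink-toward-zero by lam, zeroing small entries."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

v = np.array([3.0, -0.5, 1.2, 0.0])
x = soft_threshold(v, 1.0)          # sparse solution of the penalized problem
```

Nonconvex regularizers (e.g. concave penalties) keep the same penalty structure but lose this clean proximal formula, which is what makes global parameter estimation with them challenging.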