On the Convergence and Properties of a Proximal-Gradient Method on Hadamard Manifolds

In this paper, we address composite optimization problems on Hadamard manifolds, where the objective function is given by the sum of a smooth term (not necessarily convex) and a convex term (not necessarily differentiable). To solve this problem, we develop a proximal gradient method defined directly on the manifold, employing a strategy that enforces monotonicity … Read more
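The abstract breaks off above; for orientation, here is a minimal sketch of a proximal-gradient step with a monotone (backtracking) safeguard in the Euclidean special case. On a Hadamard manifold the gradient step would pass through the exponential map and the proximal term through geodesic distance, neither of which this sketch implements; prox_l1 stands in for a generic convex nonsmooth term and all names are illustrative.

import numpy as np

def prox_l1(v, t):
    # Soft-thresholding: proximal map of t*||.||_1, standing in for a
    # generic convex, possibly nondifferentiable term g.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(f, grad_f, x0, lam=0.1, step=1.0, shrink=0.5, iters=100):
    # Euclidean proximal-gradient iteration with a monotone safeguard:
    # backtrack on the step size until the composite objective
    # F = f + lam*||.||_1 does not increase.
    g = lambda z: lam * np.abs(z).sum()
    x = x0.astype(float)
    for _ in range(iters):
        t = step
        while True:
            x_new = prox_l1(x - t * grad_f(x), t * lam)
            if f(x_new) + g(x_new) <= f(x) + g(x) or t < 1e-12:
                break
            t *= shrink
        x = x_new
    return x

# e.g. proximal_gradient(lambda z: np.cos(z).sum(), lambda z: -np.sin(z), np.zeros(3))

The acceptance test keeps the composite objective nonincreasing, which is one simple way to enforce the kind of monotonicity the abstract mentions.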

Machine Learning Algorithms for Improving Black Box Optimization Solvers

Black-box optimization (BBO) addresses problems where objectives are accessible only through costly queries without gradients or explicit structure. Classical derivative-free methods—line search, direct search, and model-based solvers such as Bayesian optimization—form the backbone of BBO, yet often struggle in high-dimensional, noisy, or mixed-integer settings. Recent advances use machine learning (ML) and reinforcement learning (RL) to … Read more
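As background for the model-based solvers the abstract mentions, here is a generic surrogate-loop sketch: fit a cheap quadratic model to the query history, then query where the model is smallest over a random candidate pool. It is illustrative only and corresponds to no specific solver from the survey; model_based_bbo and its parameters are invented for this sketch.

import numpy as np

def model_based_bbo(f, dim, budget=50, pool=256, seed=0):
    # Generic surrogate loop: evaluate f (expensive, gradient-free),
    # fit a least-squares separable quadratic surrogate to the history,
    # and pick the next query as the best surrogate value over a pool.
    rng = np.random.default_rng(seed)
    feats = lambda X: np.hstack([np.ones((len(X), 1)), X, X**2])
    X = rng.uniform(-1, 1, (2 * dim + 2, dim))   # initial design
    y = np.array([f(x) for x in X])
    for _ in range(budget - len(X)):
        w, *_ = np.linalg.lstsq(feats(X), y, rcond=None)
        cand = rng.uniform(-1, 1, (pool, dim))
        x_next = cand[np.argmin(feats(cand) @ w)]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()

# e.g. model_based_bbo(lambda x: np.sum((x - 0.3)**2), dim=3)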

A Minimalist Bayesian Framework for Stochastic Optimization

The Bayesian paradigm offers principled tools for sequential decision-making under uncertainty, but its reliance on a probabilistic model for all parameters can hinder the incorporation of complex structural constraints. We introduce a minimalist Bayesian framework that places a prior only on the component of interest, such as the location of the optimum. Nuisance parameters are … Read more
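The abstract is cut off before explaining how nuisance parameters are handled. A classical instance of "prior only on the quantity of interest" is probabilistic bisection, sketched below for a one-dimensional unimodal objective: the posterior lives only on the optimum's location, and observations are noisy signs of the derivative. This illustrates the idea, not the paper's framework; q is the assumed probability of a correct sign.

import numpy as np

def bayes_optimum_search(noisy_dsign, grid, q=0.7, iters=60):
    # Prior only on the location of the optimum (a grid over [0,1]);
    # nothing else about the function is modeled.  noisy_dsign(x)
    # returns the sign of f'(x), correct with probability q > 1/2.
    p = np.full(len(grid), 1.0 / len(grid))           # uniform prior
    for _ in range(iters):
        x = grid[np.searchsorted(np.cumsum(p), 0.5)]  # query the posterior median
        s = noisy_dsign(x)
        right = grid > x                               # optimum right of x iff f'(x) < 0
        lik = np.where(right == (s < 0), q, 1.0 - q)   # likelihood of the observation
        p *= lik
        p /= p.sum()
    return grid[np.argmax(p)]

# e.g. with f(x) = (x - 0.42)**2 and 30% sign flips:
# rng = np.random.default_rng(1)
# noisy = lambda x: np.sign(2 * (x - 0.42)) * (1 if rng.random() < 0.7 else -1)
# bayes_optimum_search(noisy, np.linspace(0, 1, 513))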

Properties of Enclosures in Multiobjective Optimization

A widely used approximation concept in multiobjective optimization is that of enclosures. These are unions of boxes, defined by lower and upper bound sets, that are used to cover the optimal sets of multiobjective optimization problems in the image space. The width of an enclosure is taken as a quality measure. In this paper, we … Read more
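For concreteness, the sketch below builds an enclosure from finite lower and upper bound sets and evaluates one common width measure, taking a box's width to be its shortest edge and the enclosure's width to be the largest box width; the paper's exact definition may differ, so treat that measure as an assumption.

import numpy as np

def enclosure_boxes(L, U):
    # Enclosure from a lower bound set L and an upper bound set U:
    # the union of boxes [l, u] over all pairs with l <= u componentwise.
    return [(l, u) for l in L for u in U if np.all(l <= u)]

def enclosure_width(boxes):
    # Assumed quality measure: a box's width is its shortest edge,
    # and the enclosure's width is the largest box width.
    return max(np.min(u - l) for l, u in boxes)

# Example in a biobjective image space:
L = [np.array([0.0, 0.0]), np.array([1.0, -0.5])]
U = [np.array([0.5, 1.0]), np.array([2.0, 0.5])]
print(enclosure_width(enclosure_boxes(L, U)))  # -> 1.0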

A second-order cone representable class of nonconvex quadratic programs

We consider the problem of minimizing a sparse nonconvex quadratic function over the unit hypercube. By developing an extension of the Reformulation-Linearization Technique (RLT) to continuous quadratic sets, we propose a novel second-order cone (SOC) representable relaxation for this problem. By exploiting the sparsity of the quadratic function, we establish a sufficient condition under … Read more
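The paper's SOC-representable relaxation is not reproduced here; as a reference point, the sketch below (using cvxpy, assumed available) sets up the classical RLT linearization over the unit hypercube that such relaxations strengthen, with X[i, j] standing in for the product x_i * x_j. Exploiting sparsity as in the paper would amount to introducing X[i, j] only for the nonzero entries of Q.

import cvxpy as cp
import numpy as np

def rlt_bound(Q, c):
    # RLT relaxation of min x'Qx + c'x over [0,1]^n: the constraints on
    # X come from multiplying the bounds x_i >= 0 and 1 - x_i >= 0 pairwise.
    n = len(c)
    x = cp.Variable(n)
    X = cp.Variable((n, n), symmetric=True)
    cons = [x >= 0, x <= 1]
    for i in range(n):
        for j in range(i, n):
            cons += [X[i, j] >= 0,
                     X[i, j] >= x[i] + x[j] - 1,
                     X[i, j] <= x[i],
                     X[i, j] <= x[j]]
    obj = cp.Minimize(cp.sum(cp.multiply(Q, X)) + c @ x)
    return cp.Problem(obj, cons).solve()

# e.g. rlt_bound(np.array([[0., -2.], [-2., 0.]]), np.array([1., 1.]))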

ASPEN: An Additional Sampling Penalty Method for Finite-Sum Optimization Problems with Nonlinear Equality Constraints

We propose a novel algorithm for solving nonconvex, nonlinear equality-constrained finite-sum optimization problems. The proposed algorithm incorporates an additional sampling strategy for updating the sample size into the well-known framework of quadratic penalty methods. Thus, depending on the problem at hand, the resulting method may exhibit a sample size strategy ranging from a mini-batch on one … Read more
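A generic sketch of the two ingredients named in the abstract follows: a quadratic penalty on the equality constraints and a mini-batch whose size grows across outer iterations. The adaptive sample-size test is the paper's contribution and is replaced here by a plain doubling rule; all function names and parameters are placeholders.

import numpy as np

def penalty_sgd(grad_fi, c, jac_c, x0, N, rho=1.0, batch=8,
                outer=20, inner=200, lr=1e-2, seed=0):
    # Quadratic penalty for min (1/N) sum_i f_i(x) s.t. c(x) = 0:
    # minimize (1/|S|) sum_{i in S} f_i(x) + (rho/2)*||c(x)||^2 with SGD,
    # then increase the penalty and (here: double) the sample size.
    rng = np.random.default_rng(seed)
    x = x0.astype(float)
    for _ in range(outer):
        for _ in range(inner):
            S = rng.choice(N, size=min(batch, N), replace=False)
            g = np.mean([grad_fi(i, x) for i in S], axis=0)
            g += rho * jac_c(x).T @ c(x)   # gradient of the penalty term
            x -= lr * g
        rho *= 2.0
        batch = min(2 * batch, N)  # stand-in for the paper's adaptive test
    return x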

Solving MINLPs to global optimality with FICO Xpress Global

We present the architecture and central parts of the FICO Xpress Global optimization solver. In particular, we focus on how we built a global solver for the general class of mixed-integer nonlinear optimization problems by combining and extending two existing components of the FICO Xpress Solver, namely the mixed-integer linear optimization solver and the successive … Read more

The complete edge relaxation for binary polynomial optimization

We consider the multilinear polytope defined as the convex hull of the feasible region of a linearized binary polynomial optimization problem. We define a relaxation in an extended space for this polytope, which we refer to as the complete edge relaxation. The complete edge relaxation is stronger than several well-known relaxations of the multilinear polytope, … Read more
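The complete edge relaxation itself is defined in the paper; for reference, the sketch below sets up the standard relaxation of the multilinear polytope that it strengthens, with one linearization variable z[e] per edge of the hypergraph (cvxpy assumed available).

import cvxpy as cp

def standard_relaxation(edges, n, weights):
    # LP over the standard relaxation of the multilinear polytope:
    # z[e] linearizes prod_{v in e} x[v] for binary x via
    #   z[e] <= x[v] for each v in e,
    #   z[e] >= sum_{v in e} x[v] - |e| + 1,  z[e] >= 0,  0 <= x <= 1.
    x = cp.Variable(n)
    z = {e: cp.Variable() for e in edges}
    cons = [x >= 0, x <= 1]
    for e in edges:
        cons += [z[e] >= 0, z[e] >= cp.sum(x[list(e)]) - len(e) + 1]
        cons += [z[e] <= x[v] for v in e]
    obj = cp.Minimize(sum(weights[e] * z[e] for e in edges))
    return cp.Problem(obj, cons).solve()

# e.g. standard_relaxation(edges=[(0, 1, 2), (1, 2)], n=3,
#                          weights={(0, 1, 2): -1.0, (1, 2): 2.0})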

A new insight on the prediction-correction framework with applications to first-order methods

The prediction-correction framework developed in [B. He, Splitting Contraction Algorithm for Convex Optimization, Science Press, 2025] is a simple yet powerful tool for analyzing the convergence of diverse first-order optimization methods, including the Augmented Lagrangian Method (ALM) and the Alternating Direction Method of Multipliers (ADMM). In this paper, we propose a generalized prediction-correction framework featuring … Read more
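For readers unfamiliar with the framework, its two steps and the usual convergence condition can be stated as follows; the notation is as commonly used in He's work on this framework, and the book may differ in details. For the mixed variational inequality (find $w^*$ with $\theta(u) - \theta(u^*) + (w - w^*)^\top F(w^*) \ge 0$ for all $w$), each iteration runs:

Prediction: produce $\tilde w^k$ such that
$\theta(u) - \theta(\tilde u^k) + (w - \tilde w^k)^\top F(\tilde w^k) \ge (w - \tilde w^k)^\top Q\,(w^k - \tilde w^k)$ for all $w$;

Correction: $w^{k+1} = w^k - M\,(w^k - \tilde w^k)$.

If $H := Q M^{-1}$ is symmetric positive definite and $G := Q^\top + Q - M^\top H M \succeq 0$, then
$\|w^{k+1} - w^*\|_H^2 \le \|w^k - w^*\|_H^2 - \|w^k - \tilde w^k\|_G^2$,
so the iterates contract toward the solution set in the $H$-norm. Specific choices of $Q$ and $M$ recover ALM, ADMM, and related first-order methods.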

Relaxations of KKT Conditions Do Not Strengthen Finite RLT and SDP-RLT Bounds for Nonconvex Quadratic Programs

We consider linear and semidefinite programming relaxations of nonconvex quadratic programs given by the reformulation-linearization technique (RLT relaxation), and the Shor relaxation combined with the RLT relaxation (SDP-RLT relaxation). By incorporating the first-order optimality conditions, a quadratic program can be formulated as an optimization problem with complementarity constraints. We investigate the effect of incorporating optimality … Read more
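The SDP-RLT relaxation the abstract refers to combines the RLT inequalities with the Shor positive-semidefinite condition; a cvxpy sketch follows (cvxpy assumed available; this sets up the bound the paper studies, not its KKT-augmented variants, and the example domain is the unit hypercube).

import cvxpy as cp
import numpy as np

def sdp_rlt_bound(Q, c):
    # SDP-RLT bound for min x'Qx + c'x over [0,1]^n: RLT (McCormick)
    # inequalities on X plus the Shor condition, written via a PSD
    # moment matrix Y = [[1, x'], [x, X]], which enforces X >= xx'.
    n = len(c)
    x = cp.Variable(n)
    X = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((n + 1, n + 1), symmetric=True)
    cons = [x >= 0, x <= 1, Y >> 0,
            Y[0, 0] == 1, Y[0, 1:] == x, Y[1:, 1:] == X]
    for i in range(n):
        for j in range(i, n):
            cons += [X[i, j] >= 0, X[i, j] >= x[i] + x[j] - 1,
                     X[i, j] <= x[i], X[i, j] <= x[j]]
    return cp.Problem(cp.Minimize(cp.sum(cp.multiply(Q, X)) + c @ x),
                      cons).solve()

# e.g. sdp_rlt_bound(np.array([[0., 1.], [1., 0.]]), np.zeros(2));
# requires an SDP-capable solver such as SCS or Clarabel (shipped with cvxpy).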