A hybrid branch-and-bound and interior-point algorithm for stochastic mixed-integer nonlinear second-order cone programming

One of the chief attractions of stochastic mixed-integer second-order cone programming is its diverse range of applications, especially in engineering (Alzalg and Alioui, {\em IEEE Access}, 10:3522-3547, 2022). The linear and nonlinear versions of this class of optimization problems remain unsolved to date. In this paper, we develop a hybrid optimization algorithm coupling branch-and-bound and primal-dual interior-point … Read more
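The truncated abstract names only the two building blocks of the hybrid method. As a rough orientation, a minimal best-first branch-and-bound skeleton in Python is sketched below; the solve_relaxation callback, which here stands in for a primal-dual interior-point solve of the continuous second-order cone relaxation, is a hypothetical placeholder and not the authors' algorithm.

import heapq
import math

def branch_and_bound(solve_relaxation, integer_vars, tol=1e-6):
    # Generic best-first branch-and-bound sketch. solve_relaxation(bounds)
    # is assumed to solve the continuous (e.g., second-order cone) relaxation
    # under the extra variable bounds and to return
    # (objective_value, solution_vector), or (math.inf, None) if infeasible.
    best_val, best_x = math.inf, None
    heap = [(-math.inf, 0, {})]          # (parent bound, tie-breaker, bounds)
    counter = 1
    while heap:
        parent_bound, _, bounds = heapq.heappop(heap)
        if parent_bound >= best_val:     # prune: node cannot improve incumbent
            continue
        val, x = solve_relaxation(bounds)
        if x is None or val >= best_val:
            continue
        fractional = [i for i in integer_vars
                      if abs(x[i] - round(x[i])) > tol]
        if not fractional:               # integer feasible: new incumbent
            best_val, best_x = val, x
            continue
        i = fractional[0]                # branch on one fractional variable
        for sense, bound in (('ub', math.floor(x[i])), ('lb', math.ceil(x[i]))):
            child = {**bounds, (sense, i): bound}
            heapq.heappush(heap, (val, counter, child))
            counter += 1
    return best_val, best_x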

A Bilevel Optimization Approach for a Class of Combinatorial Problems with Disruptions and Probing

We consider linear combinatorial optimization problems under uncertain disruptions that increase the cost coefficients of the objective function. A decision-maker, or planner, can invest resources to probe the components (i.e., the coefficients) in order to learn their disruption status. In the proposed probing optimization problem, the planner, knowing only the disruption probabilities, selects which components … Read more
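The truncated abstract stops before the model is stated, so the display below is only an illustrative two-stage formalization of the probing idea under assumed notation (probing set \(S\) with budget \(B\), disruption indicators \(\xi_i\) with probabilities \(p_i\), cost increases \(d_i\)); the authors' bilevel formulation may differ.
\[
\min_{\substack{S \subseteq \{1,\dots,n\} \\ \text{probing cost of } S \,\le\, B}} \;
\mathbb{E}_{\xi_S}\!\left[\; \min_{x \in X} \; \sum_{i \in S} (c_i + \xi_i d_i)\, x_i \;+\; \sum_{i \notin S} (c_i + p_i d_i)\, x_i \right],
\]
where \(X\) is the feasible set of the underlying combinatorial problem, \(\xi_i \in \{0,1\}\) indicates whether component \(i\) is disrupted, \(p_i = \mathbb{P}(\xi_i = 1)\), and \(d_i \ge 0\) is the cost increase caused by a disruption; probed components are priced at their observed cost, unprobed ones at their expected cost.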

Generalized asymmetric forward-backward-adjoint algorithms for convex-concave saddle-point problem

The convex-concave minimax problem, also known as the saddle-point problem, has been extensively studied from various aspects, including algorithm design, convergence conditions, and complexity. In this paper, we propose a generalized asymmetric forward-backward-adjoint algorithm (G-AFBA) to solve such a problem by utilizing both proximal techniques and the extrapolation of primal-dual updates. Besides applying … Read more
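For orientation, the underlying template is the saddle-point problem below, together with the classical proximal primal-dual update with extrapolation that methods of this kind build on (shown here as a generic baseline, not the G-AFBA iteration itself):
\[
\min_{x} \max_{y} \; f(x) + \langle K x, y \rangle - g(y),
\]
with \(f, g\) convex and \(K\) a linear operator. A basic primal-dual proximal iteration with extrapolation parameter \(\theta \in [0,1]\) and stepsizes \(\tau, \sigma > 0\) reads
\[
x^{k+1} = \operatorname{prox}_{\tau f}\!\big(x^{k} - \tau K^{\top} y^{k}\big), \qquad
\bar{x}^{k+1} = x^{k+1} + \theta\,(x^{k+1} - x^{k}), \qquad
y^{k+1} = \operatorname{prox}_{\sigma g}\!\big(y^{k} + \sigma K \bar{x}^{k+1}\big).
\]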

Limited memory gradient methods for unconstrained optimization

The limited memory steepest descent method (Fletcher, 2012) for unconstrained optimization problems stores a few past gradients to compute multiple stepsizes at once. We review this method and propose new variants. For strictly convex quadratic objective functions, we study the numerical behavior of different techniques to compute new stepsizes. In particular, we introduce a method … Read more
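Fletcher's limited memory steepest descent builds its sweeps of stepsizes from Ritz values of the Hessian computed out of a window of stored gradients. The Python sketch below illustrates this for a strictly convex quadratic under the simplifying shortcut of forming Q'AQ with the Hessian A explicitly, which the actual method avoids; the function name and defaults are hypothetical.

import numpy as np

def lmsd_quadratic(A, b, x0, m=5, max_iter=500, tol=1e-8):
    # Limited memory steepest descent sketch for f(x) = 0.5*x'Ax - b'x with
    # A symmetric positive definite. Once the current stack of stepsizes is
    # exhausted, the last m stored gradients are orthonormalized, the Ritz
    # values of A on their span are computed, and their reciprocals give the
    # next batch of stepsizes (largest Ritz value, i.e. shortest step, first).
    x = np.asarray(x0, dtype=float)
    g = A @ x - b
    steps = [1.0 / np.linalg.norm(A, 2)]      # conservative first stepsize
    grads = []
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        if not steps:                          # refill the stepsize stack
            G = np.column_stack(grads[-m:])
            Q, _ = np.linalg.qr(G)
            ritz = np.linalg.eigvalsh(Q.T @ A @ Q)
            ritz = ritz[ritz > 1e-12]          # guard against rank deficiency
            if ritz.size == 0:
                ritz = np.array([np.linalg.norm(A, 2)])
            steps = list(1.0 / ritz[::-1])
            grads = []
        alpha = steps.pop(0)
        grads.append(g)
        x = x - alpha * g
        g = A @ x - b
    return x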

Adaptive Consensus: A network pruning approach for decentralized optimization

We consider network-based decentralized optimization problems, where each node in the network possesses a local function and the objective is to collectively attain a consensus solution that minimizes the sum of all the local functions. A major challenge in decentralized optimization is the reliance on communication, which remains a considerable bottleneck in many applications. To … Read more
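As a point of reference for the communication bottleneck, the following is a minimal decentralized gradient descent baseline in Python, in which every iteration requires one full round of neighbor-to-neighbor exchange through a doubly stochastic mixing matrix W; it is this per-iteration communication that pruning-based adaptive consensus schemes aim to reduce. The function and its defaults are illustrative only, not the paper's method.

import numpy as np

def decentralized_gradient_descent(W, local_grads, x0, step=0.05, iters=500):
    # W: (n_nodes, n_nodes) doubly stochastic mixing matrix, nonzero only on
    #    edges of the communication network (W[i, j] > 0 means node i
    #    receives node j's iterate).
    # local_grads: list of callables; local_grads[i](x) is the gradient of
    #    node i's local function at x.
    # x0: (n_nodes, dim) array of initial local iterates.
    X = x0.astype(float)
    for _ in range(iters):
        mixed = W @ X                     # one communication round (consensus)
        grads = np.stack([local_grads[i](X[i]) for i in range(len(local_grads))])
        X = mixed - step * grads          # local gradient step at every node
    return X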

Optimal Multi-Agent Pickup and Delivery Using Branch-and-Cut-and-Price

Given a set of agents and a set of pickup-delivery requests located on a two-dimensional map, the Multi-Agent Pickup and Delivery problem assigns the requests to the agents such that every agent moves from its start location to the locations of its assigned requests and finally to its end location without colliding with any other … Read more
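The truncated abstract does not reach the formulation, so the display below only sketches the set-partitioning master problem that branch-and-cut-and-price methods for routing-type problems are typically built on, stated here with assumed notation rather than the authors' exact model:
\[
\min \sum_{r \in \Omega} c_r \lambda_r
\quad \text{s.t.} \quad
\sum_{r \in \Omega} a_{qr}\,\lambda_r = 1 \;\; \forall\, \text{requests } q, \qquad
\sum_{r \in \Omega_a} \lambda_r = 1 \;\; \forall\, \text{agents } a, \qquad
\lambda_r \in \{0,1\},
\]
where \(\Omega\) is the (exponentially large) set of feasible single-agent routes, \(\Omega_a \subseteq \Omega\) the routes of agent \(a\), \(c_r\) the route cost, and \(a_{qr} = 1\) if route \(r\) serves request \(q\); columns are generated by pricing, and cuts forbidding colliding route combinations are added during branch-and-bound.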

Robust Optimization Under Controllable Uncertainty

Practical applications of optimization with uncertain data often feature the possibility of reducing the uncertainty at a given query cost, e.g., by conducting measurements or surveys, or by paying a third party in advance to limit the deviations. To model this type of application, we introduce the concept of optimization problems under controllable uncertainty (OCU). For … Read more
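The abstract is cut off before OCU is defined formally; one way to picture the trade-off it describes, under assumed notation, is the following min-max model in which the planner pays query costs \(\kappa(q)\) to shrink the uncertainty set from \(U\) to \(U(q) \subseteq U\) before hedging against the remaining deviations:
\[
\min_{q \in Q,\; x \in X} \;\; \kappa(q) \;+\; \max_{u \in U(q)} \; c(u)^{\top} x,
\]
where \(q\) encodes which measurements, surveys, or third-party guarantees are purchased, \(x\) is the actual decision, and \(c(u)\) is the cost vector under uncertainty realization \(u\); a smaller \(U(q)\) buys a less conservative worst case at a higher up-front cost.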

The Algebraic Structure of the Nonconvex Second-Order Cone

This paper explores the nonconvex second-order cone as a nonconvex conic extension of the known convex second-order cone in optimization, as well as a higher-dimensional conic extension of the known causality cone in relativity. The nonconvex second-order cone can be used to reformulate nonconvex quadratic programs and nonconvex quadratically constrained quadratic programs in conic format. … Read more
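For reference, the two known cones mentioned in the abstract are standard objects:
\[
\mathcal{Q}^{n} \;=\; \big\{\, x = (x_1, \bar{x}) \in \mathbb{R}\times\mathbb{R}^{n-1} \;:\; x_1 \ge \|\bar{x}\|_2 \,\big\}
\]
is the convex second-order cone, and for \(n = 4\), with \(x_1\) interpreted as time and units chosen so that the speed of light equals one, \(\mathcal{Q}^{4}\) is exactly the future causality (light) cone of special relativity. The paper's nonconvex second-order cone is presented as a conic extension of both.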

Inexact Newton methods with matrix approximation by sampling for nonlinear least-squares and systems

We develop and analyze stochastic inexact Gauss-Newton methods for nonlinear least-squares problems and inexact Newton methods for nonlinear systems of equations. Random models are formed using suitable sampling strategies for the matrices involved in the deterministic models. The analysis of the expected number of iterations needed in the worst case to achieve a desired level … Read more
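A generic subsampled Gauss-Newton step of the kind the abstract alludes to can be sketched as follows; the sampling rule, damping, and stopping test below are illustrative placeholders and not the authors' scheme.

import numpy as np

def subsampled_gauss_newton(residual, jacobian, x0, sample_size,
                            max_iter=100, damping=1e-8, tol=1e-8, rng=None):
    # Sketch of an inexact Gauss-Newton iteration for min_x 0.5*||r(x)||^2
    # in which the model matrices are subsampled: at each iteration a random
    # subset S of the residual components is drawn and the step solves the
    # damped normal equations built from the sampled rows only.
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) <= tol:
            break
        S = rng.choice(r.size, size=min(sample_size, r.size), replace=False)
        J_S = jacobian(x)[S, :]                      # sampled Jacobian rows
        r_S = r[S]
        H = J_S.T @ J_S + damping * np.eye(x.size)   # damped sampled model
        step = np.linalg.solve(H, -J_S.T @ r_S)
        x = x + step
    return x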

Sufficient Conditions for Lipschitzian Error Bounds for Complementarity Systems

We are concerned with Lipschitzian error bounds and Lipschitzian stability properties for solutions of a complementarity system. For this purpose, we deal with a nonsmooth slack-variable reformulation of the complementarity system, and study conditions under which the reformulation serves as a local error bound for the solution set of the complementarity system. We also discuss … Read more
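For context, a complementarity system in the variable \(x\) can be written as
\[
G(x) \ge 0, \qquad H(x) \ge 0, \qquad \langle G(x), H(x) \rangle = 0,
\]
with solution set \(\mathcal{S}\). A standard nonsmooth residual is \(\Phi(x) := \min\{G(x), H(x)\}\) (componentwise minimum), which is one example of the nonsmooth reformulations the abstract refers to (the paper itself works with a slack-variable variant), and a local Lipschitzian error bound asks for \(\ell > 0\) and a neighborhood of a solution on which
\[
\operatorname{dist}(x, \mathcal{S}) \;\le\; \ell\, \|\Phi(x)\|.
\]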