Markov Decision Process Design: A Framework for Integrating Strategic and Operational Decisions

We consider the problem of optimally designing a system for repeated use under uncertainty. We develop a modeling framework that integrates design and operational phases, which are represented by a mixed-integer program and discounted-cost infinite-horizon Markov decision processes, respectively. We seek to simultaneously minimize the design costs and the subsequent expected operational costs. This problem … Read more
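For orientation, a generic form of such a coupled objective (an illustrative sketch, not the paper's exact formulation; the design set Z, design cost c, stage cost c_op, discount factor $\gamma$, and policy $\pi$ are placeholder notation) is

\[
\min_{z \in Z} \; c(z) \;+\; \min_{\pi} \; \mathbb{E}^{\pi}_{z}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, c_{\mathrm{op}}(s_t, a_t; z) \right], \qquad \gamma \in (0,1),
\]

where the inner minimization ranges over operating policies of the Markov decision process induced by the chosen design z.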

A subspace inertial method for derivative-free nonlinear monotone equations

We introduce SILSA, a subspace inertial line search algorithm, for finding solutions of nonlinear monotone equations (NME). At each iteration, a new point is generated in a subspace spanned by the previous points. Among the finitely many points spanning this subspace, the one with the largest residual norm is replaced by the new point to update … Read more
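A minimal Python sketch of the replacement rule described in the excerpt (the residual map F, the stored points, and how the new point x_new is produced by the inertial line search are all assumptions for illustration):

import numpy as np

def update_subspace_memory(points, F, x_new):
    # Replace the stored point with the largest residual norm by x_new.
    # points: list of 1-D numpy arrays spanning the current subspace
    # F: callable mapping a point x to its residual vector F(x)
    # x_new: newly generated point (assumed to come from the inertial
    #        line-search step, which is not shown here)
    residual_norms = [np.linalg.norm(F(x)) for x in points]
    worst = int(np.argmax(residual_norms))  # index of largest residual norm
    points[worst] = x_new
    return points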

Balancing Communication and Computation in Gradient Tracking Algorithms for Decentralized Optimization

Gradient tracking methods have emerged as one of the most popular approaches for solving decentralized optimization problems over networks. In this setting, each node in the network has a portion of the global objective function, and the goal is to collectively optimize this function. At every iteration, gradient tracking methods perform two operations (steps): (1) … Read more
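The excerpt is cut off before listing the two operations; in a standard gradient tracking scheme they are a communication (mixing) step and a computation (tracker update) step. A rough Python sketch of one common variant follows (the doubly stochastic mixing matrix W, step size alpha, and local gradient oracles are illustrative assumptions, not necessarily the exact method analyzed in the paper):

import numpy as np

def gradient_tracking(W, grads, x0, alpha=0.1, iters=100):
    # Standard gradient tracking iteration (a common variant only).
    # W: (n, n) doubly stochastic mixing matrix of the network
    # grads: list of n callables, grads[i](x) = gradient of node i's local objective
    # x0: (n, d) array of initial local iterates
    n, d = x0.shape
    x = x0.copy()
    g = np.stack([grads[i](x[i]) for i in range(n)])  # local gradients
    y = g.copy()                                      # gradient trackers
    for _ in range(iters):
        # (1) communication: exchange iterates and trackers with neighbors
        x_new = W @ x - alpha * y
        g_new = np.stack([grads[i](x_new[i]) for i in range(n)])
        # (2) computation: tracker update keeps the average of y equal to
        #     the average of the local gradients
        y = W @ y + g_new - g
        x, g = x_new, g_new
    return x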

Proximal bundle methods for hybrid weakly convex composite optimization problems

This paper establishes the iteration-complexity of proximal bundle methods for solving hybrid (i.e., a blend of smooth and nonsmooth) weakly convex composite optimization (HWC-CO) problems. This is done in a unified manner by considering a proximal bundle framework (PBF) based on a generic bundle update scheme that subsumes various well-known update schemes. In contrast … Read more
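For context, the core subproblem of a generic proximal bundle step (illustrative notation; not necessarily the exact subproblem of the PBF analyzed in the paper) is

\[
x^{+} = \operatorname*{argmin}_{x} \; \max_{j \in B} \left\{ f(x_j) + \langle g_j, x - x_j \rangle \right\} + \frac{1}{2\lambda} \|x - x_c\|^2,
\]

where B indexes the current bundle of cuts, $g_j$ is a subgradient of f at $x_j$, $x_c$ is the prox-center, and $\lambda > 0$ is the prox stepsize; for a $\rho$-weakly convex f, taking $\lambda < 1/\rho$ makes $f + \frac{1}{2\lambda}\|\cdot - x_c\|^2$ convex.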

Polyhedral Properties of RLT Relaxations of Nonconvex Quadratic Programs and Their Implications on Exact Relaxations

We study linear programming relaxations of nonconvex quadratic programs given by the reformulation-linearization technique (RLT), referred to as RLT relaxations. We investigate the relations between the polyhedral properties of the feasible regions of a quadratic program and its RLT relaxation. We establish various connections between recession directions, boundedness, and vertices of the two feasible regions. … Read more
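As a small concrete illustration of the level-1 RLT construction (shown here only for box constraints $0 \le x \le e$; the general construction multiplies pairs of constraint factors), each product $x_i x_j$ is replaced by a new variable $X_{ij}$, and the products of bound factors yield the linear constraints

\[
X_{ij} \ge 0, \qquad X_{ij} \le x_i, \qquad X_{ij} \le x_j, \qquad X_{ij} \ge x_i + x_j - 1,
\]

so a quadratic objective $x^{\top} Q x + c^{\top} x$ becomes the linear function $\langle Q, X\rangle + c^{\top} x$ over the resulting polyhedron.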

Test Instances for Multiobjective Mixed-Integer Nonlinear Optimization

A suitable set of test instances, also known as benchmark problems, is a key ingredient for systematically evaluating numerical solution algorithms for a given class of optimization problems. While several solution algorithms for multiobjective mixed-integer nonlinear optimization problems have been proposed in recent years, there is a lack of a well-established set … Read more

An Explicit Three-Term Polak-Ribière-Polyak Conjugate Gradient Method for Bicriteria Optimization

We propose in this paper a Polak-Ribière-Polyak conjugate gradient type method for solving bicriteria optimization problems without resorting to scalarization techniques. Two particular advantages of this contribution are worth noting. First, the suggested descent direction, common to both criteria, can be computed directly from an explicit formula without solving any intermediate subproblem. Second, the descent … Read more
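The excerpt does not reproduce the formula; for reference, the classical three-term Polak-Ribière-Polyak direction in the single-criterion case (the authors' bicriteria direction, common to both objectives, is a different object) reads

\[
d_k = -g_k + \beta_k^{\mathrm{PRP}} d_{k-1} - \theta_k (g_k - g_{k-1}), \qquad
\beta_k^{\mathrm{PRP}} = \frac{g_k^{\top}(g_k - g_{k-1})}{\|g_{k-1}\|^2}, \qquad
\theta_k = \frac{g_k^{\top} d_{k-1}}{\|g_{k-1}\|^2},
\]

which satisfies $g_k^{\top} d_k = -\|g_k\|^2$, i.e., it is a descent direction regardless of the line search.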

Solving low-rank semidefinite programs via manifold optimization

We propose a manifold optimization approach to solve linear semidefinite programs (SDPs) with low-rank solutions. This approach incorporates the augmented Lagrangian method and the Burer-Monteiro factorization, and features adaptive strategies for updating the factorization size and the penalty parameter. We prove that the present algorithm can solve SDPs to global optimality, despite the … Read more
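For context, the standard Burer-Monteiro / augmented Lagrangian setup for an SDP $\min \langle C, X\rangle$ s.t. $\mathcal{A}(X) = b$, $X \succeq 0$ substitutes $X = RR^{\top}$ with $R \in \mathbb{R}^{n \times p}$ and minimizes, for multipliers y and penalty $\sigma > 0$,

\[
\mathcal{L}_{\sigma}(R, y) = \langle C, RR^{\top}\rangle - y^{\top}\!\left(\mathcal{A}(RR^{\top}) - b\right) + \frac{\sigma}{2}\left\|\mathcal{A}(RR^{\top}) - b\right\|^2 ;
\]

the adaptive rules for growing p and updating $\sigma$ are the paper's contribution and are not captured by this generic sketch.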

On a Frank-Wolfe Approach for Abs-smooth Functions

We propose an algorithm which appears to be the first bridge between the fields of conditional gradient methods and abs-smooth optimization. Our problem setting is motivated by various applications that lead to nonsmoothness, such as $\ell_1$ regularization, phase retrieval problems, or ReLU activation in machine learning. To handle the nonsmoothness in our problem, we propose … Read more
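For readers unfamiliar with conditional gradient methods, the classical Frank-Wolfe step for a smooth f over a compact convex set C is

\[
s_k \in \operatorname*{argmin}_{s \in C} \, \langle \nabla f(x_k), s \rangle, \qquad x_{k+1} = x_k + \gamma_k (s_k - x_k), \quad \gamma_k \in [0,1];
\]

the paper's contribution is to adapt this template to abs-smooth objectives, where $\nabla f$ need not exist everywhere, so the step above is only the smooth-case baseline.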

Effective matrix adaptation strategy for noisy derivative-free optimization

In this paper, we introduce a new, effective matrix adaptation evolution strategy (MADFO) for noisy derivative-free optimization problems. Like every MAES solver, MADFO consists of three phases: mutation, selection and recombination. MADFO improves the mutation phase by generating well-chosen step sizes, neither too small nor too large, which increase the probability of selecting mutation points … Read more
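As a generic illustration of the mutation/selection/recombination cycle of a matrix adaptation evolution strategy (a plain (mu, lambda)-ES sketch in Python; MADFO's step-size heuristics and noise handling are not reproduced here):

import numpy as np

def es_iteration(f, mean, sigma, C, lam=20, mu=5, rng=None):
    # One mutation/selection/recombination cycle of a generic
    # matrix-adaptation-style evolution strategy (illustrative only).
    # f: noisy objective to minimize; mean: current mean (1-D array);
    # sigma: scalar step size; C: positive definite mutation shape matrix.
    rng = rng or np.random.default_rng()
    n = mean.size
    A = np.linalg.cholesky(C)                 # factor of the mutation shape
    # mutation: sample lam candidate points around the current mean
    z = rng.standard_normal((lam, n))
    offspring = mean + sigma * (z @ A.T)
    # selection: keep the mu best offspring under the (noisy) objective
    values = np.array([f(x) for x in offspring])
    best = offspring[np.argsort(values)[:mu]]
    # recombination: the new mean is the average of the selected points
    return best.mean(axis=0)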