## Status Determination by Interior-Point Methods for Convex Optimization Problems in Domain-Driven Form

We study the geometry of convex optimization problems given in a Domain-Driven form and categorize the possible statuses of these problems using duality theory. Our duality theory for the Domain-Driven form, which accepts both conic and non-conic constraints, lets us determine and certify the status of a problem as rigorously as the best approaches for conic formulations … Read more
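The abstract is cut off here, but the headline claim concerns certifying problem statuses through duality. As a loose illustration in the far simpler LP special case (not the Domain-Driven generality the paper treats), infeasibility can be certified by a Farkas-type dual vector; the sketch below searches for one numerically, with constraint data invented for the example.

```python
# Hypothetical sketch: certifying primal infeasibility of {x : A x <= b}
# via a Farkas certificate y >= 0 with A^T y = 0 and b^T y < 0.
# This is plain LP duality, far simpler than the Domain-Driven setting.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0], [-1.0]])   # constraints: x <= 0 and -x <= -1 (so x >= 1)
b = np.array([0.0, -1.0])       # clearly infeasible together

# Search for a certificate: minimize b^T y  s.t.  A^T y = 0, 0 <= y <= 1.
res = linprog(c=b, A_eq=A.T, b_eq=np.zeros(A.shape[1]),
              bounds=[(0, 1)] * len(b), method="highs")

if res.success and res.fun < -1e-9:
    print("status: infeasible, certificate y =", res.x)
else:
    print("no infeasibility certificate found at this scale")
```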

## Avoiding redundant columns by adding classical Benders cuts to column generation subproblems

When solving the linear programming (LP) relaxation of a mixed-integer program (MIP) with column generation, columns might be generated that are not needed to express any integer optimal solution of the MIP. Such columns are called strongly redundant, and the dual bound obtained by solving the LP relaxation is potentially stronger if these columns are … Read more
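For context on where such cuts would enter, here is a minimal textbook column-generation loop for one-dimensional cutting stock, with invented data; it is not the paper's setting. The comment in the pricing step marks where the proposed Benders cuts would be added, and the cut machinery itself is omitted.

```python
# Textbook column generation for 1-D cutting stock (illustrative data only).
import numpy as np
from scipy.optimize import linprog

W = 10                            # roll length
sizes = np.array([3, 4, 5])       # item sizes
demand = np.array([4, 2, 3])      # item demands

# Initial patterns: fill a roll with copies of a single item.
patterns = [np.eye(len(sizes))[i] * (W // s) for i, s in enumerate(sizes)]

for _ in range(20):
    P = np.column_stack(patterns)
    # Restricted master LP:  min 1'x  s.t.  P x >= demand,  x >= 0.
    res = linprog(np.ones(P.shape[1]), A_ub=-P, b_ub=-demand, method="highs")
    y = -res.ineqlin.marginals                  # duals of the demand rows

    # Pricing subproblem: unbounded knapsack  max y'a  s.t.  sizes'a <= W.
    # <-- the paper's approach would add classical Benders cuts here, so
    #     that strongly redundant columns are never generated.
    dp, choice = np.zeros(W + 1), -np.ones(W + 1, dtype=int)
    for w in range(1, W + 1):
        for i, s in enumerate(sizes):
            if s <= w and dp[w - s] + y[i] > dp[w]:
                dp[w], choice[w] = dp[w - s] + y[i], i
    if dp[W] <= 1 + 1e-9:                       # no improving column left
        break
    a, w = np.zeros(len(sizes)), W              # recover the new pattern
    while w > 0 and choice[w] >= 0:
        i = choice[w]
        a[i] += 1
        w -= sizes[i]
    patterns.append(a)

print(f"LP relaxation bound: {res.fun:.3f} with {len(patterns)} columns")
```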

## Clairvoyant Restarts in Branch-and-Bound Search Using Online Tree-Size Estimation

We propose a simple and general online method to measure the search progress within the Branch-and-Bound algorithm, from which we estimate the size of the remaining search tree. We then show how this information can help solvers algorithmically at runtime by designing a restart strategy for Mixed-Integer Programming (MIP) solvers that decides whether to restart … Read more
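The paper's progress measure is its own; as a flavor of online tree-size estimation, the sketch below implements Knuth's classical random-probe estimator on a synthetic tree, followed by a naive budget-based restart test. The tree, probe count, and budget are all made up for illustration.

```python
# Knuth's (1975) tree-size estimator: one random root-to-leaf probe gives an
# unbiased estimate of the node count; averaging probes sharpens it.
import random

def build_tree(depth, max_depth=12):
    """Random toy tree: each non-root node gets 0-3 children."""
    if depth >= max_depth:
        return []
    n_children = random.randint(0, 3) if depth > 0 else 3
    return [build_tree(depth + 1, max_depth) for _ in range(n_children)]

def size(t):
    return 1 + sum(size(c) for c in t)

def knuth_probe(t):
    """Walk down one random path, multiplying by branching degrees."""
    if not t:
        return 1
    return 1 + len(t) * knuth_probe(random.choice(t))

random.seed(1)
tree = build_tree(0)
est = sum(knuth_probe(tree) for _ in range(2000)) / 2000
print(f"true size {size(tree)}, estimated {est:.0f}")

BUDGET = 10_000   # hypothetical node budget for the current run
if est > BUDGET:
    print("clairvoyant restart: predicted tree exceeds the remaining budget")
```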

## High-Order Evaluation Complexity for Convexly-Constrained Optimization with Non-Lipschitzian Group Sparsity Terms

This paper studies high-order evaluation complexity for partially separable convexly-constrained optimization involving non-Lipschitzian group sparsity terms in a nonconvex objective function. We propose a partially separable adaptive regularization algorithm using a $p$-th order Taylor model and show that the algorithm can produce an $(\epsilon,\delta)$-approximate $q$-th-order stationary point in at most $O(\epsilon^{-(p+1)/(p-q+1)})$ evaluations of the objective … Read more
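To make the bound concrete, the exponent $(p+1)/(p-q+1)$ can be tabulated for small orders; $p=q=1$ recovers the classical $O(\epsilon^{-2})$ first-order rate, and $p=2$, $q=1$ gives the $O(\epsilon^{-3/2})$ rate familiar from cubic regularization.

```python
# Worked example of the stated bound O(epsilon^{-(p+1)/(p-q+1)}):
# the evaluation exponent for a few (model order p, stationarity order q) pairs.
for p, q in [(1, 1), (2, 1), (2, 2), (3, 1)]:
    print(f"p={p}, q={q}: evaluations ~ eps^(-{(p + 1) / (p - q + 1):.4g})")
```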

## Minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity

An adaptive regularization algorithm using inexact function and derivative evaluations is proposed for the solution of composite nonsmooth nonconvex optimization. It is shown that this algorithm needs at most $O(|\log(\epsilon)|\,\epsilon^{-2})$ evaluations of the problem’s functions and their derivatives for finding an $\epsilon$-approximate first-order stationary point. This complexity bound therefore generalizes that provided by [Bellavia, Gurioli, … Read more
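The algorithm itself is not reproduced here; the toy below only mimics the general idea of adaptive regularization with dynamically controlled inexactness, on an invented quadratic test problem, and should not be read as the paper's method.

```python
# Hypothetical toy: an adaptive-regularization loop that only sees inexact
# gradients, with the allowed error tied to the regularization weight
# (a dynamic-accuracy rule in the same spirit as the paper's framework).
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: 0.5 * x @ x          # simple smooth test function
grad = lambda x: x

x, sigma, eps = np.full(5, 2.0), 1.0, 1e-4
for k in range(500):
    noise = rng.standard_normal(x.size)
    g = grad(x) + (0.05 / sigma) * noise / np.linalg.norm(noise)  # inexact gradient
    if np.linalg.norm(g) < eps:
        break
    s = -g / sigma                              # minimizes g's + (sigma/2)||s||^2
    predicted = (g @ g) / (2 * sigma)           # model decrease
    if f(x) - f(x + s) >= 0.1 * predicted:      # sufficient actual decrease
        x, sigma = x + s, max(sigma / 2, 1e-2)
    else:
        sigma *= 2          # reject: tighten regularization and accuracy together
print(f"stopped at iter {k}, ||x|| = {np.linalg.norm(x):.2e}")
```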

## Robust sample average approximation with small sample sizes

We consider solving stochastic optimization problems in which we seek to minimize the expected value of an objective function with respect to an unknown distribution of random parameters. Our focus is on models that use sample average approximation (SAA) with small sample sizes. We analyse the out-of-sample performance of solutions obtained by solving a robust … Read more
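A bare-bones SAA example in the small-sample regime the paper targets (the robust modification around the empirical distribution is omitted); the newsvendor-style cost and all data are invented for illustration.

```python
# Plain sample average approximation on a tiny sample.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
xi = rng.normal(loc=3.0, scale=1.0, size=10)     # small sample of random demands

# Newsvendor-style cost: order x, pay holding or backorder on realized demand.
cost = lambda x, d: 1.0 * np.maximum(x - d, 0) + 4.0 * np.maximum(d - x, 0)
saa_obj = lambda x: np.mean(cost(x[0], xi))      # sample average objective

res = minimize(saa_obj, x0=[1.0], method="Nelder-Mead")
print(f"SAA order quantity from {xi.size} samples: {res.x[0]:.2f}")
# With so few samples the solution overfits the draw; the paper studies how a
# robust SAA variant improves out-of-sample performance in exactly this regime.
```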

## A Framework for Peak Shaving Through the Coordination of Smart Homes

In demand-response programs, aggregators balance the needs of generation companies and end-users. This work proposes a two-phase framework that shaves the aggregated peak loads while maintaining the desired comfort level for users. In the first phase, the users determine their planned consumption. For the second phase, we develop a bilevel model with mixed-integer variables and … Read more
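The second-phase bilevel mixed-integer model is well beyond a short snippet; the stand-in below is a deliberately naive single-level greedy scheduler, shown only to make the two-phase structure concrete, with all load data invented.

```python
# Toy two-phase structure: phase 1 gives planned base loads, a greedy phase 2
# places each home's flexible job where the aggregate peak grows the least.
import numpy as np

rng = np.random.default_rng(2)
T, homes = 24, 5
base = rng.uniform(0.2, 1.0, size=(homes, T))   # phase 1: planned base loads (kW)
flex = [(2.0, 3)] * homes                       # one flexible job per home: (kW, hours)

# Uncoordinated baseline: every flexible job runs at the 18:00 evening peak.
naive = base.sum(axis=0).copy()
for kw, dur in flex:
    naive[18:18 + dur] += kw

schedule = np.zeros((homes, T))
for h, (kw, dur) in enumerate(flex):
    agg = base.sum(axis=0) + schedule.sum(axis=0)
    def peak_if(t, agg=agg, kw=kw, dur=dur):
        prof = agg.copy()
        prof[t:t + dur] += kw                   # resulting profile if job starts at t
        return prof.max()
    best = min(range(T - dur + 1), key=peak_if)
    schedule[h, best:best + dur] = kw

after = (base.sum(axis=0) + schedule.sum(axis=0)).max()
print(f"aggregate peak: {naive.max():.2f} kW uncoordinated, {after:.2f} kW coordinated")
```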

## Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron

Modern machine learning focuses on highly expressive models that are able to fit or interpolate the data completely, resulting in zero training loss. For such models, we show that the stochastic gradients of common loss functions satisfy a strong growth condition. Under this condition, we prove that constant step-size stochastic gradient descent (SGD) with Nesterov … Read more
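A small demo of the interpolation setting: an over-parameterized least-squares model driven to (near) zero training loss by plain constant step-size SGD. The accelerated Nesterov variant and the strong growth condition analysis are not reproduced here.

```python
# Interpolation regime: more parameters than data, realizable targets.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                              # n data points, d >> n parameters
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d)              # realizable: zero loss is attainable

w = np.zeros(d)
step = 1.0 / np.sum(A ** 2, axis=1).max()   # safe constant step for single samples
for _ in range(5000):
    i = rng.integers(n)                     # pick one data point
    w -= step * (A[i] @ w - b[i]) * A[i]    # constant step-size SGD

print(f"training loss: {np.mean((A @ w - b) ** 2):.2e}")  # driven toward zero
```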

## Weak subgradient algorithm for solving nonsmooth nonconvex unconstrained optimization problems

This paper presents a weak-subgradient-based method for solving nonconvex unconstrained optimization problems. The method uses a weak subgradient of the objective function at the current point to generate a new one at every iteration. The concept of the weak subgradient is based on the idea of using supporting cones to the graph of … Read more
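Since computing a weak subgradient is the technical heart of the paper and is not shown in the abstract, the sketch below substitutes an ordinary subgradient on a convex test function, only to illustrate the shape of the iteration; the weak notion is what extends it to nonconvex functions via supporting cones.

```python
# Subgradient iteration with diminishing steps (ordinary subgradient used as
# a stand-in for the paper's weak subgradient).
import numpy as np

f = lambda x: np.sum(np.abs(x))                 # nonsmooth test function
subgrad = lambda x: np.sign(x)                  # a valid subgradient of the l1 norm

x = np.array([3.0, -2.0, 1.5])
for k in range(1, 201):
    x = x - (1.0 / k) * subgrad(x)              # diminishing step sizes
print(f"f(x) after 200 iterations: {f(x):.3f}")
```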

## An optimal control theory for accelerated optimization

Accelerated optimization algorithms can be generated using a double-integrator model for the search dynamics embedded in an optimal control problem.
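One concrete instance of this idea, hedged as an assumption since the abstract gives no details: with search state $\dot{x}=v$, $\dot{v}=u$ and the feedback $u = -(3/t)\,v - \nabla f(x)$, one obtains the ODE that Su, Boyd, and Candès associate with Nesterov's method. The sketch below integrates it with explicit Euler on a one-dimensional quadratic; the paper derives such controls from optimal control theory more generally.

```python
# Double-integrator search dynamics with a damping feedback control.
grad = lambda x: x - 1.0                 # gradient of f(x) = 0.5 * (x - 1)^2

x, v, dt = 5.0, 0.0, 1e-2
for k in range(1, 2001):
    t = k * dt
    u = -(3.0 / t) * v - grad(x)         # feedback control on the double integrator
    v += dt * u                          # vdot = u
    x += dt * v                          # xdot = v
print(f"x after integration: {x:.4f} (minimizer is 1.0)")
```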