Decomposition algorithm for large-scale two-stage unit-commitment

Every day, electricity generation companies submit a generation schedule to the grid operator for the coming day; computing an optimal schedule is called the unit-commitment problem. Generation companies can also occasionally submit changes to the schedule, which can be seen as intra-daily incomplete recourse actions. In this paper, we propose a two-stage formulation of unit-commitment, wherein …
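
For orientation only, a generic two-stage program has the standard textbook shape
\[
\min_{x \in X} \; c^\top x + \mathbb{E}_{\xi}\big[\, Q(x,\xi) \,\big],
\qquad
Q(x,\xi) = \min_{y \in Y(x,\xi)} q(\xi)^\top y ,
\]
where $x$ collects the first-stage (day-ahead) commitment decisions and $y$ the recourse actions taken once the uncertainty $\xi$ is revealed; these symbols are illustrative notation, not necessarily the paper's exact model.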

An optimal subgradient algorithm with subspace search for costly convex optimization problems

This paper presents an acceleration of the optimal subgradient algorithm OSGA \cite{NeuO} for solving convex optimization problems where the objective function involves costly affine and cheap nonlinear terms. We combine OSGA with a multidimensional subspace search technique, which leads to a low-dimensional problem that can be solved efficiently. Numerical results concerning some applications are reported. A …
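
As a rough illustration of the subspace-search idea (not the paper's OSGA coupling), suppose $f(x)=\varphi(Ax)+\psi(x)$ where the product $Ax$ is the costly part. Precomputing $AD$ for a few directions $D$ makes the restriction of $f$ to $x+\mathrm{range}(D)$ cheap to minimize; all names and choices below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def subspace_search(phi, psi, A, x, D, Ax=None):
    """Multidimensional subspace search for f(x) = phi(A x) + psi(x) with a
    costly affine map A: precompute A @ D once, after which the restriction
    c -> phi(Ax + (AD) c) + psi(x + D c) is cheap in the few variables c.
    Generic sketch of the idea, not the paper's method."""
    Ax = A @ x if Ax is None else Ax
    AD = A @ D                                   # one costly product per direction
    obj = lambda c: phi(Ax + AD @ c) + psi(x + D @ c)
    c = minimize(obj, np.zeros(D.shape[1])).x    # low-dimensional subproblem
    return x + D @ c

# Example: least squares phi(u) = 0.5 ||u - b||^2 over a 2-dimensional subspace
rng = np.random.default_rng(1)
A, b = rng.standard_normal((100, 30)), rng.standard_normal(100)
x = np.zeros(30)
D = np.column_stack([A.T @ (A @ x - b), rng.standard_normal(30)])  # gradient + random direction
x_new = subspace_search(lambda u: 0.5 * np.sum((u - b) ** 2), lambda v: 0.0, A, x, D)
print(0.5 * np.sum((A @ x_new - b) ** 2) <= 0.5 * np.sum((A @ x - b) ** 2))  # True
```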

A Nonmonotone Approach without Differentiability Test for Gradient Sampling Methods

Recently, optimization problems involving nonsmooth, locally Lipschitz functions have been the subject of investigation, and an innovative method known as Gradient Sampling has gained attention. Although the method has shown good results on important real problems, some drawbacks remain unexplored. This study suggests modifications to the gradient sampling class of methods in order to …
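
For context, a minimal sketch of one step of the classical gradient sampling scheme, not the modified version the abstract proposes: sample gradients in an $\varepsilon$-ball around the current point and use the minimum-norm element of their convex hull as a search direction. All parameter choices below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def gs_direction(grad, x, eps=1e-3, m=None, rng=None):
    """One gradient sampling step: sample gradients of a locally Lipschitz f
    at x and at m nearby points, then return the negative of the minimum-norm
    element of their convex hull (an approximate steepest descent direction)."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    m = n + 1 if m is None else m                          # a common sample count
    pts = [x] + [x + eps * rng.uniform(-1.0, 1.0, n) for _ in range(m)]
    G = np.array([grad(p) for p in pts])                   # sampled gradients, one per row
    # Min-norm point of conv{g_i}: min ||G^T w||^2  s.t.  w >= 0, sum(w) = 1
    res = minimize(lambda w: np.sum((G.T @ w) ** 2),
                   np.full(len(pts), 1.0 / len(pts)),
                   bounds=[(0.0, 1.0)] * len(pts),
                   constraints=({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},),
                   method='SLSQP')
    return -(G.T @ res.x)

# Example on f(x) = ||x||_1, which is nonsmooth along the axes
d = gs_direction(lambda p: np.sign(p), np.array([1e-4, -2.0]))
print(d)  # roughly (0, 1): decreases |x_2|, nearly flat in the kinked x_1 slot
```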

Polynomial Root Radius Optimization with Affine Constraints

The root radius of a polynomial is the maximum of the moduli of its roots (zeros). We consider the following optimization problem: minimize the root radius over monic polynomials of degree $n$, with either real or complex coefficients, subject to $k$ consistent affine constraints on the coefficients. We show that there always exists an optimal …
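
The root radius itself is easy to evaluate numerically; a small sketch using numpy's coefficient convention (highest degree first):

```python
import numpy as np

def root_radius(coeffs):
    """Root radius of a polynomial: the maximum modulus of its roots.
    `coeffs` lists coefficients highest degree first (numpy convention),
    so a monic degree-n polynomial starts with 1."""
    return np.abs(np.roots(coeffs)).max()

# p(z) = z^2 - 2z + 5 has roots 1 +/- 2i, so the root radius is sqrt(5)
print(root_radius([1.0, -2.0, 5.0]))  # ~2.236
```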

Inexact Proximal Point Methods for Quasiconvex Minimization on Hadamard Manifolds

In this paper we present two inexact proximal point algorithms to solve minimization problems for quasiconvex objective functions on Hadamard manifolds. We prove that, under natural assumptions, the sequences generated by the algorithms are well defined and converge to critical points of the problem. We also present an application of the method to demand theory …
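
To fix ideas, here is the Euclidean, exact version of the proximal point iteration; the paper replaces the squared norm with the Riemannian distance on a Hadamard manifold and solves each subproblem inexactly, neither of which this flat sketch attempts.

```python
import numpy as np
from scipy.optimize import minimize

def proximal_point(f, x0, lam=1.0, iters=50):
    """Classical (Euclidean, exact) proximal point iteration:
        x_{k+1} = argmin_y  f(y) + (lam/2) * ||y - x_k||^2.
    On a Hadamard manifold, ||y - x_k|| becomes the Riemannian
    distance d(y, x_k); lam and iters here are placeholders."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        res = minimize(lambda y: f(y) + 0.5 * lam * np.sum((y - x) ** 2), x)
        x = res.x
    return x

# Example on a smooth quasiconvex function: f(y) = log(1 + ||y||^2)
print(proximal_point(lambda y: np.log(1.0 + np.sum(y ** 2)), x0=[3.0]))  # -> near 0
```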

Nonsmooth Methods for Control Design with Integral Quadratic Constraints

We develop an optimization technique to compute local solutions to synthesis problems subject to integral quadratic constraints (IQCs). We use the fact that IQCs may be transformed into semi-infinite maximum eigenvalue constraints over the frequency axis and approach them via nonsmooth optimization methods. We develop a suitable spectral bundle method and prove its convergence in …
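
A semi-infinite maximum eigenvalue constraint over the frequency axis has the form $\lambda_{\max}(M(\omega)) \le 0$ for all $\omega \ge 0$; a crude frequency sweep like the one below locates the worst frequency, the kind of inner maximizer a bundle method would query. The matrix $M$ here is a toy placeholder, not an IQC from the paper.

```python
import numpy as np

def worst_case_eigenvalue(M_of_omega, omegas):
    """Frequency sweep for a constraint lambda_max(M(omega)) <= 0 for all omega.
    M_of_omega must return a Hermitian matrix; the grid and M are illustrative."""
    vals = [np.linalg.eigvalsh(M_of_omega(w))[-1] for w in omegas]
    i = int(np.argmax(vals))
    return omegas[i], vals[i]          # most active frequency and constraint value

# Toy Hermitian-valued M(omega); a positive value means the constraint fails there
M = lambda w: np.array([[-1.0, 1j * w], [-1j * w, -2.0 + 0.1 * w ** 2]])
w_star, val = worst_case_eigenvalue(M, np.linspace(0.0, 10.0, 200))
print(w_star, val)
```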

An optimal subgradient algorithm for large-scale bound-constrained convex optimization

This paper shows that the OSGA algorithm, which uses first-order information to solve convex optimization problems with optimal complexity, can be used to efficiently solve arbitrary bound-constrained convex optimization problems. This is done by constructing an explicit method as well as an inexact scheme for solving the bound-constrained rational subproblem required by OSGA. …

An optimal subgradient algorithm for large-scale convex optimization in simple domains

This paper shows that the optimal subgradient algorithm, OSGA, proposed in \cite{NeuO} can be used for solving structured large-scale convex constrained optimization problems. Only first-order information is required, and the optimal complexity bounds for both smooth and nonsmooth problems are attained. More specifically, we consider two classes of problems: (i) a convex objective with a …

An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions

We propose a forward-backward proximal-type algorithm with inertial/memory effects for minimizing the sum of a nonsmooth function and a smooth one in the nonconvex setting. The sequence of iterates generated by the algorithm converges to a critical point of the objective function provided an appropriate regularization of the objective satisfies the Kurdyka-Lojasiewicz inequality, which is …
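
A minimal sketch of the inertial forward-backward step pattern, $x_{k+1} = \mathrm{prox}_{\gamma g}\big(x_k - \gamma \nabla f(x_k) + \beta (x_k - x_{k-1})\big)$. The example uses a convex $\ell_1$ regularizer for simplicity, whereas the paper covers the nonconvex case, and the fixed $\gamma$, $\beta$ below ignore the admissibility conditions its convergence theory imposes.

```python
import numpy as np

def inertial_forward_backward(grad_f, prox_g, x0, gamma=0.1, beta=0.5, iters=200):
    """Forward-backward step on f + g with an inertial (heavy-ball) term:
        x_{k+1} = prox_{gamma*g}( x_k - gamma*grad_f(x_k) + beta*(x_k - x_{k-1}) ).
    Fixed gamma and beta are placeholders for the theory's step-size rules."""
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = x - gamma * grad_f(x) + beta * (x - x_prev)
        x_prev, x = x, prox_g(y, gamma)
    return x

# Example: f(x) = 0.5 ||Ax - b||^2 (smooth), g(x) = lam ||x||_1 (nonsmooth),
# whose prox is soft-thresholding with threshold gamma * lam.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 50)), rng.standard_normal(20), 0.1
soft = lambda y, g: np.sign(y) * np.maximum(np.abs(y) - g * lam, 0.0)
x = inertial_forward_backward(lambda x: A.T @ (A @ x - b), soft, np.zeros(50),
                              gamma=1.0 / np.linalg.norm(A, 2) ** 2)
print(np.count_nonzero(np.round(x, 6)))  # sparse solution
```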

Randomized First-order Methods for Saddle Point Optimization

In this paper, we present novel randomized algorithms for solving saddle point problems whose dual feasible region is a direct product of many convex sets. Our algorithms can achieve ${\cal O}(1/N)$ rate of convergence by solving only one dual subproblem at each iteration. Our algorithms can also achieve ${\cal O}(1/N^2)$ rate of convergence if a …