An interior-point Lagrangian decomposition method for separable convex optimization

In this paper we propose a distributed algorithm for solving large-scale separable convex problems using Lagrangian dual decomposition and the interior-point framework. By adding self-concordant barrier terms to the ordinary Lagrangian we prove under mild assumptions that the corresponding family of augmented dual functions is self-concordant. This makes it possible to efficiently use the Newton … Read more
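A rough one-dimensional sketch of the two ingredients above — dualizing a constraint and adding a barrier term so that the inner minimization is smooth and Newton-friendly — on an assumed toy problem (minimize x² subject to x ≥ 1); this is an illustration of the idea, not the paper's algorithm:

```python
import math

def inner_newton(lam, t, x=1.0):
    # Minimize x^2 + lam*(1 - x) - (1/t)*log(x) over x > 0 by Newton's method;
    # the -log barrier is the standard self-concordant barrier for x >= 0.
    for _ in range(50):
        grad = 2 * x - lam - 1 / (t * x)
        hess = 2 + 1 / (t * x * x)
        x = max(x - grad / hess, 1e-8)   # stay in the barrier's domain
    return x

# Gradient ascent on the (now smooth) dual function of the toy problem.
t, lam = 1e4, 0.0
for _ in range(2000):
    x = inner_newton(lam, t)
    lam = max(0.0, lam + 0.05 * (1.0 - x))
print(round(x, 2), round(lam, 2))  # approaches the KKT point x* = 1, lam* = 2
```

In a separable problem the same inner Newton step would be carried out independently for each subproblem, which is what makes the scheme distributed.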

Application of a smoothing technique to decomposition in convex optimization

Dual decomposition is a powerful technique for deriving decomposition schemes for convex optimization problems with specific structure. Although the augmented Lagrangian is computationally more stable than the ordinary Lagrangian, its prox-term destroys the separability of the given problem. In this paper we use another approach to obtain a smooth Lagrangian, based on a smoothing technique … Read more
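One classical instance of such a smoothing technique (a generic illustration, not necessarily the construction used in the paper) replaces a nonsmooth max by its log-sum-exp smoothing; unlike a prox-term, the smoothing acts termwise and therefore does not couple the variables of different subproblems:

```python
import math

def smooth_abs(x, mu):
    # Log-sum-exp smoothing of |x| = max(x, -x); the shift by mu*log(2)
    # makes the approximation exact at x = 0, with error at most mu*log(2).
    return mu * math.log(math.exp(x / mu) + math.exp(-x / mu)) - mu * math.log(2)

print(smooth_abs(0.0, 0.1))                      # 0.0 (exact at the kink)
print(abs(smooth_abs(1.0, 0.01) - 1.0) < 0.01)   # True: error is O(mu)
```

Smaller mu gives a tighter approximation at the price of a larger Lipschitz constant of the gradient, the usual smoothing trade-off.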

Risk averse feasible policies for large-scale multistage stochastic linear programs

We consider risk-averse formulations of stochastic linear programs having a structure that is common in real-life applications. Specifically, the optimization problem corresponds to controlling over a certain horizon a system whose dynamics are given by a transition equation depending affinely on an interstage dependent stochastic process. We put in place a rolling-horizon, time-consistent policy. … Read more

Nuclear norm minimization for the planted clique and biclique problems

We consider the problems of finding a maximum clique in a graph and finding a maximum-edge biclique in a bipartite graph. Both problems are NP-hard. We write both problems as matrix-rank minimization and then relax them using the nuclear norm. This technique, which may be regarded as a generalization of compressive sensing, has recently been … Read more
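The relaxation rests on the fact that the nuclear norm (the sum of singular values) is the convex envelope of the rank function on the spectral-norm unit ball. A minimal numpy sketch of the norm and of its proximal operator, singular-value thresholding — the workhorse of many nuclear-norm solvers — on an assumed toy diagonal matrix (an illustration of the relaxation, not the paper's method):

```python
import numpy as np

def nuclear_norm(A):
    # Sum of singular values: the convex surrogate for rank.
    return np.linalg.svd(A, compute_uv=False).sum()

def svt(A, tau):
    # Singular-value thresholding: the prox operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

A = np.diag([3.0, 1.0, 0.2])
print(round(nuclear_norm(A), 6))            # 4.2  (= 3 + 1 + 0.2)
print(np.linalg.matrix_rank(svt(A, 0.5)))   # 2: thresholding kills the small singular value
```

Just as soft-thresholding of vector entries promotes sparsity in compressive sensing, thresholding singular values promotes low rank — which is the analogy the abstract alludes to.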

Generic identifiability and second-order sufficiency in tame convex optimization

We consider linear optimization over a fixed compact convex feasible region that is semi-algebraic (or, more generally, “tame”). Generically, we prove that the optimal solution is unique and lies on a unique manifold, around which the feasible region is “partly smooth”, ensuring finite identification of the manifold by many optimization algorithms. Furthermore, second-order optimality conditions … Read more

Optimality conditions for several types of efficient solutions of set-valued optimization problems

A simple unified framework is presented for the study of strong efficient solutions, weak efficient solutions, positive proper efficient solutions, Henig global proper efficient solutions, Henig proper efficient solutions, super efficient solutions, Benson proper efficient solutions, Hartley proper efficient solutions, Hurwicz proper efficient solutions and Borwein proper efficient solutions of set-valued optimization problems with or without … Read more

Identifying Activity

Identification of active constraints in constrained optimization is of interest from both practical and theoretical viewpoints, as it holds the promise of reducing an inequality-constrained problem to an equality-constrained problem, in a neighborhood of a solution. We study this issue in the more general setting of composite nonsmooth minimization, in which the objective is a … Read more

Fejer processes with diminishing disturbances and decomposition of constrained nondifferentiable optimization problems

Iterative processes based on Fejer mappings with diminishing problem-specific shifts in the arguments are considered. Such structure allows fine-tuning of Fejer processes by directing them toward selected subsets of attracting sets. Use of various Fejer operators provides ample opportunities for decomposition and parallel computations. Subgradient projection algorithms with sequential and simultaneous projections on segmented constraints … Read more
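A minimal sketch of the simplest such process — sequential subgradient projections onto half-space constraints, without the diminishing disturbances — on an assumed toy pair of constraints:

```python
import numpy as np

def project_halfspace(x, a, b):
    # Euclidean projection onto {y : a.y <= b}; a nonexpansive (Fejer) mapping.
    v = a @ x - b
    if v <= 0:
        return x
    return x - (v / (a @ a)) * a

# Sequential projections onto two "segmented" constraints:
# x1 + x2 <= 1 and -x1 <= 0.  The iterates are Fejer-monotone with
# respect to the intersection of the two half-spaces.
constraints = [(np.array([1.0, 1.0]), 1.0), (np.array([-1.0, 0.0]), 0.0)]
x = np.array([3.0, 3.0])
for _ in range(50):
    for a, b in constraints:
        x = project_halfspace(x, a, b)
print(x)  # → [0.5 0.5], a point in the intersection
```

Because each projection touches only one constraint block, the simultaneous variant (averaging the projections) parallelizes naturally, which is the decomposition opportunity the abstract mentions.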

Simultaneously solving seven optimization problems in relative scale

In this paper we develop and analyze an efficient algorithm which solves seven related optimization problems simultaneously, in relative scale. Each iteration of our method is very cheap, with main work spent on matrix-vector multiplication. We prove that if a certain sequence generated by the algorithm remains bounded, then the method must terminate in $O(1/\delta)$ … Read more

Approximate Level Method

In this paper we propose and analyze a variant of the level method [4], which is an algorithm for minimizing nonsmooth convex functions. The main work per iteration is spent on 1) minimizing a piecewise-linear model of the objective function and on 2) projecting onto the intersection of the feasible region and a polyhedron arising … Read more
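A hedged sketch of ingredient 1) — minimizing the piecewise-linear (cutting-plane) model via a small LP — here in plain Kelley form without the level-set projection of step 2), on an assumed toy function f(x) = |x - 0.3|:

```python
import numpy as np
from scipy.optimize import linprog

# Toy nonsmooth convex function and one of its subgradients.
f = lambda x: abs(x - 0.3)
g = lambda x: 1.0 if x >= 0.3 else -1.0

lo, hi = -2.0, 2.0
x = -2.0
cuts = []                                  # (slope, intercept) of each linearization
for _ in range(10):
    cuts.append((g(x), f(x) - g(x) * x))
    # Minimize t s.t. t >= a*x + b for every cut, lo <= x <= hi  (variables: x, t).
    A_ub = [[a, -1.0] for a, _ in cuts]
    b_ub = [-c for _, c in cuts]
    res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(lo, hi), (None, None)])
    x = res.x[0]
print(round(x, 3))  # converges to the minimizer 0.3
```

The level method stabilizes this scheme: instead of jumping to the model's minimizer, it projects the current iterate onto a level set of the model, which is the source of the second subproblem in the abstract.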