Robust Convex Optimization: A New Perspective That Unifies And Extends

Robust convex constraints are difficult to handle, since finding the worst-case scenario is equivalent to maximizing a convex function. In this paper, we propose a new approach to dealing with such constraints that unifies approaches known in the literature and significantly extends them. The extension is either obtaining better solutions than the …
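
As background for the kind of constraint involved, and not the new approach proposed in the paper, the sketch below treats the much easier linear case: a constraint a^T x <= b that must hold for every a in an ellipsoidal set {a_bar + P u : ||u||_2 <= 1}, whose worst case has the closed form a_bar^T x + ||P^T x||_2 <= b. The data c, a_bar, P, b and the cvxpy formulation are purely illustrative.

    # Hedged sketch: classical robust linear constraint with ellipsoidal uncertainty.
    # The worst case of a^T x over {a_bar + P u : ||u||_2 <= 1} equals a_bar^T x + ||P^T x||_2,
    # so the robust constraint has an exact convex reformulation (unlike the convex
    # constraints studied in the paper, whose worst case is a convex maximization).
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n = 5
    c = rng.standard_normal(n)               # illustrative cost vector
    a_bar = rng.standard_normal(n)           # nominal constraint coefficients
    P = 0.1 * rng.standard_normal((n, n))    # shape of the ellipsoidal uncertainty set
    b = 1.0

    x = cp.Variable(n)
    robust_constraint = a_bar @ x + cp.norm(P.T @ x, 2) <= b
    prob = cp.Problem(cp.Minimize(c @ x), [robust_constraint, cp.norm(x, "inf") <= 1])
    prob.solve()
    print("robust optimal value:", prob.value)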

Online Convex Optimization Perspective for Learning from Dynamically Revealed Preferences

We study the problem of online learning (OL) from revealed preferences: a learner wishes to learn an agent’s private utility function by observing the agent’s utility-maximizing actions in a changing environment. We adopt an online inverse optimization setup, where the learner observes a stream of the agent’s actions in an online fashion and the learning performance …
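
Purely to illustrate the setup, and not the learning scheme analyzed in the paper, the toy sketch below assumes a linear utility theta_true^T x maximized over a changing box, and uses a hypothetical perceptron-style update that exploits the fact that the observed action cannot have lower true utility than the learner's predicted action on the same feasible set.

    # Toy sketch of online learning from revealed preferences (illustrative only).
    # The agent maximizes an unknown linear utility theta_true @ x over a changing box;
    # the learner keeps an estimate theta_hat and nudges it toward consistency whenever
    # its predicted action disagrees with the observed one.
    import numpy as np

    rng = np.random.default_rng(1)
    n, T, eta = 4, 200, 0.1
    theta_true = rng.standard_normal(n)
    theta_hat = np.zeros(n)

    def argmax_over_box(theta, lo, hi):
        # A linear utility over a box is maximized coordinate-wise.
        return np.where(theta >= 0, hi, lo)

    mistakes = 0
    for t in range(T):
        lo, hi = rng.uniform(-2, 0, n), rng.uniform(0, 2, n)   # changing environment
        x_obs = argmax_over_box(theta_true, lo, hi)            # agent's revealed action
        x_pred = argmax_over_box(theta_hat, lo, hi)            # learner's prediction
        if not np.allclose(x_obs, x_pred):
            mistakes += 1
            # x_obs is optimal for theta_true, so theta_true @ (x_obs - x_pred) >= 0;
            # move the estimate in that direction (perceptron-style, illustrative).
            theta_hat += eta * (x_obs - x_pred)

    print("prediction mistakes over", T, "rounds:", mistakes)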

Finding the strongest stable massless column with a follower load and relocatable concentrated masses

We consider the problem of optimal placement of concentrated masses along a massless elastic column that is clamped at one end and loaded by a nonconservative follower force at the free end. The goal is to find the largest possible interval such that the variation in the loading parameter within this interval preserves stability of …
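
For orientation only, and with hypothetical notation not taken from the paper (M for the mass matrix determined by the concentrated masses and their positions, K(p) for the non-symmetric stiffness matrix at follower load p, q for the generalized coordinates), a massless column carrying concentrated masses reduces to a finite-dimensional circulatory system whose stability is read off a quadratic eigenvalue problem:

    \[
    M\,\ddot{q} + K(p)\,q = 0, \qquad \det\!\bigl(\lambda^{2} M + K(p)\bigr) = 0 .
    \]

Roughly speaking, the equilibrium is marginally stable when every root \lambda is nonzero, purely imaginary, and semisimple; flutter sets in when two such roots merge and leave the imaginary axis. The optimization then seeks the mass placement that maximizes the length of the interval of loads p for which this condition holds.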

Learning Dynamical Systems with Side Information

We present a mathematical and computational framework for the problem of learning a dynamical system from noisy observations of a few trajectories and subject to side information. Side information is any knowledge, beyond trajectory data, that we might have about the dynamical system we would like to learn. It is typically inferred from domain-specific knowledge or …
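
As a hedged, much-simplified illustration of how side information can enter the fitting step (the paper's framework is broader), the sketch below fits a linear model x_dot = A x to noisy samples by least squares while imposing two hypothetical pieces of side information as convex constraints: a known nonnegative coupling between two states and a stability certificate on the symmetric part of A. All data and constraints are made up.

    # Hedged sketch: fit linear dynamics x_dot = A x from noisy samples, with side
    # information imposed as convex constraints (illustrative, not the paper's framework).
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(2)
    n, N = 3, 200
    A_true = np.array([[-1.0, 0.5, 0.0],
                       [0.0, -0.8, 0.3],
                       [0.2, 0.0, -1.2]])
    X = rng.standard_normal((n, N))                          # sampled states
    Xdot = A_true @ X + 0.05 * rng.standard_normal((n, N))   # noisy derivatives

    A = cp.Variable((n, n))
    side_info = [
        A[0, 1] >= 0,     # hypothetical: known nonnegative coupling of state 2 into state 1
        A + A.T << 0,     # hypothetical: global stability certified by V(x) = ||x||^2
    ]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(Xdot - A @ X)), side_info)
    prob.solve()
    print("fitted A:\n", np.round(A.value, 2))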

Iteration-complexity of a proximal augmented Lagrangian method for solving nonconvex composite optimization problems with nonlinear convex constraints

This paper proposes and analyzes a proximal augmented Lagrangian (NL-IAPIAL) method for solving smooth nonconvex composite optimization problems with nonlinear K-convex constraints, i.e., the constraints are convex with respect to the order given by a closed convex cone K. Each NL-IAPIAL iteration consists of inexactly solving a proximal augmented Lagrangian subproblem by an accelerated composite …
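
For orientation only, here is a minimal textbook augmented Lagrangian loop for min f(x) subject to a single convex inequality g(x) <= 0, with the subproblem solved approximately by scipy. It is not the NL-IAPIAL method, which additionally handles a composite term, cone-valued (K-convex) constraints, and inexact accelerated inner solves with complexity guarantees; f and g below are made up.

    # Hedged sketch: a plain augmented Lagrangian method for min f(x) s.t. g(x) <= 0.
    # Illustrative only; not the NL-IAPIAL method analyzed in the paper.
    import numpy as np
    from scipy.optimize import minimize

    f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2   # smooth objective (made up)
    g = lambda x: x[0] + x[1] - 1.0                       # convex constraint g(x) <= 0

    rho, lam = 10.0, 0.0
    x = np.zeros(2)
    for k in range(20):
        def L(z):
            # Augmented Lagrangian for an inequality constraint (Rockafellar form).
            return f(z) + (max(0.0, lam + rho * g(z)) ** 2 - lam ** 2) / (2.0 * rho)
        x = minimize(L, x, method="BFGS").x        # approximate inner solve
        lam = max(0.0, lam + rho * g(x))           # multiplier update

    print("x ~", np.round(x, 3), " g(x) ~", round(g(x), 4))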

A Modified Proximal Symmetric ADMM for Multi-Block Separable Convex Optimization with Linear Constraints

We consider the linearly constrained separable convex optimization problem whose objective function is separable w.r.t. $m$ blocks of variables. A number of methods have been proposed and well studied for this problem. In particular, a modified strictly contractive Peaceman-Rachford splitting method (SC-PRCM) has been studied in the literature for the special case of $m=3$. Based on the modified …
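
As background, and not the modified proximal symmetric scheme proposed here, the sketch below runs the classical two-block ADMM on a small linearly constrained separable instance, min ||x1||_1 + 0.5||x2 - c||^2 subject to x1 + x2 = b, where both block updates happen to be available in closed form; b, c, and the penalty rho are made up.

    # Hedged sketch: classical two-block ADMM for a linearly constrained separable problem
    #     min ||x1||_1 + 0.5*||x2 - c||^2   s.t.   x1 + x2 = b.
    # Background only; the paper concerns modified proximal symmetric (Peaceman-Rachford
    # type) splitting for m >= 3 blocks.
    import numpy as np

    rng = np.random.default_rng(3)
    n, rho = 10, 1.0
    b, c = rng.standard_normal(n), rng.standard_normal(n)

    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    x1, x2, lam = np.zeros(n), np.zeros(n), np.zeros(n)
    for k in range(200):
        x1 = soft(b - x2 - lam / rho, 1.0 / rho)          # prox step for the l1 block
        x2 = (c + rho * (b - x1) - lam) / (1.0 + rho)     # quadratic block, closed form
        lam = lam + rho * (x1 + x2 - b)                   # dual (multiplier) update

    print("constraint residual:", np.linalg.norm(x1 + x2 - b))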

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters

Many key problems in machine learning and data science are routinely modeled as optimization problems and solved via optimization algorithms. With the increase of the volume of data and the size and complexity of the statistical models used to formulate these often ill-conditioned optimization tasks, there is a need for new efficient algorithms able to …

A geodesic interior-point method for linear optimization over symmetric cones

We develop a new interior-point method (IPM) for symmetric-cone optimization, a common generalization of linear, second-order-cone, and semidefinite programming. In contrast to classical IPMs, we update iterates along a geodesic of the cone instead of along the kernel of the linear constraints. This approach yields a primal-dual-symmetric, scale-invariant, and line-search-free algorithm that uses just half the …
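
To make the geometric idea concrete in the semidefinite case (a hedged illustration of the geometry only, not the algorithm), recall that the geodesic of the positive-definite cone under its standard affine-invariant metric joining X to Y is X #_t Y = X^{1/2} (X^{-1/2} Y X^{-1/2})^t X^{1/2}; the snippet below simply evaluates a point on that curve for two made-up matrices.

    # Hedged sketch: a point on the geodesic of the positive-definite cone
    # (affine-invariant metric) between X and Y. Illustrates the geometry only.
    import numpy as np
    from scipy.linalg import sqrtm, fractional_matrix_power, inv

    def psd_geodesic(X, Y, t):
        # X #_t Y = X^{1/2} (X^{-1/2} Y X^{-1/2})^t X^{1/2}
        Xh = sqrtm(X)
        Xih = inv(Xh)
        return Xh @ fractional_matrix_power(Xih @ Y @ Xih, t) @ Xh

    rng = np.random.default_rng(4)
    M = rng.standard_normal((3, 3))
    X = M @ M.T + 3 * np.eye(3)      # two positive-definite matrices (made up)
    N = rng.standard_normal((3, 3))
    Y = N @ N.T + 3 * np.eye(3)

    Z = psd_geodesic(X, Y, 0.5)      # t = 0.5 gives the matrix geometric mean
    print("eigenvalues of the midpoint:", np.round(np.linalg.eigvalsh(np.real(Z)), 3))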

A FISTA-type first order algorithm on composite optimization problems that is adaptable to the convex situation

In this note, we propose a FISTA-type first-order algorithm, VAR-FISTA, to solve a composite optimization problem. A distinctive feature of VAR-FISTA is its ability to exploit the convexity of the function in the problem, resulting in an improved iteration complexity when the function is convex compared to when it is nonconvex. The iteration complexity …
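
As background, here is a hedged sketch of the standard FISTA iteration (fixed step 1/L plus Nesterov momentum) applied to the composite problem min 0.5||Ax - b||^2 + lam*||x||_1. It is not VAR-FISTA itself, whose complexity adapts to whether the underlying function is convex; A, b, and lam are made up.

    # Hedged sketch: standard FISTA for min 0.5*||A x - b||^2 + lam*||x||_1.
    # Background illustration only; VAR-FISTA is not reproduced here.
    import numpy as np

    rng = np.random.default_rng(5)
    m, n, lam = 40, 100, 0.1
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)

    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    x = y = np.zeros(n)
    tk = 1.0
    for k in range(300):
        grad = A.T @ (A @ y - b)
        x_next = soft(y - grad / L, lam / L)                  # proximal gradient step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * tk ** 2)) / 2.0
        y = x_next + (tk - 1.0) / t_next * (x_next - x)       # Nesterov momentum
        x, tk = x_next, t_next

    obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
    print("objective:", round(obj, 4), " nonzeros:", int((np.abs(x) > 1e-6).sum()))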

Split Bregman iteration for multi-period mean variance portfolio optimization

This paper investigates the problem of defining an optimal long-term investment strategy, where the investor can exit the investment before maturity without severe loss. Our setting is a multi-period one, where the aim is to make a plan for allocating all of the wealth among the n assets within a time horizon of m periods. In addition, …
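
For reference, and stated generically rather than for the paper's specific multi-period mean-variance model, the split Bregman iteration for a problem of the form min_w E(w) + \lambda \|\Phi w\|_1 (with E convex and \mu > 0 a penalty parameter; all symbols here are generic placeholders) alternates

    \begin{align*}
    w^{k+1} &= \arg\min_{w}\; E(w) + \tfrac{\mu}{2}\,\bigl\| d^{k} - \Phi w - b^{k} \bigr\|_2^2,\\
    d^{k+1} &= \operatorname{shrink}\!\bigl( \Phi w^{k+1} + b^{k},\, \lambda/\mu \bigr),\\
    b^{k+1} &= b^{k} + \Phi w^{k+1} - d^{k+1},
    \end{align*}

where \operatorname{shrink}(v, t) = \operatorname{sign}(v)\max(|v| - t, 0) acts componentwise, so the nonsmooth l1 term is handled through the auxiliary variable d and the Bregman variable b.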