Generalized Self-Concordant Analysis of Frank-Wolfe algorithms

Projection-free optimization via different variants of the Frank-Wolfe (FW) method has become one of the cornerstones of large-scale optimization for machine learning and computational statistics. Numerous applications within these fields involve the minimization of functions with self-concordance-like properties. Such generalized self-concordant (GSC) functions do not necessarily feature a Lipschitz continuous gradient, nor are …
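For orientation, here is a minimal Python sketch of a generic Frank-Wolfe loop with the classical open-loop step size $\gamma_k = 2/(k+2)$ (the paper's contribution concerns GSC-adapted step sizes, which this sketch does not implement); the simplex example and all names are illustrative assumptions.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, steps=100):
    """Generic Frank-Wolfe loop (illustrative sketch, not the paper's GSC variant)."""
    x = x0
    for k in range(steps):
        g = grad(x)
        s = lmo(g)                       # projection-free: only a linear minimization oracle
        gamma = 2.0 / (k + 2.0)          # classical open-loop step size
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# Toy usage: minimize ||x - b||^2 over the probability simplex,
# whose LMO returns a vertex (a coordinate unit vector).
b = np.array([0.2, 1.5, -0.3])
grad = lambda x: 2.0 * (x - b)
lmo = lambda g: np.eye(len(g))[np.argmin(g)]
x_star = frank_wolfe(grad, lmo, np.ones(3) / 3)
```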

On the linear convergence of the forward-backward splitting algorithm

In this paper, we establish a linear convergence result for the forward-backward splitting algorithm for finding a zero of the sum of two maximal monotone operators, where one of them is set-valued and strongly monotone and the other is Lipschitz continuous. We show that our convergence rate is better than the Douglas–Rachford splitting algorithm's rate used …
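As a reminder of the scheme under analysis, the iteration is $x_{k+1} = J_{\gamma A}(x_k - \gamma B x_k)$, where $J_{\gamma A} = (I + \gamma A)^{-1}$ is the resolvent of the set-valued operator. A minimal Python sketch, instantiated on an $\ell_1$-regularized least-squares inclusion purely for concreteness (the problem data and step size are illustrative assumptions, not the paper's general operator setting):

```python
import numpy as np

def soft_threshold(v, tau):
    """Resolvent J_{tau A} of A = subdifferential of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def forward_backward(M, b, lam, gamma, iters=500):
    """Iterate x+ = J_{gamma A}(x - gamma B x) for the inclusion
    0 in d(lam * ||.||_1)(x) + M^T (M x - b).  Illustrative sketch only."""
    x = np.zeros(M.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - gamma * M.T @ (M @ x - b), gamma * lam)
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 5))
x_star = forward_backward(M, M @ np.ones(5), lam=0.1,
                          gamma=1.0 / np.linalg.norm(M, 2) ** 2)
```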

Largest small polygons: A sequential convex optimization approach

A small polygon is a polygon of unit diameter. The maximal area of a small polygon with $n=2m$ vertices is not known when $m\ge 7$. Finding the largest small $n$-gon for a given number $n\ge 3$ can be formulated as a nonconvex quadratically constrained quadratic optimization problem. We propose to solve this problem with a …
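One standard way to write that QCQP (a sketch consistent with the abstract, not necessarily the paper's exact parametrization): with vertices $v_i=(x_i,y_i)$ and the convention $v_{n+1}=v_1$,
\[
\begin{aligned}
\max_{v_1,\dots,v_n\in\mathbb{R}^2}\quad & \tfrac12\sum_{i=1}^{n}\bigl(x_i y_{i+1}-x_{i+1} y_i\bigr) &&\text{(shoelace area)}\\
\text{s.t.}\quad & \|v_i-v_j\|^2\le 1,\quad 1\le i<j\le n. &&\text{(unit diameter)}
\end{aligned}
\]
The diameter constraints are convex quadratics; the nonconvexity comes from maximizing the bilinear shoelace objective.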

On strong duality, theorems of the alternative, and projections in conic optimization

A conic program is the problem of optimizing a linear function over a closed convex cone intersected with an affine preimage of another cone. We analyse three constraint qualifications, namely a Closedness CQ, a Slater CQ, and a Boundedness CQ (also called the Clark–Duffin theorem), that are sufficient for achieving strong duality and show that the first …
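For reference, the primal-dual pair implicit in this description reads, in standard notation with $K_1^{*}, K_2^{*}$ the dual cones:
\[
\begin{aligned}
\text{(P)}\quad &\min_{x}\ \langle c,x\rangle \quad\text{s.t.}\quad Ax-b\in K_2,\ x\in K_1,\\
\text{(D)}\quad &\max_{y}\ \langle b,y\rangle \quad\text{s.t.}\quad c-A^{\mathsf T}y\in K_1^{*},\ y\in K_2^{*}.
\end{aligned}
\]
Strong duality means the two optimal values coincide; each of the three CQs is a sufficient condition for this.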

Dual optimal design and the Christoffel–Darboux polynomial

The purpose of this short note is to show that the Christoffel–Darboux polynomial, useful in approximation theory and data science, arises naturally when deriving the dual to the problem of semi-algebraic D-optimal experimental design in statistics. It uses only elementary notions of convex analysis.
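To fix notation (a standard setup, assumed here rather than quoted from the note): let $v_d(x)$ be the vector of monomials of degree at most $d$ on a semi-algebraic set $X$. D-optimal design maximizes the log-determinant of the moment matrix of a design measure $\mu$, and the Christoffel–Darboux polynomial is the quadratic form of the inverse moment matrix:
\[
\max_{\mu\in\mathcal P(X)}\ \log\det M_d(\mu),\qquad
M_d(\mu)=\int_X v_d(x)\,v_d(x)^{\mathsf T}\,d\mu(x),\qquad
\Lambda^{\mu}_d(x)=v_d(x)^{\mathsf T}M_d(\mu)^{-1}v_d(x).
\]
By the classical Kiefer–Wolfowitz equivalence theorem, $\mu$ is D-optimal exactly when $\Lambda^{\mu}_d$ is bounded on $X$ by the length of $v_d$, with equality on the support of $\mu$.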

Robust Convex Optimization: A New Perspective That Unifies and Extends

Robust convex constraints are difficult to handle, since finding the worst-case scenario is equivalent to maximizing a convex function. In this paper, we propose a new approach to deal with such constraints that unifies approaches known in the literature and extends them in a significant way. The extension is either obtaining better solutions than the …
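Concretely, a robust convex constraint requires
\[
f(x,\zeta)\le 0\ \ \forall\,\zeta\in\mathcal U
\qquad\Longleftrightarrow\qquad
\sup_{\zeta\in\mathcal U} f(x,\zeta)\le 0,
\]
and when $f$ is convex in the uncertain parameter $\zeta$, evaluating the inner supremum means maximizing a convex function over the uncertainty set $\mathcal U$, which is intractable in general.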

Online Convex Optimization Perspective for Learning from Dynamically Revealed Preferences

We study the problem of online learning (OL) from revealed preferences: a learner wishes to learn an agent's private utility function through observing the agent's utility-maximizing actions in a changing environment. We adopt an online inverse optimization setup, where the learner observes a stream of the agent's actions in an online fashion and the learning performance …
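A toy Python sketch of this feedback loop under a linear-utility assumption (the perceptron-style update and all names here are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def best_action(theta, actions):
    """Utility-maximizing action under a linear utility u(x) = theta . x."""
    return actions[np.argmax(actions @ theta)]

rng = np.random.default_rng(1)
theta_true = np.array([1.0, 2.0, -0.5])   # agent's private utility vector
theta_hat = np.zeros(3)                   # learner's running estimate
eta = 0.5                                 # step size

for t in range(200):
    actions = rng.standard_normal((10, 3))      # changing feasible set X_t
    x_agent = best_action(theta_true, actions)  # revealed preference
    x_pred = best_action(theta_hat, actions)    # learner's prediction
    theta_hat += eta * (x_agent - x_pred)       # online update from the mismatch
```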

Learning Dynamical Systems with Side Information

We present a mathematical and computational framework for the problem of learning a dynamical system from noisy observations of a few trajectories and subject to side information. Side information is any knowledge we might have about the dynamical system we would like to learn besides trajectory data. It is typically inferred from domain-specific knowledge or …
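In symbols, one hedged reading of this setup is a constrained regression: fit a vector field $f$ to observed derivatives while restricting it to sets $S_1,\dots,S_k$ encoding the side information (e.g., known equilibrium points, invariance of a region, monotonicity):
\[
\min_{f\in\mathcal F}\ \sum_{i=1}^{N}\bigl\|\dot x(t_i)-f\bigl(x(t_i)\bigr)\bigr\|^2
\quad\text{s.t.}\quad f\in S_1\cap\dots\cap S_k.
\]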

Finding the strongest stable massless column with a follower load and relocatable concentrated masses

We consider the problem of optimal placement of concentrated masses along a massless elastic column that is clamped at one end and loaded by a nonconservative follower force at the free end. The goal is to find the largest possible interval such that the variation in the loading parameter within this interval preserves stability of …
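Schematically (an assumed linearized model, not the paper's exact equations): with a mass matrix $M(\xi)$ depending on the mass placements $\xi$ and a stiffness matrix $K(p)$ depending on the load parameter $p$ (nonsymmetric, because the follower force is nonconservative), small vibrations $q(t)=u\,e^{i\omega t}$ satisfy
\[
\bigl(K(p)-\omega^2 M(\xi)\bigr)\,u=0,
\]
and the column is stable while all eigenvalues $\omega^2$ remain real and positive; flutter sets in when two of them coalesce and move into the complex plane. The design problem is then to choose $\xi$ so as to maximize the stable interval of $p$.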

Iteration-complexity of a proximal augmented Lagrangian method for solving nonconvex composite optimization problems with nonlinear convex constraints

This paper proposes and analyzes a proximal augmented Lagrangian (NL-IAPIAL) method for solving smooth nonconvex composite optimization problems with nonlinear K-convex constraints, i.e., the constraints are convex with respect to the order given by a closed convex cone K. Each NL-IAPIAL iteration consists of inexactly solving a proximal augmented Lagrangian subproblem by an accelerated composite …
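For orientation, a standard conic augmented Lagrangian matching this setting (a generic sketch, not necessarily NL-IAPIAL's exact subproblem): for constraints $g(x)\preceq_K 0$, i.e. $g(x)\in -K$, with multiplier $\lambda\in K^{*}$ and penalty $\rho>0$,
\[
\mathcal L_\rho(x;\lambda)=f(x)+\frac{1}{2\rho}\Bigl(\bigl\|\Pi_{K^{*}}\bigl(\lambda+\rho\,g(x)\bigr)\bigr\|^2-\|\lambda\|^2\Bigr),
\qquad
\lambda^{+}=\Pi_{K^{*}}\bigl(\lambda+\rho\,g(x^{+})\bigr),
\]
where $\Pi_{K^{*}}$ is the projection onto the dual cone. Each outer step then inexactly minimizes $\mathcal L_\rho$ plus a proximal term in $x$.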