Automorphisms of rank-one generated hyperbolicity cones and their derivative relaxations

A hyperbolicity cone is said to be rank-one generated (ROG) if all its extreme rays have rank one, where the rank is computed with respect to the underlying hyperbolic polynomial. This is a natural class of hyperbolicity cones that is strictly more general than the ROG spectrahedral cones. In this work, we present a study of … Read more
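For background, a sketch of the standard definitions involved (these are the usual conventions in the literature, not quoted from the paper itself):

```latex
% A homogeneous polynomial p is hyperbolic along e \in \mathbb{R}^n if
% p(e) \neq 0 and, for every x, the univariate polynomial
% t \mapsto p(x - t e) has only real roots. Its hyperbolicity cone is
\[
  \Lambda_+(p, e) \;=\; \bigl\{\, x \in \mathbb{R}^n \;:\;
    \text{every root of } t \mapsto p(x - t e) \text{ is nonnegative} \,\bigr\},
\]
% and the rank of x is the number of nonzero roots of t \mapsto p(x - t e).
% The cone is ROG when every extreme ray is spanned by a rank-one element.
% For p = \det on symmetric matrices, this recovers the PSD cone, where
% rank one in this sense coincides with matrix rank one.
```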

Nearly optimal first-order methods for convex optimization under gradient norm measure: An adaptive regularization approach

In the development of first-order methods for smooth (resp., composite) convex optimization problems, the gradient (resp., gradient mapping) norm is a fundamental optimality measure, for which a regularization technique for first-order methods is known to be nearly optimal. In this paper, we report an adaptive regularization approach attaining this iteration complexity without … Read more
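To make the regularization technique concrete, here is a minimal sketch of the classical (non-adaptive) idea the paper builds on, not the authors' adaptive scheme: to drive the gradient norm of a merely convex smooth f below a tolerance, one minimizes the strongly convex surrogate f(x) + (δ/2)‖x − x₀‖², whose minimizer x* satisfies ∇f(x*) = −δ(x* − x₀) and hence has a small gradient of f itself. The quadratic f, the weight δ, and the plain gradient-descent inner solver are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(5, 3))
Q = B @ B.T                          # rank-deficient PSD: f is convex
Q /= np.linalg.eigvalsh(Q).max()     # but not strongly convex; L = 1

def grad(x):
    return Q @ x                     # gradient of f(x) = 0.5 * x^T Q x

x0 = rng.normal(size=5)
delta = 1e-3                         # illustrative regularization weight

# Gradient descent on the delta-strongly-convex regularized objective
# f(x) + (delta/2) * ||x - x0||^2.  At its minimizer x*,
# grad f(x*) = -delta * (x* - x0), so ||grad f(x*)|| <= delta * ||x0||.
x = x0.copy()
for _ in range(50_000):
    g = grad(x) + delta * (x - x0)
    x -= g / (1.0 + delta)           # step size 1 / (L + delta)

print(np.linalg.norm(grad(x)) < 1e-2)  # small gradient norm for f itself
```

Choosing δ (and the inner solver's effort) from the target accuracy is exactly where adaptivity enters; a fixed δ, as above, requires knowing the accuracy in advance.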

The automorphism group and the non-self-duality of p-cones

In this paper, we determine the automorphism group of the p-cones (p \neq 2) in dimension greater than two. In particular, we show that the automorphism group of those p-cones consists of the positive scalar multiples of the generalized permutation matrices that fix the main axis of the cone. Next, we take a look at a problem … Read more
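As a quick numerical sanity check of the easy direction of this statement (with the p-cone written, as is standard, as K_p = {(t, x) : t ≥ ‖x‖_p}), a signed permutation of the x-block fixes the main axis and preserves the p-norm, hence maps the cone onto itself; the particular permutation, signs, and p below are arbitrary choices:

```python
import numpy as np

def in_p_cone(t, x, p):
    """Membership in K_p = {(t, x) : t >= ||x||_p} (with a small tolerance)."""
    return t >= np.linalg.norm(x, ord=p) - 1e-12

# A generalized permutation acting on the x-block: permute coordinates,
# then flip some signs. This preserves ||.||_p for every p.
perm = np.array([2, 0, 1])
signs = np.array([1.0, -1.0, 1.0])

rng = np.random.default_rng(0)
p = 3
ok = True
for _ in range(1000):
    x = rng.normal(size=3)
    t = np.linalg.norm(x, ord=p) + abs(rng.normal())  # a point inside K_p
    y = signs * x[perm]
    ok &= in_p_cone(t, y, p)       # the image (t, y) stays in the cone

print(bool(ok))  # → True
```

The hard direction, that for p ≠ 2 there are no other automorphisms up to positive scaling, is what the paper proves.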

The p-cones in dimension n>=3 are not homogeneous when p \neq 2

Using the T-algebra machinery, we show that the only strictly convex homogeneous cones in R^n with n >= 3 are the 2-cones, also known as Lorentz cones or second-order cones. In particular, this shows that the p-cones are not homogeneous for 1 < p < \infty with p \neq 2 and n >= 3, thus … Read more

A bound on the Carathéodory number

The Carathéodory number k(K) of a pointed closed convex cone K is the minimum among all k for which every element of K can be written as a nonnegative linear combination of at most k elements belonging to extreme rays. Carathéodory's Theorem gives the bound k(K) <= dim(K). In this work we observe … Read more
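A familiar example of the gap between k(K) and dim(K) is the PSD cone S^n_+: its extreme rays are the rank-one matrices vv^T, and the spectral decomposition writes any X in S^n_+ as a nonnegative combination of at most n of them, far below dim(S^n_+) = n(n+1)/2. The small numerical illustration below is ours, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.normal(size=(n, n))
X = A @ A.T                    # a point in the PSD cone S^n_+

# Spectral decomposition: X = sum_i w_i v_i v_i^T with w_i >= 0, i.e. a
# nonnegative combination of at most n extreme-ray elements of S^n_+.
w, V = np.linalg.eigh(X)
terms = [w[i] * np.outer(V[:, i], V[:, i]) for i in range(n) if w[i] > 1e-12]

print(len(terms) <= n)               # at most n rank-one terms suffice
print(n < n * (n + 1) // 2)          # ... far fewer than dim(S^n_+)
print(np.allclose(sum(terms), X))    # and they reconstruct X exactly
```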

New results on subgradient methods for strongly convex optimization problems with a unified analysis

We develop subgradient- and gradient-based methods for minimizing strongly convex functions under a notion of strong convexity that generalizes the standard Euclidean one. We propose a unifying framework for subgradient methods which yields two kinds of methods, namely, the Proximal Gradient Method (PGM) and the Conditional Gradient Method (CGM), unifying several existing methods. The unifying framework provides … Read more
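For readers unfamiliar with this setting, here is a minimal sketch of the classical Euclidean special case, not the paper's generalized framework: the subgradient method for a μ-strongly convex nonsmooth function with the standard step size 2/(μ(k+1)). The test function (a strongly convex quadratic plus an absolute value, minimized at the origin) is an illustrative choice:

```python
import numpy as np

mu = 1.0  # strong convexity parameter

def f(x):
    # mu-strongly convex and nonsmooth; minimized at x* = 0 with f(x*) = 0
    return 0.5 * mu * (x @ x) + abs(x[0])

def subgrad(x):
    g = mu * x.copy()
    g[0] += np.sign(x[0])          # a subgradient of |x_0| (0 at x_0 = 0)
    return g

# Subgradient method with the classical step size 2 / (mu * (k + 1)),
# which yields the optimal O(1/k) rate for this problem class.
x = np.array([5.0, -3.0])
for k in range(1, 5001):
    x = x - (2.0 / (mu * (k + 1))) * subgrad(x)

print(f(x) < 1e-3)  # → True: the iterate is near the minimizer x* = 0
```

Replacing the Euclidean distance here by a more general prox-type distance is, roughly, where the paper's generalized notion of strong convexity enters.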

A Family of Subgradient-Based Methods for Convex Optimization Problems in a Unifying Framework

We propose a new family of subgradient- and gradient-based methods which converges with optimal complexity for convex optimization problems whose feasible region is simple enough. This includes cases where the objective function is non-smooth, smooth, has composite/saddle structure, or is given by an inexact oracle model. We unify the way of constructing the subproblems, which … Read more