Subsampled Inexact Newton methods for minimizing large sums of convex functions

This paper deals with the minimization of a large sum of convex functions by Inexact Newton (IN) methods employing subsampled Hessian approximations. The Conjugate Gradient method is used to compute the inexact Newton step, and global convergence is enforced by a nonmonotone line-search procedure. The aim is to obtain methods with affordable costs and fast …
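
As a rough illustration of the subsampled scheme (a minimal sketch, not the paper's exact algorithm), the code below takes one inexact Newton step for regularized logistic regression: the gradient uses the full sum, the Hessian is approximated on a random subsample, and the Newton system is solved only approximately by truncated CG. The choice of loss, the function names, and the ridge term lam are assumptions for this example.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def subsampled_newton_step(w, X, y, sample_size, rng, lam=1e-4, cg_iters=20):
    """One inexact Newton step for regularized logistic regression:
    full-sample gradient, subsampled Hessian, direction from truncated CG."""
    n, d = X.shape
    p = 1.0 / (1.0 + np.exp(-y * (X @ w)))          # sigmoid(y_i x_i^T w)
    grad = -(X.T @ ((1.0 - p) * y)) / n + lam * w   # gradient over the full sum
    # Hessian-vector products are evaluated on a random subsample only
    S = rng.choice(n, size=sample_size, replace=False)
    Xs, D = X[S], p[S] * (1.0 - p[S])               # per-sample curvature weights
    hess_vec = lambda v: (Xs.T @ (D * (Xs @ v))) / sample_size + lam * v
    H = LinearOperator((d, d), matvec=hess_vec)
    step, _ = cg(H, -grad, maxiter=cg_iters)        # truncation makes the step inexact
    return step, grad
```

In the paper's setting such a step would then be safeguarded by a nonmonotone line search; here the fixed regularization lam simply keeps the subsampled Hessian positive definite so CG is well posed.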

Combining Multi-Level Real-time Iterations of Nonlinear Model Predictive Control to Realize Squatting Motions on Leo

Today’s humanoid robots are complex mechanical systems with many degrees of freedom that are built to achieve locomotion skills comparable to humans. In order to synthesize whole-body motions, real-time capable direct methods of optimal control are a subject of contemporary research. To this end, Nonlinear Model Predictive Control is the method of choice to realize …

Complexity of a quadratic penalty accelerated inexact proximal point method for solving linearly constrained nonconvex composite programs

This paper analyzes the iteration-complexity of a quadratic penalty accelerated inexact proximal point method for solving linearly constrained nonconvex composite programs. More specifically, the objective function is of the form f + h where f is a differentiable function whose gradient is Lipschitz continuous and h is a closed convex function with a bounded domain. …
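
To make the problem structure concrete, here is a minimal quadratic-penalty sketch for min f(x) + h(x) subject to Ax = b, with a plain proximal-gradient inner solver standing in for the paper's accelerated inexact proximal point method; h is taken as the indicator of a box, a closed convex function with bounded domain as in the abstract. All parameter names and defaults are illustrative.

```python
import numpy as np

def quadratic_penalty(grad_f, L_f, A, b, x0, lo=-1.0, hi=1.0,
                      c0=1.0, growth=10.0, outer=6, inner=300):
    """min f(x) + h(x) s.t. Ax = b, with h the indicator of [lo, hi]^d,
    solved approximately via the penalized problem
    min f(x) + h(x) + (c/2)||Ax - b||^2 for increasing c."""
    x, c = x0.copy(), c0
    opnorm2 = np.linalg.norm(A, 2) ** 2
    for _ in range(outer):
        Lc = L_f + c * opnorm2                 # Lipschitz constant of the penalized gradient
        for _ in range(inner):
            g = grad_f(x) + c * (A.T @ (A @ x - b))
            x = np.clip(x - g / Lc, lo, hi)    # prox of the box indicator = projection
        c *= growth                            # tighten the penalty between outer passes
    return x
```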

CasADi – A software framework for nonlinear optimization and optimal control

We present CasADi, an open-source software framework for numerical optimization. CasADi is a general-purpose tool that can be used to model and solve optimization problems with greater flexibility than popular algebraic modeling languages such as AMPL, GAMS, JuMP, or Pyomo. Of special interest are problems constrained by …
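
As a flavor of the framework (a minimal sketch; it assumes the casadi Python package with its bundled IPOPT plugin), here is a small constrained NLP built from symbolic expressions:

```python
import casadi as ca

x = ca.SX.sym('x')
y = ca.SX.sym('y')

# Rosenbrock objective with a ball constraint x^2 + y^2 <= 1
nlp = {'x': ca.vertcat(x, y),
       'f': (1 - x)**2 + 100 * (y - x**2)**2,
       'g': x**2 + y**2}

solver = ca.nlpsol('solver', 'ipopt', nlp)   # derivatives generated automatically
sol = solver(x0=[0.5, 0.5], lbg=-ca.inf, ubg=1.0)
print(sol['x'])
```

CasADi differentiates the expression graph automatically, which is what makes it convenient for the derivative-heavy optimal-control problems the abstract alludes to.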

Simplified Versions of the Conditional Gradient Method

We suggest simple modifications of the conditional gradient method for smooth optimization problems that maintain its basic convergence properties but substantially reduce the implementation cost of each iteration. Namely, we propose a step-size procedure without any line search, together with inexact solution of the direction-finding subproblem. Preliminary computational tests confirm the efficiency of the proposed …
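
A minimal sketch of such a line-search-free variant, on the unit simplex as a hypothetical feasible set: the direction-finding subproblem reduces to picking the vertex with the smallest gradient coordinate, and the step size is the classical gamma_k = 2/(k+2).

```python
import numpy as np

def conditional_gradient_simplex(grad, x0, iters=200):
    """Conditional gradient on the unit simplex with a predefined
    step size instead of a line search."""
    x = x0.copy()
    for k in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0        # exact LMO here; the paper also allows inexact ones
        gamma = 2.0 / (k + 2.0)      # predefined step, no line search
        x = (1.0 - gamma) * x + gamma * s
    return x

# e.g. project a point onto the simplex: min 0.5 * ||x - p||^2
p = np.array([0.2, 0.9, -0.3])
x = conditional_gradient_simplex(lambda x: x - p, np.ones(3) / 3)
```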

Tight-and-cheap conic relaxation for the AC optimal power flow problem

The classical alternating current optimal power flow problem is highly nonconvex and generally hard to solve. Convex relaxations, in particular semidefinite, second-order cone, convex quadratic, and linear relaxations, have recently attracted significant interest. The semidefinite relaxation is the strongest among them and is exact for many cases. However, the computational efficiency for solving large-scale semidefinite …
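
For context (standard notation, not taken from the truncated abstract): with bus voltages v and the lifted matrix W = vv^H, the SDP relaxation imposes W ⪰ 0 on the whole matrix, while the cheaper SOCP relaxation keeps only the 2×2 principal minors tied to network lines,

\[
\begin{pmatrix} W_{ii} & W_{ij} \\ \overline{W_{ij}} & W_{jj} \end{pmatrix} \succeq 0
\quad\Longleftrightarrow\quad
|W_{ij}|^{2} \le W_{ii}\, W_{jj} \ \ (\text{with } W_{ii}, W_{jj} \ge 0),
\qquad (i,j) \in \mathcal{L},
\]

which is why tight-and-cheap relaxations aim to sit between these two extremes in both strength and cost.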

New Constraint Qualifications with Second-Order Properties in Nonlinear Optimization

In this paper we present and discuss new constraint qualifications that ensure the validity of well-known second-order properties in nonlinear optimization. Here, we discuss conditions related to the so-called basic second-order condition, where a new notion of polar pairing is introduced to replace the polar operation that is useful in the first-order case. We …
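
Since the abstract is truncated, for orientation only (standard notation, an assumption here): the classical second-order property that such constraint qualifications are designed to guarantee states that, at a local minimizer x^* with critical cone C(x^*) and Lagrange multiplier set \Lambda(x^*),

\[
\forall\, d \in C(x^*)\ \exists\, \lambda \in \Lambda(x^*):\quad
d^{\top} \nabla^2_{xx} L(x^*, \lambda)\, d \;\ge\; 0,
\]

where L denotes the Lagrangian of the problem.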

Sum of squares certificates for stability of planar, homogeneous, and switched systems

We show that the existence of a global polynomial Lyapunov function for a homogeneous polynomial vector field, or for a planar polynomial vector field (under a mild condition), implies the existence of a polynomial Lyapunov function that is a sum of squares (sos) and whose negated derivative is also a sum of squares. This result …
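
A toy instance of the certificate (my example, not from the paper): for the homogeneous vector field \dot{x}_1 = -x_1^3, \dot{x}_2 = -x_2^3, the function V(x) = x_1^2 + x_2^2 is sos, and

\[
-\dot V(x) \;=\; -\nabla V(x)^{\top} f(x) \;=\; 2x_1^4 + 2x_2^4
\]

is visibly a sum of squares, so V is an sos-certified Lyapunov function for global asymptotic stability.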

A Stochastic Trust Region Algorithm Based on Careful Step Normalization

An algorithm is proposed for solving stochastic and finite-sum minimization problems. Based on a trust-region methodology, the algorithm employs normalized steps, at least as long as the norms of the stochastic gradient estimates are within a specified interval. The complete algorithm, which dynamically chooses whether or not to employ normalized steps, is proved to have …
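
One concrete way to implement such a normalized-step rule (an illustrative reconstruction; the interval endpoints 1/gamma1 and 1/gamma2 are an assumption, chosen so the update is continuous in ||g||):

```python
import numpy as np

def normalized_tr_step(x, g, alpha, gamma1, gamma2):
    """Take a normalized step when the stochastic gradient norm lies in
    [1/gamma1, 1/gamma2] (gamma1 > gamma2 > 0); otherwise scale the
    gradient so the update stays continuous in ||g||."""
    ng = np.linalg.norm(g)
    if ng < 1.0 / gamma1:
        return x - gamma1 * alpha * g       # small gradient: damped step
    if ng <= 1.0 / gamma2:
        return x - alpha * (g / ng)         # normalized trust-region-like step
    return x - gamma2 * alpha * g           # large gradient: clipped step
```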

Deterministic Global Optimization with Artificial Neural Networks Embedded

Artificial neural networks (ANNs) are used in various applications for data-driven black-box modeling and subsequent optimization. Herein, we present an efficient method for deterministic global optimization of ANN-embedded optimization problems. The proposed method is based on relaxations of algorithms using McCormick relaxations in a reduced-space [SIOPT 20 (2009), pp. 573-601], including the convex and …
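
For background (standard material, not specific to this paper): the McCormick envelope of a single bilinear term w = xy on the box x ∈ [x^L, x^U], y ∈ [y^L, y^U] is

\[
\begin{aligned}
w &\ge x^L y + x\, y^L - x^L y^L, &\qquad w &\ge x^U y + x\, y^U - x^U y^U,\\
w &\le x^U y + x\, y^L - x^U y^L, &\qquad w &\le x^L y + x\, y^U - x^L y^U;
\end{aligned}
\]

propagating such envelopes through an ANN's algebraic operations is what the reduced-space relaxation framework builds on.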