A Sequential Quadratic Programming Method for Optimization with Stochastic Objective Functions, Deterministic Inequality Constraints and Robust Subproblems

In this paper, the robust sequential quadratic programming method of Burke and Han (Math Programming, 1989) for constrained optimization is generalized to problems with a stochastic objective function and deterministic equality and inequality constraints. A stochastic line search scheme from Paquette and Scheinberg (SIOPT, 2020) is employed to globalize the steps. We show that in the case … Read more
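For context, the step in an SQP method of this type is computed from a quadratic programming subproblem built around a gradient estimate. A minimal sketch, assuming a sampled gradient $\bar g_k \approx \nabla f(x_k)$ and a symmetric positive definite Hessian approximation $H_k$ (the robust modification of Burke and Han that keeps the subproblem well posed is not reproduced here):

\[
\min_{d}\; \bar g_k^\top d + \tfrac12\, d^\top H_k d
\quad \text{s.t.} \quad
c_E(x_k) + \nabla c_E(x_k)^\top d = 0, \qquad
c_I(x_k) + \nabla c_I(x_k)^\top d \le 0.
\]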

Multilevel Objective-Function-Free Optimization with an Application to Neural Networks Training

A class of multilevel algorithms for unconstrained nonlinear optimization is presented that does not require the evaluation of the objective function. The class contains the momentum-less AdaGrad method as a particular (single-level) instance. Avoiding evaluations of the objective function is intended to make the algorithms of the class less sensitive to … Read more
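Since the momentum-less AdaGrad method is cited as the single-level instance of the class, here is a minimal Python sketch of that special case (the multilevel machinery of the paper is not shown); note that only gradients are used, never objective values:

```python
import numpy as np

def adagrad(grad, x0, alpha=1.0, eps=1e-8, iters=1000):
    """Momentum-less AdaGrad: the single-level instance mentioned above.
    `grad` returns the gradient at x; the objective itself is never evaluated."""
    x = x0.copy()
    v = np.zeros_like(x)                        # running sum of squared gradients
    for _ in range(iters):
        g = grad(x)
        v += g * g                              # accumulate per-coordinate scaling information
        x -= alpha * g / (np.sqrt(v) + eps)     # scaled, objective-function-free step
    return x
```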

MGProx: A nonsmooth multigrid proximal gradient method with adaptive restriction for strongly convex optimization

We study the combination of proximal gradient descent with multigrid for solving a class of possibly nonsmooth, strongly convex optimization problems. We propose a multigrid proximal gradient method called MGProx, which accelerates the proximal gradient method via multigrid, exploiting hierarchical information of the optimization problem. MGProx applies a newly introduced adaptive restriction operator … Read more
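As a reference point, the fine-level building block that MGProx accelerates is the plain proximal gradient iteration. A minimal sketch, using soft-thresholding as an example proximal map (the adaptive restriction and coarse-level corrections of MGProx are not shown):

```python
import numpy as np

def prox_l1(z, t):
    """Soft-thresholding: proximal map of t * ||.||_1, one common nonsmooth term."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step, iters=500):
    """Plain (single-level) proximal gradient iteration."""
    x = x0.copy()
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)  # forward (gradient) step + backward (prox) step
    return x
```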

Projection free methods on product domains

Projection-free block-coordinate methods avoid a high computational cost per iteration while exploiting the particular problem structure of product domains. Frank-Wolfe-like approaches rank among the most popular methods of this type. However, as observed in the literature, there has been a gap between the classical Frank-Wolfe theory and the block-coordinate case. Moreover, most of … Read more
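For orientation, a generic block-coordinate Frank-Wolfe iteration on a product domain looks as follows; this is only a textbook-style sketch with assumed oracle interfaces, not the specific variants analyzed in the paper:

```python
import numpy as np

def block_frank_wolfe(grad, lmo_blocks, x_blocks, iters=1000):
    """Block-coordinate Frank-Wolfe on a product domain X_1 x ... x X_m.
    `grad(x_blocks)` returns the list of block gradients;
    `lmo_blocks[i](g)` returns argmin_{s in X_i} <g, s> (linear minimization oracle)."""
    m = len(x_blocks)
    for k in range(iters):
        i = np.random.randint(m)                      # random block (cyclic rules also possible)
        g_i = grad(x_blocks)[i]
        s = lmo_blocks[i](g_i)                        # Frank-Wolfe vertex for block i
        gamma = 2.0 / (k + 2.0)                       # classical open-loop stepsize
        x_blocks[i] = (1 - gamma) * x_blocks[i] + gamma * s
    return x_blocks
```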

Force-Controlled Pose Optimization and Trajectory Planning for Chained Stewart Platforms

We study optimization methods applied to minimizing forces for poses and movements of chained Stewart platforms (SPs) that we call an “Assembler” Robot. These chained SPs are parallel mechanisms that are stronger, stiffer, and more precise, on average, than their serial counterparts at the cost of a smaller range of motion. Linking these units in … Read more

On the paper “Augmented Lagrangian algorithms for solving the continuous nonlinear resource allocation problem”

In the paper [Torrealba, E.M.R. et al. Augmented Lagrangian algorithms for solving the continuous nonlinear resource allocation problem. EJOR, 299(1) 46–59, 2021] an augmented Lagrangian algorithm was proposed for resource allocation problems, with the intriguing characteristic that, instead of solving the box-constrained augmented Lagrangian subproblem, the authors propose projecting the solution of the unconstrained subproblem onto … Read more
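The projection idea under discussion can be summarized in a few lines; this is an illustrative sketch with hypothetical helper names, not the algorithm of the paper being commented on:

```python
import numpy as np

def projected_al_step(solve_unconstrained_al, lower, upper):
    """Minimize the augmented Lagrangian without the box constraints, then project
    the resulting point componentwise back onto the box [lower, upper]."""
    x_free = solve_unconstrained_al()        # solution of the unconstrained subproblem
    return np.clip(x_free, lower, upper)     # projection onto the box
```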

A classification method based on a cloud of spheres

In this article we propose a binary classification model to distinguish a specific class that corresponds to a characteristic that we intend to identify (fraud, spam, disease). The classification model is based on a cloud of spheres that circumscribes the points of the class to be identified. It is intended to build a model … Read more
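One natural decision rule for such a model, shown here only as an illustration (how the spheres are actually built is the subject of the article), labels a point as the target class when it lies inside at least one sphere of the cloud:

```python
import numpy as np

def classify(point, centers, radii):
    """Return True if `point` falls inside at least one of the spheres
    described by `centers` (n x d array) and `radii` (length-n array)."""
    dists = np.linalg.norm(centers - point, axis=1)  # distance to every sphere center
    return bool(np.any(dists <= radii))              # inside some sphere => target class
```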

Enhancements of Discretization Approaches for Non-Convex Mixed-Integer Quadratically Constrained Quadratic Programming: Part II

This is Part II of a study on mixed-integer programming (MIP) relaxation techniques for the solution of non-convex mixed-integer quadratically constrained quadratic programs (MIQCQPs). We focus on MIP relaxation methods for non-convex continuous variable products and extend the well-known MIP relaxation normalized multiparametric disaggregation technique (NMDT), applying a sophisticated discretization to both … Read more
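As background, the base-2 variant of NMDT relaxes a single bilinear term $w = x\,y$ by discretizing one factor; a minimal sketch with notation assumed here (the Part II enhancements, such as discretizing both factors, are not reflected): write $y = y^L + (y^U - y^L)\lambda$ and discretize the normalized variable $\lambda \in [0,1]$ as

\[
\lambda \;=\; \sum_{l=1}^{L} 2^{-l} z_l \;+\; \Delta\lambda,
\qquad z_l \in \{0,1\},
\qquad 0 \le \Delta\lambda \le 2^{-L},
\]

so that $x\lambda = \sum_{l=1}^{L} 2^{-l}\, x z_l + x\,\Delta\lambda$, where each product $x z_l$ (continuous times binary) is linearized exactly and the residual product $x\,\Delta\lambda$ is relaxed with McCormick envelopes; the discretization depth $L$ controls the relaxation accuracy.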

On an iteratively reweighted linesearch based algorithm for nonconvex composite optimization

In this paper we propose a new algorithm for solving a class of nonsmooth nonconvex problems, obtained by combining the iteratively reweighted scheme with a finite number of forward–backward iterations based on a linesearch procedure. The new method overcomes some limitations of linesearch forward–backward methods, since it can also be applied to minimize … Read more
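For context, a standard forward–backward step with a backtracking test looks as follows; this is a generic building block with the usual descent-lemma acceptance rule, not the paper's combination with iterative reweighting:

```python
import numpy as np

def forward_backward_step(f, grad_f, prox_g, x, t0=1.0, beta=0.5):
    """One forward-backward step with backtracking on the stepsize t:
    accept z = prox_g(x - t*grad_f(x), t) once the quadratic upper model
    of f at x is valid at z (standard proximal-gradient backtracking)."""
    t = t0
    gx = grad_f(x)
    while True:
        z = prox_g(x - t * gx, t)            # forward (gradient) + backward (prox) step
        d = z - x
        if f(z) <= f(x) + np.dot(gx, d) + np.dot(d, d) / (2 * t):
            return z, t                      # quadratic upper model holds: accept the step
        t *= beta                            # shrink the stepsize and retry
```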

A Novel Stepsize for Gradient Descent Method

In this paper, we propose a novel stepsize for the classical gradient descent scheme to solve unconstrained nonlinear optimization problems. We consider convex and smooth objectives that do not satisfy the global Lipschitz gradient condition. Our new method requires only a locally Lipschitz gradient, yet still attains the rate $O(\frac{1}{k})$ for $f(x^k)-f_*$. By … Read more
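For comparison only, a familiar way to adapt the stepsize to local (rather than global) Lipschitz behaviour of the gradient is backtracking; the sketch below is this standard baseline, not the stepsize rule proposed in the paper:

```python
import numpy as np

def gradient_descent_backtracking(f, grad, x0, t0=1.0, beta=0.5, iters=200):
    """Gradient descent with a backtracking stepsize (descent-lemma test)."""
    x = x0.copy()
    t = t0
    for _ in range(iters):
        g = grad(x)
        while f(x - t * g) > f(x) - 0.5 * t * np.dot(g, g):  # local curvature too large
            t *= beta                                        # shrink the stepsize
        x = x - t * g
        t /= beta                                            # let the stepsize grow back
    return x
```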