On the use of third-order models with fourth-order regularization for unconstrained optimization

In a recent paper, it was shown that, for the smooth unconstrained optimization problem, worst-case evaluation complexity $O(\epsilon^{-(p+1)/p})$ may be obtained by means of algorithms that employ sequential approximate minimizations of $p$-th order Taylor models plus $(p+1)$-th order regularization terms. The aforementioned result, which assumes Lipschitz continuity of the $p$-th partial derivatives, generalizes …
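To make the construction concrete, here is the standard regularized model in this line of work, with notation assumed for illustration rather than quoted from the paper: at an iterate $x_k$, the trial step $s$ approximately minimizes
\[
m_k(s) \;=\; f(x_k) \,+\, \sum_{j=1}^{p} \frac{1}{j!}\,\nabla^j f(x_k)[s]^j \,+\, \frac{\sigma_k}{p+1}\,\|s\|^{p+1},
\]
so the title's setting corresponds to $p = 3$: a third-order Taylor model with a fourth-order regularization term.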

A Bregman alternating direction method of multipliers for sparse probabilistic Boolean network problem

A central task in the study of genetic regulatory networks is to construct a sparse probabilistic Boolean network (PBN) from a given transition-probability matrix and a set of Boolean networks (BNs). In this paper, a Bregman alternating direction method of multipliers (BADMM) is proposed to solve the minimization problem arising in PBN construction. All the customized subproblem-solvers of …
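For orientation, a common way to pose the sparse PBN construction as an optimization problem is the following sketch, with assumed notation; the paper's exact formulation may differ:
\[
\min_{x}\; \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda \|x\|_1 \quad \text{s.t.}\quad \mathbf{1}^{\top}x = 1,\; x \ge 0,
\]
where the columns of $A$ stack the vectorized BN transition matrices, $b$ vectorizes the given transition-probability matrix, and $x$ collects the selection probabilities of the BNs. A Bregman ADMM differs from the standard method in that the quadratic proximal term of each ADMM subproblem is replaced by a more general Bregman distance.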

A TVSCAD approach for image deblurring with impulsive noise

We consider the image deblurring problem in the presence of impulsive noise. It is known that \emph{total variation} (TV) regularization with $\ell_1$-norm penalized data fitting (TVL1 for short) works reasonably well only when the level of impulsive noise is relatively low; for high-level impulsive noise, TVL1 performs poorly. The reason is that all data, both …
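For reference, the TVL1 model mentioned above takes the standard form (notation assumed: $K$ the blurring operator, $f$ the observed image, $\mu > 0$ a fidelity weight)
\[
\min_{u}\; \mathrm{TV}(u) + \mu \|Ku - f\|_1,
\]
and the TVSCAD idea, as the name suggests, is to replace the $\ell_1$ data-fitting penalty with the nonconvex SCAD function applied componentwise, so that large residuals at pixels corrupted by impulsive noise are penalized less severely than under the $\ell_1$ norm.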

On High-order Model Regularization for Constrained Optimization

In two recent papers, regularization methods for minimization based on Taylor polynomial models were proposed that rely only on H\"older conditions on the employed higher-order derivatives. Grapiglia and Nesterov considered cubic regularization with a sufficient-descent condition that uses the current gradient and resembles the classical Armijo criterion. Cartis, Gould, and Toint used Taylor …
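The H\"older condition referred to here has the standard form (with $\beta$ the H\"older exponent and $L$ the corresponding constant)
\[
\|\nabla^p f(x) - \nabla^p f(y)\| \;\le\; L\,\|x - y\|^{\beta} \quad \text{for all } x, y, \qquad \beta \in (0,1],
\]
which recovers the usual Lipschitz assumption on the $p$-th derivatives when $\beta = 1$.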

Regularized Stochastic Dual Dynamic Programming for convex nonlinear optimization problems

We define a regularized variant of the Dual Dynamic Programming algorithm, called REDDP (REgularized Dual Dynamic Programming), to solve nonlinear dynamic programming equations. We extend the algorithm to solve nonlinear stochastic dynamic programming equations. The corresponding algorithm, called SDDP-REG, can be seen as an extension of a regularization of the Stochastic Dual Dynamic Programming (SDDP) …
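As a sketch of what "regularized" means in this context (generic notation, not necessarily the paper's): at stage $t$ of iteration $k$, the forward pass solves a prox-augmented version of the current cut approximation,
\[
x_t^{k} \in \arg\min_{x_t \in X_t}\; c_t^{\top} x_t + \mathcal{Q}_{t+1}^{k}(x_t) + \rho_k\,\|x_t - \bar{x}_t^{k}\|^2,
\]
where $\mathcal{Q}_{t+1}^{k}$ is the polyhedral lower approximation of the cost-to-go function built so far and $\bar{x}_t^{k}$ is a prox-center; the quadratic term damps the oscillations typical of early iterations.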

Universal regularization methods – varying the power, the smoothness and the accuracy

Adaptive cubic regularization methods have emerged as a credible alternative to linesearch and trust-region methods for smooth nonconvex optimization, with optimal complexity amongst second-order methods. Here we consider a general/new class of adaptive regularization methods that use first- or higher-order local Taylor models of the objective, regularized by a(ny) power of the step size, and applied …
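In symbols, and with notation assumed for illustration, the models in this class take the form
\[
m_k(s) \;=\; T_p(x_k, s) \;+\; \frac{\sigma_k}{r}\,\|s\|^{r},
\]
where $T_p(x_k, s)$ is the $p$-th order Taylor expansion of the objective at $x_k$, $\sigma_k > 0$ is adaptively updated, and $r$ is the regularization power; the choice $p = 2$, $r = 3$ recovers adaptive cubic regularization.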

Regularized monotonic regression

Monotonic (isotonic) regression (MR) is a powerful tool used for solving a wide range of important applied problems. One of its features, which limits its use in some areas, is that it produces a piecewise-constant fitted response. To smooth the fitted response, we introduce a regularization term in the MR formulated …
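A minimal sketch of the idea, assuming a least-squares MR objective and a simple quadratic smoothing term (the paper's exact regularization may differ):
\[
\min_{f_1 \le f_2 \le \cdots \le f_n}\; \sum_{i=1}^{n} w_i\,(f_i - y_i)^2 \;+\; \mu \sum_{i=1}^{n-1} (f_{i+1} - f_i)^2,
\]
where the first term is the usual weighted MR fit to responses $y_i$ and the penalty with weight $\mu > 0$ discourages the abrupt jumps of a piecewise-constant solution.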

A Sparsity Preserving Convexification Procedure for Indefinite Quadratic Programs Arising in Direct Optimal Control

Quadratic programs (QPs) with an indefinite Hessian matrix arise naturally in some direct optimal control methods, e.g. as subproblems in a sequential quadratic programming (SQP) scheme. Typically, the Hessian is approximated by a positive definite matrix to ensure a unique solution; such a procedure is called \emph{regularization}. We present a novel regularization method tailored …
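For contrast, the textbook baseline can be sketched as follows; this is a hypothetical illustration, not the paper's method, and its drawback is precisely that the dense eigendecomposition destroys sparsity:

import numpy as np

def clip_regularize(H, delta=1e-8):
    # Baseline "project to positive definite" regularization: replace
    # eigenvalues of the symmetric Hessian H below delta by delta.
    # The dense eigendecomposition fills in zeros, so sparsity is lost,
    # which is the drawback a sparsity-preserving convexification avoids.
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.maximum(w, delta)) @ V.T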

Improved pointwise iteration-complexity of a regularized ADMM and of a regularized non-Euclidean HPE framework

This paper describes a regularized variant of the alternating direction method of multipliers (ADMM) for solving linearly constrained convex programs. It is shown that the pointwise iteration-complexity of the new method is better than the corresponding one for the standard ADMM and that, up to a logarithmic term, it is identical to the ergodic iteration-complexity …
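For orientation, one generic way a proximal regularization enters ADMM, written for $\min f(x) + g(z)$ subject to $Ax + Bz = b$ with assumed notation (the paper's precise variant may differ):
\[
\begin{aligned}
x^{k+1} &\in \arg\min_{x}\; \mathcal{L}_{\rho}(x, z^{k}, y^{k}) + \tfrac{\mu}{2}\|x - x^{k}\|^2,\\
z^{k+1} &\in \arg\min_{z}\; \mathcal{L}_{\rho}(x^{k+1}, z, y^{k}),\\
y^{k+1} &= y^{k} - \rho\,(A x^{k+1} + B z^{k+1} - b),
\end{aligned}
\]
where $\mathcal{L}_{\rho}$ is the augmented Lagrangian with penalty $\rho$ and the added proximal term is what distinguishes the regularized update.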

Generalized Conjugate Gradient Methods for $\ell_1$ Regularized Convex Quadratic Programming with Finite Convergence

The conjugate gradient (CG) method is an efficient iterative method for solving large-scale strongly convex quadratic programs (QPs). In this paper we propose some generalized CG (GCG) methods for solving the $\ell_1$-regularized (possibly not strongly) convex QP that terminate at an optimal solution in a finite number of iterations. At each iteration, our methods first …
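For reference, the problem class has the form (standard notation assumed)
\[
\min_{x \in \mathbb{R}^n}\; \tfrac{1}{2}\,x^{\top} Q x + c^{\top} x + \lambda \|x\|_1, \qquad Q \succeq 0,\; \lambda > 0,
\]
and finite termination is plausible here because, once the correct support and sign pattern of an optimal $x$ are identified, the $\ell_1$ term becomes linear on that set and the problem reduces to a smooth convex QP, the setting in which CG-type methods terminate finitely.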