AS-BOX: Additional Sampling Method for Weighted Sum Problems with Box Constraints

A class of optimization problems characterized by a weighted finite-sum objective function subject to box constraints is considered. We propose a novel stochastic optimization method, named AS-BOX (Additional Sampling for BOX constraints), that combines projected gradient directions with adaptive variable sample size strategies and nonmonotone line search. The method dynamically adjusts the batch size based …
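The abstract is truncated, so the following is only a minimal sketch of the kind of loop it describes: average a batch of component gradients, take a step, and project the iterate back onto the box. The callback grad_i, the fixed step length, and the batch-growth rule are illustrative assumptions; AS-BOX's actual additional sampling and nonmonotone line-search rules are in the paper.

```python
import numpy as np

def project_box(x, lower, upper):
    """Componentwise projection onto the box [lower, upper]."""
    return np.clip(x, lower, upper)

def projected_sg_sketch(grad_i, x0, lower, upper, n,
                        batch0=8, alpha=0.1, max_iter=100, seed=0):
    """Projected stochastic gradient loop with a growing batch size.
    grad_i(x, i) is assumed to return the gradient of the i-th local
    cost at x; alpha stands in for the step produced by AS-BOX's
    nonmonotone line search."""
    rng = np.random.default_rng(seed)
    x, batch = x0.copy(), batch0
    for _ in range(max_iter):
        sample = rng.choice(n, size=min(batch, n), replace=False)
        g = np.mean([grad_i(x, i) for i in sample], axis=0)
        x = project_box(x - alpha * g, lower, upper)
        batch = min(n, int(np.ceil(1.1 * batch)))  # illustrative growth rule
    return x
```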

ASPEN: An Additional Sampling Penalty Method for Finite-Sum Optimization Problems with Nonlinear Equality Constraints

We propose a novel algorithm for solving non-convex, nonlinear equality-constrained finite-sum optimization problems. The proposed algorithm incorporates an additional sampling strategy for updating the sample size into the well-known framework of quadratic penalty methods. Thus, depending on the problem at hand, the resulting method may exhibit a sample size strategy ranging from a mini-batch on one …
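As a hedged illustration of the quadratic penalty framework mentioned above, the sketch below assembles the subsampled gradient of the penalty function P_mu(x) = f_S(x) + (mu/2)||c(x)||^2. The callbacks grad_f_i, c, and jac_c are hypothetical names; ASPEN's sample size update and penalty parameter schedule are not reproduced here.

```python
import numpy as np

def penalty_grad(x, grad_f_i, c, jac_c, sample, mu):
    """Subsampled gradient of the quadratic penalty
        P_mu(x) = (1/|S|) * sum_{i in S} f_i(x) + (mu/2) * ||c(x)||^2,
    where c(x) stacks the nonlinear equality constraints.
    grad_f_i(x, i), c(x), and the Jacobian jac_c(x) are assumed
    user-supplied callbacks."""
    g = np.mean([grad_f_i(x, i) for i in sample], axis=0)
    return g + mu * jac_c(x).T @ c(x)
```

A penalty method would call this inside an inner minimization loop and increase mu between outer iterations.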

ASMOP: Additional sampling stochastic trust region method for multi-objective problems

We consider an unconstrained multi-criteria optimization problem with finite-sum objective functions. The proposed algorithm belongs to a non-monotone trust-region framework in which an additional sampling approach is used to govern the sample size and the acceptance of a candidate point. Depending on the problem, the method can result in a mini-batch or an increasing sample size …
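A standard building block behind multi-objective descent steps is the minimum-norm element of the convex hull of the objectives' gradients, whose negative is a common descent direction. The two-objective closed form below is a generic illustration under that assumption, not ASMOP's trust-region model step.

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Negative of the minimum-norm point in the convex hull of two
    gradients, i.e. the solution of min_{t in [0,1]} ||t*g1 + (1-t)*g2||^2.
    When nonzero, it is a descent direction for both objectives."""
    diff = g1 - g2
    denom = diff @ diff
    t = 0.5 if denom == 0.0 else np.clip(-(g2 @ diff) / denom, 0.0, 1.0)
    return -(t * g1 + (1.0 - t) * g2)
```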

IPAS: An Adaptive Sample Size Method for Weighted Finite Sum Problems with Linear Equality Constraints

Optimization problems with an objective function in the form of a weighted sum, subject to linear equality constraints, are considered. Since both the number of local cost functions and the number of constraints can be large, a stochastic optimization method is proposed. The method belongs to the class of variable sample size first-order methods, …
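One way a first-order method can maintain feasibility with respect to linear equality constraints is to project each step onto the affine set {x : Ax = b}; the closed form below is standard linear algebra, offered as a sketch rather than IPAS's actual update.

```python
import numpy as np

def project_affine(x, A, b):
    """Euclidean projection of x onto {y : A y = b}, assuming A has
    full row rank, via  x - A^T (A A^T)^{-1} (A x - b)."""
    return x - A.T @ np.linalg.solve(A @ A.T, A @ x - b)
```

A subsampled gradient step followed by this projection keeps all iterates feasible.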

Variable metric proximal stochastic gradient methods with additional sampling

Regularized empirical risk minimization problems arise in a variety of applications, including machine learning, signal processing, and image processing. Proximal stochastic gradient algorithms are a standard approach to solving these problems due to their low computational cost per iteration and relatively simple implementation. This paper introduces a class of proximal stochastic gradient methods built …
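To make the variable-metric proximal ingredient concrete, the sketch below gives the proximal operator of an L1 regularizer under a diagonal metric, the simplest variable-metric case. The L1 choice is an illustrative assumption, since the abstract does not specify the regularizer.

```python
import numpy as np

def prox_l1_diag(y, lam, d):
    """Proximal operator of lam*||.||_1 in the metric induced by diag(d):
        argmin_z  lam*||z||_1 + 0.5*(z - y)^T diag(d) (z - y),
    which soft-thresholds coordinate j with threshold lam/d[j]."""
    return np.sign(y) * np.maximum(np.abs(y) - lam / d, 0.0)
```

A variable-metric proximal stochastic gradient step then reads x_next = prox_l1_diag(x - alpha * g / d, alpha * lam, d) for a stochastic gradient g and step size alpha.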

SMOP: Stochastic trust region method for multi-objective problems

The problem considered is a multi-objective optimization problem, in which the goal is to find an optimal value of a vector function representing various criteria. The aim of this work is to develop an algorithm that utilizes the trust-region framework with probabilistic model functions and is able to cope with noisy problems, using inaccurate functions and …
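Any trust-region method, including probabilistic-model variants, is organized around the ratio rho between the actual decrease and the decrease predicted by the model. The radius update below uses conventional default parameters as an illustration; SMOP's specific safeguards for inaccurate functions are in the paper.

```python
def update_radius(rho, delta, eta1=0.1, eta2=0.75,
                  gamma_dec=0.5, gamma_inc=2.0, delta_max=10.0):
    """Classical trust-region radius update driven by the agreement
    ratio rho = actual decrease / model-predicted decrease; the
    parameter values are conventional defaults, not SMOP's."""
    if rho < eta1:
        return gamma_dec * delta                   # poor agreement: shrink
    if rho > eta2:
        return min(gamma_inc * delta, delta_max)   # good agreement: expand
    return delta                                   # otherwise: keep radius
```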

Spectral Stochastic Gradient Method with Additional Sampling for Finite and Infinite Sums

In this paper, we propose a new stochastic gradient method for the numerical minimization of finite sums. We also propose a modified version of this method applicable to more general problems, referred to as infinite sum problems, where the objective function is in the form of a mathematical expectation. The method is based on a strategy to …
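Spectral gradient methods typically choose the step length by the Barzilai-Borwein formula; the sketch below computes it from successive iterates and (possibly stochastic) gradients. The fallback value is an illustrative safeguard, and the safeguarding intervals a full method would use are omitted.

```python
import numpy as np

def bb_stepsize(x_prev, x, g_prev, g, fallback=1e-2):
    """Spectral (Barzilai-Borwein) step alpha = (s^T s) / (s^T y) with
    s = x - x_prev and y = g - g_prev; with stochastic gradients s^T y
    can be nonpositive, in which case a fallback value is returned."""
    s, y = x - x_prev, g - g_prev
    sty = s @ y
    return (s @ s) / sty if sty > 0.0 else fallback
```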

A Hessian inversion-free exact second order method for distributed consensus optimization

We consider a standard distributed consensus optimization problem where a set of agents connected over an undirected network minimize the sum of their individual (local) strongly convex costs. The Alternating Direction Method of Multipliers (ADMM) and the Proximal Method of Multipliers (PMM) have proved to be effective frameworks for the design of exact distributed second-order methods …
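For orientation, the sketch below is the textbook global-consensus ADMM iteration for strongly convex quadratic local costs f_i(x) = 0.5 x^T H_i x + q_i^T x. It relies on direct solves, which is precisely what a Hessian inversion-free method avoids; it illustrates the framework the paper builds on, not the paper's method.

```python
import numpy as np

def consensus_admm_quadratic(H, q, rho=1.0, iters=100):
    """Global-consensus ADMM for min_x sum_i 0.5*x^T H[i] x + q[i]^T x.
    Each agent i holds a local copy x[i]; z is the consensus variable
    and u[i] the scaled dual variable."""
    m, n = len(H), q[0].shape[0]
    x = [np.zeros(n) for _ in range(m)]
    u = [np.zeros(n) for _ in range(m)]
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(m):
            # local solve; an inversion-free scheme would replace this
            x[i] = np.linalg.solve(H[i] + rho * np.eye(n),
                                   rho * (z - u[i]) - q[i])
        z = np.mean([x[i] + u[i] for i in range(m)], axis=0)  # averaging step
        for i in range(m):
            u[i] += x[i] - z                                   # dual update
    return z
```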