We consider unconstrained minimization of a finite sum of $N$ continuously differentiable, not necessarily convex, cost functions. Several gradient-like (and more generally, line search) methods, where the full gradient (the sum of the $N$ component costs' gradients) at each iteration~$k$ is replaced with an inexpensive approximation based on a sub-sample~$\mathcal N_k$ of the component costs' gradients, are available in the literature. However, the vast majority of these methods consider pre-determined (either deterministic or random) rules for selecting the subsets~$\mathcal N_k$; these rules are unrelated to the actual progress of the algorithm along iterations. In this paper, we propose a very general framework for nonmonotone line search algorithms with an \emph{adaptive} choice of sub-samples~$\mathcal N_k$. Specifically, we consider master-worker architectures with one master and $N$ workers, where each worker holds one component function~$f_i$. The master maintains the solution estimate~$x_k$ and controls the states of the workers (active or inactive) through a single scalar control parameter~$p_k$. Each active worker sends to the master the value and the gradient of its component cost, while inactive workers stay idle. Parameter~$p_k$ is proportional to the expected (average) number of active workers (which equals the average sample size), and it can increase or decrease along iterations based on a computationally inexpensive estimate of the algorithm's progress. Hence, through parameter~$p_k$, the master sends \emph{feedback} to the workers about the desired sample size at the next iteration. For the proposed algorithmic framework, we show that each accumulation point of the sequence~$\{x_k\}$ is a stationary point of the overall cost function, almost surely. Simulations on both synthetic and real-world data sets illustrate the benefits of the proposed framework with respect to the existing non-adaptive rules.
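To make the described mechanism concrete, the following is a minimal, hedged sketch (not the paper's exact method) of an adaptive-sample-size descent loop: each of $N$ workers is independently active with probability $p_k$ (so the expected sample size is $p_k N$), the master performs an Armijo backtracking line search on the sampled objective, and $p_k$ is adjusted by a simple, illustrative feedback rule based on estimated progress. The toy least-squares problem, the specific feedback thresholds, and the update factors are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite sum: f(x) = (1/N) * sum_i 0.5 * (a_i^T x - b_i)^2  (illustrative problem)
N, d = 20, 5
A = rng.normal(size=(N, d))
b = rng.normal(size=N)

def fi(i, x):  # component cost f_i
    r = A[i] @ x - b[i]
    return 0.5 * r * r

def gi(i, x):  # component gradient of f_i
    return (A[i] @ x - b[i]) * A[i]

def adaptive_sampled_descent(x, p=0.3, iters=200, alpha0=1.0, beta=0.5, c=1e-4):
    """Sketch: worker i is active w.p. p (E[#active] = p*N); the master
    line-searches on the sampled cost and nudges p via a feedback rule."""
    for _ in range(iters):
        active = np.nonzero(rng.random(N) < p)[0]
        if active.size == 0:
            continue
        f = np.mean([fi(i, x) for i in active])
        g = np.mean([gi(i, x) for i in active], axis=0)
        # Armijo backtracking on the sampled objective
        alpha = alpha0
        while (np.mean([fi(i, x - alpha * g) for i in active])
               > f - c * alpha * (g @ g) and alpha > 1e-10):
            alpha *= beta
        x = x - alpha * g
        # Illustrative feedback: good progress -> shrink sample, else grow it
        progress = alpha * (g @ g)
        p = p * 0.9 if progress > 1e-3 else min(1.0, p * 1.1)
        p = max(p, 1.0 / N)  # keep at least one worker active on average
    return x

x_star = adaptive_sampled_descent(np.zeros(d))
```

In this sketch the feedback is a crude multiplicative rule; the framework in the paper instead derives the sample-size update from a computationally inexpensive progress estimate with almost-sure stationarity guarantees.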


Parallel stochastic line search methods with feedback for minimizing finite sums