A Simplified Convergence Theory for Byzantine Resilient Stochastic Gradient Descent

In distributed learning, a central server trains a model according to updates provided by nodes holding local data samples. In the presence of one or more malicious nodes sending incorrect information (a Byzantine adversary), standard algorithms for model training such as stochastic gradient descent (SGD) fail to converge. In this paper, we present a simplified …
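
As a rough illustration of the setting rather than the paper's own method, a common Byzantine-resilient variant of SGD has the server replace mean aggregation of worker gradients with a robust aggregator such as the coordinate-wise median, so that a minority of arbitrary updates cannot derail the step. The toy problem, worker count, and the set byzantine_ids below are hypothetical.

```python
import numpy as np

# Sketch of Byzantine-resilient SGD on a toy least-squares problem.
# The robust aggregator (coordinate-wise median) is one standard choice from the
# Byzantine-resilience literature, not necessarily the one analysed in the paper.

rng = np.random.default_rng(0)
d, n_workers, byzantine_ids = 5, 10, {0, 1}      # hypothetical dimensions and adversaries
x_true = rng.normal(size=d)

def stochastic_grad(x):
    """Noisy gradient of 0.5 * ||x - x_true||^2, standing in for a worker's local data."""
    return (x - x_true) + 0.1 * rng.normal(size=d)

x = np.zeros(d)
for step in range(200):
    grads = []
    for w in range(n_workers):
        if w in byzantine_ids:
            grads.append(10.0 * rng.normal(size=d))   # arbitrary (malicious) update
        else:
            grads.append(stochastic_grad(x))
    # Robust aggregation: coordinate-wise median instead of the usual mean.
    robust_grad = np.median(np.stack(grads), axis=0)
    x -= 0.1 * robust_grad

print("distance to optimum:", np.linalg.norm(x - x_true))
```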

Stochastic nested primal-dual method for nonconvex constrained composition optimization

In this paper we study nonconvex constrained composition optimization problems, in which the objective contains a composition of two expected-value functions whose exact information is typically expensive to compute. We propose a STochastic nEsted Primal-dual (STEP) method for such problems. In each iteration, with an auxiliary variable introduced to track the inner-layer function …
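
To make the role of the auxiliary tracking variable concrete, the sketch below applies the generic idea (not the STEP method itself) to an unconstrained toy composition min_x f(E[g(x; xi)]): a running estimate u of the inner expectation is updated with a tracking rate beta, and the chain rule is applied through u instead of through an exact inner evaluation. The problem instance, step sizes, and the omission of constraints and any dual update are all illustrative assumptions.

```python
import numpy as np

# Toy stochastic composition problem: min_x f(E[g(x; xi)]) with
#   E[g(x)] = x**2 (elementwise)  and  f(u) = 0.5 * ||u - 1||^2,
# so the inner value should be driven towards 1.

rng = np.random.default_rng(0)

def g_sample(x):
    """Noisy sample of the inner mapping."""
    return x**2 + 0.1 * rng.normal(size=x.shape)

def g_jacobian_sample(x):
    """Noisy sample of the inner Jacobian (diagonal, since g acts elementwise)."""
    return np.diag(2.0 * x + 0.1 * rng.normal(size=x.shape))

def f_grad(u):
    """Gradient of the smooth outer function f."""
    return u - 1.0

x = np.full(3, 2.0)
u = g_sample(x)                 # auxiliary variable tracking the inner-layer value
alpha, beta = 0.05, 0.2         # primal step size and tracking rate (hypothetical)
for k in range(2000):
    u = (1 - beta) * u + beta * g_sample(x)        # track E[g(x)] without exact evaluation
    grad_est = g_jacobian_sample(x).T @ f_grad(u)  # chain rule through the tracked value
    x -= alpha * grad_est

print("x:", x, "(each coordinate should approach 1)")
```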

Computing Tchebychev weight space decomposition for multiobjective discrete optimization problems

Multiobjective discrete optimization (MODO) techniques, including weight space decomposition, have received increasing attention over the last decade. The primary weight space decomposition technique in the literature is defined for the weighted sum utility function, through which sets of weights are assigned to points in a subset of the nondominated set. Recent work has begun to study the …
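
For contrast with the Tchebychev case studied here, a small sketch of the weighted sum decomposition the abstract refers to: for a biobjective instance, each weight lambda in [0, 1] is assigned to the nondominated point minimising lambda*f1 + (1 - lambda)*f2, so the weight space splits into intervals (empty for unsupported points). The point set and the grid approximation of those intervals are hypothetical.

```python
import numpy as np
from collections import defaultdict

# Grid approximation of the weighted sum weight space decomposition
# for a handful of nondominated images (f1, f2) of a biobjective problem.

points = np.array([[1.0, 9.0], [3.0, 5.0], [6.0, 2.0], [9.0, 1.0]])

decomposition = defaultdict(list)
for lam in np.linspace(0.0, 1.0, 1001):
    scores = lam * points[:, 0] + (1 - lam) * points[:, 1]
    decomposition[int(np.argmin(scores))].append(lam)

for idx, lams in sorted(decomposition.items()):
    print(f"point {points[idx].tolist()} optimal for lambda in "
          f"[{min(lams):.3f}, {max(lams):.3f}] (approx.)")
```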