A Unified Approach for Maximizing Continuous $\gamma$-weakly DR-submodular Functions

This paper presents a unified approach for maximizing continuous \(\gamma\)-weakly DR-submodular functions that encompasses a range of settings and oracle access types. Our approach includes a Frank-Wolfe-type offline algorithm for both monotone and non-monotone functions, under different restrictions on the convex feasible region. We consider settings where the oracle provides access to either the gradient of the function or only the function value, and where this access is either deterministic or stochastic. In each case, we bound the number of oracle calls (the oracle complexity) needed to achieve the stated approximation guarantee to within a user-specified additive error \(\epsilon\). We obtain novel results in several scenarios, including when only a value oracle is available over the feasible set and when the function is non-monotone with \(\gamma<1\), and we improve upon existing results in other scenarios; many of these settings are studied here for the first time. Furthermore, we extend our results to the online setup under both bandit and semi-bandit feedback models. In particular, we provide the first regret analysis for bandit feedback in \(\gamma\)-weakly DR-submodular maximization, even for \(\gamma=1\), and we demonstrate further improvements and first-of-their-kind results in various semi-bandit settings.
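To make the Frank-Wolfe-type approach concrete, the sketch below implements the classic continuous-greedy/Frank-Wolfe update for monotone continuous DR-submodular maximization over a down-closed polytope. This is an illustrative baseline under assumed instance choices (the objective \(f(x)=\sum_i \log(1+x_i)\), the budgeted box constraint, and the helper names `frank_wolfe_dr` and `lmo` are all assumptions for this sketch), not the paper's actual algorithm or guarantees.

```python
import numpy as np

def frank_wolfe_dr(grad, lmo, n, T=200):
    """Frank-Wolfe sketch for monotone DR-submodular maximization.

    Starts at 0 and takes T steps of size 1/T toward the linear
    maximizer; for a down-closed feasible set the iterate stays feasible.
    """
    x = np.zeros(n)
    for _ in range(T):
        v = lmo(grad(x))   # linear maximization oracle over the feasible set
        x = x + v / T      # fixed step size 1/T
    return x

# Illustrative instance (an assumption, not from the paper):
# f(x) = sum_i log(1 + x_i), monotone DR-submodular on [0, 1]^n,
# feasible set P = {x in [0,1]^n : sum(x) <= b}, a down-closed polytope.
n, b = 5, 2.0
grad = lambda x: 1.0 / (1.0 + x)

def lmo(g):
    # Maximize <g, v> over P: greedily place budget b on the
    # coordinates with the largest gradient entries.
    v = np.zeros(n)
    budget = b
    for i in np.argsort(-g):
        v[i] = min(1.0, budget)
        budget -= v[i]
        if budget <= 0:
            break
    return v

x_hat = frank_wolfe_dr(grad, lmo, n, T=200)
```

For a monotone DR-submodular objective this scheme attains a \((1-e^{-1})\)-approximation as \(T\) grows (and \((1-e^{-\gamma})\) in the \(\gamma\)-weakly case); here each oracle call uses the exact gradient, whereas the stochastic and value-oracle settings in the paper replace `grad(x)` with noisy or zeroth-order estimates.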
