Budgeted Multi-Armed Bandits with Asymmetric Confidence Intervals

Marco Heyden, Vadim Arzamasov, Edouard Fouché, Klemens Böhm

Published in KDD '24, 2024

We study the stochastic Budgeted Multi-Armed Bandit (MAB) problem, where a player chooses from $K$ arms with unknown expected rewards and costs. The goal is to maximize the total reward under a budget constraint. A player thus seeks to choose the arm with the highest reward-cost ratio as often as possible. Current approaches for this problem have several issues, which we illustrate. To overcome them, we propose a new upper confidence bound (UCB) sampling policy, $\omega$-UCB, that uses asymmetric confidence intervals. These intervals scale with the distance between the sample mean and the bounds of a random variable, yielding a tighter and more accurate estimation of the reward-cost ratio than competing approaches. We show that our approach has sublinear instance-dependent regret in general and logarithmic regret for the parameter choice $\rho\geq 1$, and that it consistently outperforms existing policies in synthetic and real settings.
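To give a feel for how such a policy operates, below is a minimal Python sketch of a budgeted UCB-style loop that plays the arm with the largest optimistic reward-cost ratio (upper bound on reward over lower bound on cost). The confidence interval here is a Wilson-score-style interval for means in $[0,1]$, used purely as a stand-in because it is also asymmetric and tightens near the bounds; it is not the paper's exact $\omega$-UCB construction, and the arm samplers, the `rho` parameter usage, and all function names are illustrative assumptions.

```python
import math
import random

def wilson_bounds(mean, n, t, rho=1.0):
    """Asymmetric (Wilson-score-style) confidence interval for a mean in [0, 1].

    Illustrative stand-in for the paper's omega-UCB interval: it is asymmetric
    around `mean` and shrinks as `mean` approaches the bounds 0 or 1.
    """
    z2 = rho * math.log(max(t, 2))                 # exploration term, grows with time t
    denom = 1.0 + z2 / n
    center = (mean + z2 / (2 * n)) / denom
    half = math.sqrt(z2 * (mean * (1 - mean) / n + z2 / (4 * n * n))) / denom
    return max(0.0, center - half), min(1.0, center + half)

def budgeted_ucb(arms, budget, rho=1.0, eps=1e-9):
    """Budgeted bandit loop (sketch, not the authors' exact algorithm).

    `arms` is a list of (reward_sampler, cost_sampler) callables returning
    values in [0, 1]. Plays the arm with the largest optimistic reward-cost
    ratio until the budget is exhausted.
    """
    K = len(arms)
    n = [0] * K                      # number of pulls per arm
    reward_sum = [0.0] * K
    cost_sum = [0.0] * K
    total_reward, t = 0.0, 0

    while budget > 0:
        t += 1
        if t <= K:                   # initialization: play each arm once
            i = t - 1
        else:
            def ratio(i):
                r_ucb = wilson_bounds(reward_sum[i] / n[i], n[i], t, rho)[1]
                c_lcb = wilson_bounds(cost_sum[i] / n[i], n[i], t, rho)[0]
                return r_ucb / max(c_lcb, eps)
            i = max(range(K), key=ratio)

        r, c = arms[i][0](), arms[i][1]()
        n[i] += 1
        reward_sum[i] += r
        cost_sum[i] += c
        total_reward += r
        budget -= c
    return total_reward

# Example with two hypothetical arms: the second has a better reward-cost ratio.
arms = [(lambda: random.random() * 0.8, lambda: 0.5 + random.random() * 0.3),
        (lambda: random.random() * 0.6, lambda: 0.2 + random.random() * 0.2)]
print(budgeted_ucb(arms, budget=100.0, rho=1.0))
```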

Access the paper or have a look at our poster.

You can also watch our animated teaser video on YouTube.