On the Minimax Regret for Online Learning with Feedback Graphs / Eldowa, Khaled; Esposito, Emmanuel; Cesari, Tom; Cesa-Bianchi, Nicolò. - 36:(2023), pp. 46122-46133. (Paper presented at NeurIPS 2023, the Thirty-seventh Annual Conference on Neural Information Processing Systems, held in New Orleans (USA), December 10-16, 2023).
On the Minimax Regret for Online Learning with Feedback Graphs
Eldowa, Khaled;
2023
Abstract
In this work, we improve on the upper and lower bounds for the regret of online learning with strongly observable undirected feedback graphs. The best known upper bound for this problem is O(√(αT ln K)), where K is the number of actions, α is the independence number of the graph, and T is the time horizon. The √(ln K) factor is known to be necessary when α = 1 (the experts case). On the other hand, when α = K (the bandits case), the minimax rate is known to be Θ(√(KT)), and a lower bound Ω(√(αT)) is known to hold for any α. Our improved upper bound O(√(αT(1 + ln(K/α)))) holds for any α and matches the lower bounds for bandits and experts, while interpolating intermediate cases. To prove this result, we use FTRL with q-Tsallis entropy for a carefully chosen value of q ∈ [1/2, 1) that varies with α. The analysis of this algorithm requires a new bound on the variance term in the regret. We also show how to extend our techniques to time-varying graphs, without requiring prior knowledge of their independence numbers. Our upper bound is complemented by an improved Ω(√(αT (ln K)/(ln α))) lower bound for all α > 1, whose analysis relies on a novel reduction to multitask learning. This shows that a logarithmic factor is necessary as soon as α < K.
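For orientation only, the snippet below sketches the kind of update the abstract describes: FTRL over the probability simplex with a q-Tsallis regularizer, fed graph-based importance-weighted loss estimates. It is a generic sketch under standard notation, not the paper's text: the learning rate η, the neighborhoods N_i, the action I_t, and the estimator shown are our choices for illustration, and the paper's specific tuning of q as a function of α, its learning-rate schedule, and its treatment of nodes without self-loops are not reproduced here.

```latex
% Illustrative sketch (not from the paper): FTRL with the q-Tsallis
% regularizer over the simplex, with graph-based importance-weighted
% loss estimates. The notation (eta, N_i, I_t, hat-ell) is ours.
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
\[
  \psi_q(p) = \frac{1}{1-q}\Bigl(1 - \sum_{i=1}^{K} p_i^{\,q}\Bigr),
  \qquad q \in [1/2, 1),
\]
\[
  p_t = \operatorname*{arg\,min}_{p \in \Delta_{K-1}}
  \Bigl\{ \eta \sum_{s=1}^{t-1} \langle p, \widehat{\ell}_s \rangle
  + \psi_q(p) \Bigr\},
  \qquad
  \widehat{\ell}_{t,i} =
  \frac{\ell_{t,i}}{\sum_{j \in N_i} p_{t,j}}\,
  \mathbf{1}\{I_t \in N_i\},
\]
where $N_i$ is the neighborhood of action $i$ in the feedback graph
(including $i$ itself when it has a self-loop), $I_t \sim p_t$ is the
action played at round $t$, and $\eta > 0$ is a learning rate.
As $q \to 1$ the regularizer recovers the negative Shannon entropy
(the experts case), while $q = 1/2$ gives the Tsallis-INF regularizer
commonly used for multi-armed bandits.
\end{document}
```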
| File | Size | Format |
|---|---|---|
| NeurIPS-2023-on-the-minimax-regret-for-online-learning-with-feedback-graphs-Supplemental-Conference.pdf | 420.75 kB | Adobe PDF |

Access: open access
Type: 2. Post-print / Author's Accepted Manuscript
License: PUBLIC - All rights reserved
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/2990221