A multi-agent Q-learning-based framework for achieving fairness in HTTP Adaptive Streaming
Faculty of Sciences. Mathematics and Computer Science
Modeling Of Systems and Internet Communication (MOSAIC)
S.l.: IEEE, 2014
14th IEEE/IFIP Network Operations and Management Symposium (NOMS), May 05-09, 2014, Krakow, Poland
University of Antwerp
HTTP Adaptive Streaming (HAS) is quickly becoming the de facto standard for Over-The-Top video streaming. In HAS, each video is temporally segmented and stored in different quality levels. Quality selection heuristics, deployed at the video player, dynamically request the most appropriate quality level based on the current network conditions. Today's heuristics are deterministic and static, and thus unable to perform well under highly dynamic network conditions. Moreover, in a multi-client scenario, issues concerning fairness among clients arise, meaning that different clients negatively influence each other as they compete for the same bandwidth. In this article, we propose a Reinforcement Learning-based quality selection algorithm able to achieve fairness in a multi-client setting. A key element of this approach is a coordination proxy in charge of facilitating the coordination among clients. The strength of this approach is three-fold. First, the algorithm is able to learn and adapt its policy depending on network conditions, unlike current HAS heuristics. Second, fairness is achieved without explicit communication among agents, and thus no significant overhead is introduced into the network. Third, no modifications to the standard HAS architecture are required. By evaluating this novel approach through simulations, under mutable network conditions and in several multi-client scenarios, we show that the proposed approach can improve system fairness by up to 60% compared to current HAS heuristics.
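The abstract describes per-client Q-learning agents that select segment quality levels, with fairness driven by an aggregate signal from a coordination proxy rather than explicit agent-to-agent messages. The sketch below is a minimal, illustrative interpretation of that idea: a tabular Q-learning agent with epsilon-greedy quality selection, and a reward that penalizes deviation from a proxy-reported global average quality. All class names, the state encoding, and the reward shaping are assumptions for illustration, not the paper's exact formulation.

```python
import random
from collections import defaultdict


class HASQualityAgent:
    """Tabular Q-learning agent choosing a quality level per video segment.
    Illustrative sketch: state encoding and hyperparameters are assumed,
    not taken from the paper."""

    def __init__(self, n_qualities, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.n_qualities = n_qualities      # number of stored quality levels
        self.alpha = alpha                  # learning rate
        self.gamma = gamma                  # discount factor
        self.epsilon = epsilon              # exploration probability
        self.q = defaultdict(float)         # (state, action) -> Q-value

    def select_quality(self, state):
        """Epsilon-greedy selection over the available quality levels."""
        if random.random() < self.epsilon:
            return random.randrange(self.n_qualities)
        return max(range(self.n_qualities), key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """Standard one-step Q-learning update."""
        best_next = max(self.q[(next_state, a)] for a in range(self.n_qualities))
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])


def fairness_reward(my_quality, global_avg_quality, weight=1.0):
    """Assumed reward shaping: own quality minus a penalty for deviating
    from the network-wide average quality reported by the coordination
    proxy. No client-to-client communication is needed: each agent only
    reads this single aggregate value."""
    return my_quality - weight * abs(my_quality - global_avg_quality)
```

In this reading, the proxy periodically computes the average requested quality across all clients and piggybacks it on segment responses; each agent folds that value into its reward, so clients converge toward similar quality levels without any direct coordination messages.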