ARTICLE
TITLE

A Reinforcement Learning Based Data Caching in Wireless Networks

Muhammad Sheraz    
Shahryar Shafique    
Sohail Imran    
Muhammad Asif    
Rizwan Ullah    
Muhammad Ibrar    
Jahanzeb Khan and Lunchakorn Wuttisittikulkij    

Abstract

Data caching has emerged as a promising technique to handle growing data traffic and backhaul congestion in wireless networks. However, a key concern is how and where to place contents to optimize data access by the users. Data caching can be exploited close to users by deploying cache entities at Small Base Stations (SBSs). In this approach, SBSs cache contents through the core network during off-peak traffic hours. Then, SBSs provide cached contents to content-demanding users during peak traffic hours with low latency. In this paper, we exploit the potential of data caching at the SBS level to minimize data access delay. We propose an intelligence-based data caching mechanism inspired by an artificial intelligence approach known as Reinforcement Learning (RL). Our proposed RL-based data caching mechanism adapts through dynamic learning and tracks network states to capture users' diverse and time-varying data demands. Our proposed approach optimizes data caching at the SBS level by observing users' data demands and locations to efficiently utilize the limited cache resources of the SBS. Extensive simulations are performed to evaluate the performance of the proposed caching mechanism with respect to various factors such as cache capacity and data library size. The obtained results demonstrate that our proposed caching mechanism achieves a 4% performance gain in terms of delay vs. contents, a 3.5% performance gain in terms of delay vs. users, a 2.6% performance gain in terms of delay vs. cache capacity, an 18% performance gain in terms of percentage traffic offloading vs. popularity skewness, and a 6% performance gain in terms of backhaul saving vs. cache capacity.
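The abstract does not give the authors' algorithm details, so the following is a minimal, hypothetical sketch of how RL-driven caching at a single SBS could look: a tabular Q-learning agent decides which cached item to evict on each cache miss, with user requests drawn from a Zipf popularity law whose exponent stands in for the popularity skewness mentioned above. All names, rewards, and hyperparameters (N_CONTENTS, CACHE_SIZE, GAMMA_ZIPF, etc.) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

N_CONTENTS = 50                         # size of the content library (assumed)
CACHE_SIZE = 5                          # SBS cache capacity in items (assumed)
GAMMA_ZIPF = 0.8                        # popularity skewness of requests (assumed)
ALPHA, GAMMA_RL, EPS = 0.1, 0.9, 0.1    # learning rate, discount, exploration
HIT_REWARD, MISS_PENALTY = 1.0, -1.0    # proxy for low vs. high access delay
N_REQUESTS = 20_000

rng = np.random.default_rng(0)

# Zipf request probabilities over the content library.
ranks = np.arange(1, N_CONTENTS + 1)
probs = ranks ** (-GAMMA_ZIPF)
probs /= probs.sum()

# Q-table: for each requested content (state), the value of evicting
# each cache slot (action) to make room for it.
Q = np.zeros((N_CONTENTS, CACHE_SIZE))
cache = list(rng.choice(N_CONTENTS, CACHE_SIZE, replace=False))
requests = rng.choice(N_CONTENTS, size=N_REQUESTS + 1, p=probs)

hits = 0
for t in range(N_REQUESTS):
    req = requests[t]
    if req in cache:
        hits += 1                        # served from the SBS cache: low delay
        continue
    # Cache miss: content is fetched over the backhaul; pick a slot to evict.
    if rng.random() < EPS:
        slot = rng.integers(CACHE_SIZE)  # explore a random eviction
    else:
        slot = int(np.argmax(Q[req]))    # exploit learned eviction values
    cache[slot] = req
    # Reward proxy: does the very next request hit the updated cache?
    nxt = requests[t + 1]
    reward = HIT_REWARD if nxt in cache else MISS_PENALTY
    Q[req, slot] += ALPHA * (reward + GAMMA_RL * Q[nxt].max() - Q[req, slot])

print(f"cache hit ratio: {hits / N_REQUESTS:.3f}")
```

In this toy setup, increasing GAMMA_ZIPF concentrates requests on fewer contents, so the hit ratio (and hence traffic offloading) improves with popularity skewness, which is consistent with the trend the abstract reports.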