Abstract
With the rapid development of the internet, mobile edge computing (MEC) has been proposed to provide computational capabilities near users and to reduce the high latency caused by ever-growing volumes of data and applications. Because computing resources are limited, designing computation-offloading schemes for MEC systems remains challenging: poor offloading decisions lead to transmission delays and wasted energy. This paper focuses on a task-offloading scheme for an MEC-based system in which each mobile device acts as an independent agent responsible for scheduling its own delay-sensitive tasks. However, the time-varying network dynamics and the heterogeneous features of real-time data tasks make it difficult to find an optimal offloading solution, and existing centralized or distributed algorithms require substantial computational resources for such complex problems. To address this problem, we design a novel deep reinforcement learning (DRL)-based approach that uses a parameterized indexed value function for value estimation. We formulate the task-offloading problem as a Markov decision process (MDP), with the objective of minimizing the total delay of data processing. Experimental results show that our algorithm significantly improves users' offloading performance over traditional methods.