Abstract
With the proliferation of video surveillance deployments and related applications, real-time video analysis is critical to intelligent monitoring, autonomous driving, and similar services. Analyzing video streams with high accuracy and low latency through traditional cloud computing is a non-trivial problem. In this paper, we propose a non-orthogonal multiple access (NOMA)-based edge real-time video analysis framework with one edge server (ES) and multiple user equipments (UEs). A cost minimization problem comprising delay, energy consumption, and analysis accuracy is formulated to improve the quality of experience (QoE) of the UEs. To solve this problem efficiently, we propose a joint video frame resolution scaling, task offloading, and resource allocation algorithm based on the Deep Q-Learning Network (JVFRS-TO-RA-DQN), which effectively overcomes the sparsity of a single-layer reward function and accelerates training convergence. JVFRS-TO-RA-DQN consists of two DQN networks that mitigate the curse of dimensionality: one selects the offloading and resource allocation actions, and the other selects the resolution scaling action. Experimental results show that JVFRS-TO-RA-DQN effectively reduces the cost of edge computing and converges faster than other baseline schemes.
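
As a concrete illustration of the cost structure, one plausible weighted-sum formulation for UE n is sketched below. The weights \lambda_t, \lambda_e, \lambda_a, the symbols T_n, E_n, A_n, and the normalization constraint are our assumptions; the abstract only states that delay, energy, and accuracy are combined into a single cost:

    C_n = \lambda_t T_n + \lambda_e E_n + \lambda_a (1 - A_n),
    \qquad \lambda_t + \lambda_e + \lambda_a = 1,

where T_n is the end-to-end delay, E_n the energy consumption, and A_n the analysis accuracy of UE n. Minimizing (1 - A_n) rewards higher accuracy, so lowering the frame resolution trades accuracy against delay and energy within one scalar objective.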
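The two-network design can be sketched as follows. This is a minimal PyTorch illustration under assumed state and action dimensions; STATE_DIM, N_OFFLOAD_RA, N_RESOLUTION, and the layer sizes are hypothetical placeholders, not values from the paper:

    # Sketch of the two-network action factorization described in the
    # abstract. All dimensions below are illustrative assumptions.
    import torch
    import torch.nn as nn

    STATE_DIM = 16      # assumed: task sizes, channel gains, queue state, ...
    N_OFFLOAD_RA = 12   # assumed: joint offloading + resource-allocation actions
    N_RESOLUTION = 4    # assumed: candidate frame-resolution scaling levels

    class QNet(nn.Module):
        """A small fully connected Q-network (architecture is an assumption)."""
        def __init__(self, state_dim: int, n_actions: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, n_actions),
            )

        def forward(self, state: torch.Tensor) -> torch.Tensor:
            return self.net(state)

    # Two separate DQNs instead of one network over the Cartesian product of
    # the two action sets: the output heads total N_OFFLOAD_RA + N_RESOLUTION
    # units rather than N_OFFLOAD_RA * N_RESOLUTION.
    q_offload_ra = QNet(STATE_DIM, N_OFFLOAD_RA)
    q_resolution = QNet(STATE_DIM, N_RESOLUTION)

    state = torch.randn(1, STATE_DIM)  # placeholder observation
    with torch.no_grad():
        a_offload_ra = q_offload_ra(state).argmax(dim=1).item()
        a_resolution = q_resolution(state).argmax(dim=1).item()
    print(f"offloading/RA action: {a_offload_ra}, resolution action: {a_resolution}")

Factoring the action space this way is what keeps the per-network output dimension small and, per the abstract, is one reason the scheme converges faster than a single monolithic DQN baseline.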