Abstract
Task scheduling poses a wide variety of challenges in the cloud computing paradigm, as heterogeneous tasks from a variety of resources arrive at cloud platforms. The most important challenge in this paradigm is to avoid single points of failure: because the tasks of many users run at the cloud provider, it is essential to improve fault tolerance and keep downtime negligible in order to render services to a wide range of customers around the world. In this paper, to tackle this challenge, we precisely calculate task priorities for virtual machines (VMs) based on unit electricity cost, and these priorities are fed to the scheduler. The scheduler is modeled with a deep reinforcement learning technique, the deep Q-network (DQN) model, to make decisions and generate optimal schedules for the VMs based on the priorities it receives. The research is conducted extensively on CloudSim, using the real-time Google Cloud Jobs dataset as input to the algorithm. The experiments are carried out in two phases by categorizing the dataset into a regular and a large dataset of real-time tasks, each evaluated with both fixed and varied numbers of VMs. The proposed DRFTSA is compared with existing state-of-the-art approaches, i.e., the PSO, ACO, and GA algorithms. The results reveal that DRFTSA reduces makespan by 30.97%, 35.1%, and 37.12%, failure rates by 39.4%, 44.13%, and 46.19%, and energy consumption by 18.81%, 23.07%, and 28.8% compared with PSO, GA, and ACO, respectively, for both regular and large datasets with both fixed and varied VMs.
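The abstract describes the scheduler only at a high level. The following is a minimal sketch, under assumed names and dimensions, of how a DQN-style agent could map a task's priority features (e.g., unit electricity cost) to a VM choice; it is an illustration only, not the authors' implementation, and the state features, reward shaping, and hyperparameters are all assumptions.

```python
# Illustrative DQN-based VM selection sketch (not the paper's implementation).
# State features, action space size, and hyperparameters are assumed values.
import random
from collections import deque

import torch
import torch.nn as nn

NUM_VMS = 5      # hypothetical number of VMs (action space)
STATE_DIM = 3    # assumed features, e.g. [task length, unit electricity cost, VM load]

class QNetwork(nn.Module):
    """Small feed-forward Q-network: task/VM state -> Q-value per VM."""
    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, x):
        return self.net(x)

q_net = QNetwork(STATE_DIM, NUM_VMS)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)   # replay buffer of (state, action, reward, next_state)
gamma, epsilon = 0.99, 0.1

def select_vm(state: torch.Tensor) -> int:
    """Epsilon-greedy choice of a VM for one incoming task."""
    if random.random() < epsilon:
        return random.randrange(NUM_VMS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def train_step(batch_size: int = 32) -> None:
    """One DQN update from replayed transitions."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    states = torch.stack([b[0] for b in batch])
    actions = torch.tensor([b[1] for b in batch])
    rewards = torch.tensor([b[2] for b in batch], dtype=torch.float32)
    next_states = torch.stack([b[3] for b in batch])

    q_pred = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_target = rewards + gamma * q_net(next_states).max(dim=1).values
    loss = nn.functional.mse_loss(q_pred, q_target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In such a sketch the reward would typically be the negative of a weighted sum of the scheduling objectives reported in the abstract (makespan, failure rate, energy consumption); the exact reward design used in the paper is described in the later sections, not here.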