Abstract
A deep reinforcement learning method for complete coverage path planning by an unmanned surface vehicle (USV) is proposed. This paper first models the USV and the workspace required for complete coverage. For the complete coverage path planning task, a preprocessing method for raster maps is then proposed that effectively removes blank areas of the raster map that are impossible to cover. The state matrix corresponding to the preprocessed raster map is used as the input of a deep neural network, and a deep Q network (DQN) is used to train the agent's complete coverage path planning strategy. An improvement to the selection of random actions during training is also proposed: in view of the complete coverage task, random actions are replaced with actions toward the nearest uncovered grid. To address the slow convergence of deep reinforcement learning in complete coverage path planning, this paper further proposes an improved method that superimposes a dangerous-action matrix on the final output layer, reducing the risk that the USV selects dangerous actions during learning. Finally, the designed method is validated via simulation examples.
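
As an illustration of the preprocessing idea (the abstract does not specify the algorithm), one plausible reading is a reachability test: free cells that cannot be reached from the USV's start cell can never be covered and may be dropped. A minimal sketch in Python, assuming a 2D numpy grid with the hypothetical encoding 0 = free, 1 = obstacle, and 4-connected motion:

    from collections import deque
    import numpy as np

    def preprocess_grid(grid, start):
        """Mark free cells unreachable from `start` as obstacles.

        Assumed encoding (not from the paper): 0 = free, 1 = obstacle.
        Motion model: 4-connected moves on a 2D numpy array.
        """
        rows, cols = grid.shape
        reachable = np.zeros_like(grid, dtype=bool)
        queue = deque([start])
        reachable[start] = True
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr, nc] == 0 and not reachable[nr, nc]):
                    reachable[nr, nc] = True
                    queue.append((nr, nc))
        cleaned = grid.copy()
        # Blank cells the USV can never reach are treated as obstacles,
        # so they no longer count toward the coverage target.
        cleaned[(grid == 0) & ~reachable] = 1
        return cleaned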
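
The replacement for random exploratory actions can likewise be sketched: instead of a uniform random action, a breadth-first search finds the nearest uncovered free cell and returns the first step toward it. The action ordering, grid encoding, and function names below are illustrative assumptions, not the paper's code:

    from collections import deque
    import numpy as np

    # Assumed action ordering (an illustration choice): up, down, left, right.
    MOVES = ((-1, 0), (1, 0), (0, -1), (0, 1))

    def action_toward_nearest_uncovered(grid, covered, pos):
        """Return the action index of the first move of a shortest
        4-connected path from `pos` to the nearest uncovered free cell,
        or None if every reachable cell is already covered.

        Assumed encodings: grid 0 = free / 1 = obstacle; `covered` is a
        boolean mask of the same shape.
        """
        rows, cols = grid.shape
        queue = deque([(pos, None)])  # (cell, first action taken from pos)
        seen = {pos}
        while queue:
            (r, c), first = queue.popleft()
            if not covered[r, c] and (r, c) != pos:
                return first
            for a, (dr, dc) in enumerate(MOVES):
                nxt = (r + dr, c + dc)
                if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                        and grid[nxt] == 0 and nxt not in seen):
                    seen.add(nxt)
                    queue.append((nxt, a if first is None else first))
        return None

A hypothetical epsilon-greedy wrapper would then call this in place of the usual random branch:

    def select_action(q_values, grid, covered, pos, epsilon):
        # Exploration branch heads for the nearest uncovered cell
        # rather than moving uniformly at random.
        if np.random.rand() < epsilon:
            a = action_toward_nearest_uncovered(grid, covered, pos)
            if a is not None:
                return a
        return int(np.argmax(q_values))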
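
Finally, the dangerous-action superposition can be read as an additive mask on the Q-value output: actions that would leave the grid or enter an obstacle receive a large negative offset before action selection. The penalty value and encodings below are assumptions; the paper only states that a dangerous-action matrix is superimposed on the final output layer:

    import numpy as np

    MOVES = ((-1, 0), (1, 0), (0, -1), (0, 1))  # same assumed action order

    def mask_dangerous_actions(q_values, grid, pos, penalty=-1e6):
        """Superimpose a dangerous-action vector on the network output.

        Actions leading off the grid or into an obstacle (assumed
        encoding: 1 = obstacle) get a large negative offset, so argmax
        avoids them. The penalty magnitude is an illustration choice.
        """
        masked = q_values.copy()
        rows, cols = grid.shape
        for a, (dr, dc) in enumerate(MOVES):
            nr, nc = pos[0] + dr, pos[1] + dc
            if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr, nc] == 1:
                masked[a] += penalty
        return masked

Adding an offset rather than hard-filtering the action set keeps the shape of the output layer unchanged, which is one way such a superposition could be wired into a standard DQN head.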