Abstract
Urban-safety perception is crucial for urban planning and studies of pedestrian street preference. With the development of deep learning and the availability of high-resolution street images, artificial-intelligence methods have been widely adopted for assessing urban-safety perception. However, most current methods rely on the feature-extraction capability of convolutional neural networks (CNNs) trained on large-scale annotated data and mainly provide a regression or classification model; interpretable, complete evaluation systems for urban-safety perception remain lacking. To improve the interpretability of evaluation models and achieve human-like safety perception, we proposed a complete decision-making framework based on reinforcement learning (RL). We developed a novel feature-extraction module: a scalable visual computational model, based on visual semantic and functional features, that fully exploits the knowledge of domain experts. Furthermore, we designed the RL module, a combination of a Markov decision process (MDP)-based street-view observation environment and an intelligent agent trained with a deep reinforcement-learning (DRL) algorithm, to achieve human-level perception abilities. Experimental results on our crowdsourced dataset showed that the framework achieved satisfactory prediction performance and excellent visual interpretability.
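To illustrate the kind of MDP formulation the abstract describes, the sketch below shows a toy environment in which an agent observes per-image feature vectors and emits a discrete safety rating, receiving a reward based on agreement with crowdsourced labels. All names (`StreetViewEnv`, the feature vectors, the 0-4 rating scale, the negative-absolute-error reward) are hypothetical illustrations, not the authors' actual implementation.

```python
import random

class StreetViewEnv:
    """Hypothetical MDP sketch: the agent steps through street-view images,
    observing a feature vector per image and rating its perceived safety.
    This is an illustrative toy, not the paper's environment."""

    def __init__(self, features, labels):
        self.features = features  # per-image visual semantic/functional features
        self.labels = labels      # crowdsourced safety ratings (ground truth)
        self.idx = 0

    def reset(self):
        """Start a new episode at the first image and return its observation."""
        self.idx = 0
        return self.features[self.idx]

    def step(self, action):
        """Apply a safety rating; reward is the negative absolute error
        between the agent's rating and the crowdsourced label."""
        reward = -abs(action - self.labels[self.idx])
        self.idx += 1
        done = self.idx >= len(self.features)
        obs = None if done else self.features[self.idx]
        return obs, reward, done

# Toy rollout with a random policy (a DRL agent would learn to maximize reward).
random.seed(0)
env = StreetViewEnv(features=[[0.2, 0.8], [0.6, 0.1]], labels=[3, 1])
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = random.randint(0, 4)  # discrete safety rating on a 0-4 scale
    obs, reward, done = env.step(action)
    total_reward += reward
```

In a full DRL setup, the random policy above would be replaced by a trained agent (e.g. a value- or policy-based network) that maps the observed features to ratings approximating human perception.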