Abstract
To address the inability of traditional web technologies to respond effectively to Winter-Olympics-related user questions that contain multiple intents, this paper proposes BCNBLMATT, a multi-intent recognition model based on multi-model fusion. The model targets the characteristics of Winter-Olympics-related Chinese question text, namely complex semantics, strong contextual relevance, and a large number of informative features, as well as the limitations of traditional word-vector models, such as insufficient expressiveness of the text representation and limited attention to salient features. BCNBLMATT first obtains a comprehensive feature-vector representation of the question text through BERT. It then applies a multi-scale text convolutional neural network and a BiLSTM-multi-head-attention model (a joint model combining a bidirectional long short-term memory network with a multi-head attention mechanism) to capture local features at multiple scales and key contextual information at multiple levels. Finally, the two kinds of features are concatenated and fused to yield a richer and more comprehensive representation of the question text, which improves the model's performance on the multi-intent recognition task. Comparative experiments on the Winter Olympics Chinese question dataset and the MixATIS question dataset show that BCNBLMATT significantly improves macro-averaged precision, macro-averaged recall, and macro-averaged F1 score, and exhibits better generalization. This study provides an effective solution to multi-intent recognition for Winter Olympics questions, overcomes the limitations of traditional models, and offers new ideas for improving multi-intent recognition performance.
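As a rough illustration of the fusion architecture summarized above, the following PyTorch sketch wires BERT token embeddings into a multi-scale text CNN branch and a BiLSTM plus multi-head attention branch, then concatenates the two feature vectors for multi-intent classification. All layer sizes, kernel widths, head counts, and the number of intents are illustrative assumptions, not the paper's reported hyperparameters.

```python
import torch
import torch.nn as nn

class BCNBLMATTSketch(nn.Module):
    """Minimal sketch of the fused architecture; hyperparameters are assumed."""

    def __init__(self, bert_dim=768, num_intents=18,
                 cnn_channels=128, kernel_sizes=(2, 3, 4),
                 lstm_hidden=256, num_heads=8):
        super().__init__()
        # Multi-scale text CNN branch: one 1-D convolution per kernel size.
        self.convs = nn.ModuleList(
            nn.Conv1d(bert_dim, cnn_channels, k) for k in kernel_sizes
        )
        # BiLSTM followed by multi-head self-attention over its hidden states.
        self.bilstm = nn.LSTM(bert_dim, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * lstm_hidden, num_heads,
                                          batch_first=True)
        fused_dim = cnn_channels * len(kernel_sizes) + 2 * lstm_hidden
        # Multi-label head: one logit per candidate intent.
        self.classifier = nn.Linear(fused_dim, num_intents)

    def forward(self, bert_embeddings):
        # bert_embeddings: (batch, seq_len, bert_dim) token vectors from BERT.
        x = bert_embeddings.transpose(1, 2)               # (batch, dim, seq)
        cnn_feats = torch.cat(
            [torch.relu(conv(x)).max(dim=2).values for conv in self.convs],
            dim=1,
        )                                                 # local features at several scales
        h, _ = self.bilstm(bert_embeddings)               # (batch, seq, 2 * lstm_hidden)
        a, _ = self.attn(h, h, h)                         # attend to key contextual positions
        lstm_feats = a.mean(dim=1)                        # pool attended states
        fused = torch.cat([cnn_feats, lstm_feats], dim=1) # concatenate the two branches
        return self.classifier(fused)                     # raw logits for multi-intent labels
```

In this reading, the logits would be passed through a sigmoid and thresholded per intent, since a multi-intent question can activate several labels at once.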