Abstract
State-of-the-art methods for metonymy resolution (MR) consider the sentential context by modeling the entire sentence. However, informative cues such as the entity representation and the syntactic structure may be especially beneficial for identifying metonymy, and approaches that rely solely on deep neural networks fail to capture such information. To leverage both entity and syntactic constraints, this paper proposes a robust model, EBAGCN, for metonymy resolution. First, the model extracts syntactic dependency relations under the guidance of syntactic knowledge. It then constructs a neural network that incorporates both the entity representation and the syntactic structure into improved resolution representations. In this way, the proposed model alleviates the impact of noisy information from the entire sentence and overcomes the performance limits on complicated texts. Experiments on the SemEval and ReLocaR datasets show that the proposed model significantly outperforms the state-of-the-art BERT method by more than 4%. Ablation tests demonstrate that leveraging these two types of constraints benefits fine-tuned pre-trained language models in the MR task.