ARTICLE

TITLE

A Combined Semantic Dependency and Lexical Embedding RoBERTa Model for Grid Field Relational Extraction

Qi Meng, Xixiang Zhang, Yun Dong, Yan Chen and Dezhao Lin

Abstract

Relation extraction is a crucial step in constructing a knowledge graph. In this research, entity relation extraction in the grid field was performed with a labeling approach based on span representation, in which the subject entity and object entity were paired as training instances to strengthen the link between them. The embedding layer of the RoBERTa pre-trained model combines word embedding, position embedding, and paragraph embedding information. In addition, semantic dependency information was introduced to establish effective links between different entities, and an additional lexical-label embedding was added so that the model could capture deeper semantic information. On top of this embedding layer, the RoBERTa model performed multi-task learning of entities and relations, and the multi-task information was fused through a hard parameter-sharing mechanism. Finally, a fully connected layer produced the predicted entity relations. The approach was evaluated on a grid-field dataset created for this study, and the results show that the proposed model achieves high performance.
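To make the architecture described above concrete, the following is a minimal PyTorch/Transformers sketch of a hard-parameter-sharing setup: one shared RoBERTa encoder, an extra lexical-label embedding added to the token representations, and two task-specific heads for entity labeling and relation classification. The label counts, the lexical-tag vocabulary, and the way the subject/object spans are pooled are illustrative assumptions, not the authors' exact design, and the semantic-dependency features are omitted for brevity.

```python
import torch
import torch.nn as nn
from transformers import AutoModel


class MultiTaskRobertaRE(nn.Module):
    """Sketch of multi-task entity/relation extraction with hard parameter sharing.

    A single RoBERTa encoder is shared by both tasks; a lexical-label
    (e.g. POS-tag) embedding is summed with the encoder output before
    the task heads. Sizes below are hypothetical placeholders.
    """

    def __init__(self, model_name="roberta-base",
                 num_lex_tags=32, num_entity_labels=9, num_relations=12):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)   # shared parameters
        hidden = self.encoder.config.hidden_size
        # Additional lexical-label embedding fused with the RoBERTa representation
        self.lex_embed = nn.Embedding(num_lex_tags, hidden)
        # Task-specific heads on top of the shared encoder (hard sharing)
        self.entity_head = nn.Linear(hidden, num_entity_labels)
        self.relation_head = nn.Linear(2 * hidden, num_relations)

    def forward(self, input_ids, attention_mask, lex_tag_ids,
                subj_index, obj_index):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Fuse contextual and lexical information token by token
        h = out.last_hidden_state + self.lex_embed(lex_tag_ids)
        entity_logits = self.entity_head(h)                     # per-token entity labels
        # Concatenate subject and object token representations for relation scoring
        batch = torch.arange(h.size(0), device=h.device)
        pair = torch.cat([h[batch, subj_index], h[batch, obj_index]], dim=-1)
        relation_logits = self.relation_head(pair)
        return entity_logits, relation_logits


# Example forward pass with dummy inputs (batch of 2, sequence length 16)
if __name__ == "__main__":
    model = MultiTaskRobertaRE()
    ids = torch.randint(5, 100, (2, 16))
    mask = torch.ones_like(ids)
    lex = torch.randint(0, 32, (2, 16))
    subj = torch.tensor([1, 3])
    obj = torch.tensor([7, 10])
    ent_logits, rel_logits = model(ids, mask, lex, subj, obj)
    print(ent_logits.shape, rel_logits.shape)  # (2, 16, 9) and (2, 12)
```

In this sketch, the two heads are trained jointly (for example, by summing an entity-labeling loss and a relation-classification loss), which is what hard parameter sharing amounts to in practice: both tasks update the same encoder weights while keeping their own output layers.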