Abstract
Neural machine translation has achieved strong results overall, but still needs improvement for low-resource and domain-specific translation. To this end, this paper proposes incorporating source-language syntactic information into neural machine translation models. Two approaches, namely Contrastive Language-Image Pre-training (CLIP) and Cross-attention Fusion (CAF), were compared against a baseline Transformer model on EN→ZH and ZH→EN machine translation in the electrical engineering domain. In addition, an ablation study on the effect of both proposed methods is presented. Of the two, the CLIP pre-training method improved significantly over the baseline system, raising BLEU scores on the EN→ZH and ZH→EN tasks by 3.37 and 3.18 percentage points, respectively.
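To make the Cross-attention Fusion idea concrete, the following is a minimal illustrative sketch, not the paper's actual implementation: it assumes PyTorch, a standard Transformer hidden size, and that source-side syntactic features (e.g., embedded POS tags or dependency labels) are fused into the encoder states through a multi-head cross-attention sublayer. All module names, dimensions, and design choices here are assumptions made for illustration.

```python
from typing import Optional

import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Fuse syntactic embeddings into token states via multi-head cross-attention.

    Queries come from the translation encoder's token states; keys and values
    come from a separate syntactic encoding of the same source sentence.
    """

    def __init__(self, d_model: int = 512, n_heads: int = 8, dropout: float = 0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout,
                                          batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self,
                token_states: torch.Tensor,          # (batch, src_len, d_model)
                syntax_states: torch.Tensor,         # (batch, src_len, d_model)
                syntax_pad_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
        # Cross-attention: each token attends over the syntactic representation.
        fused, _ = self.attn(query=token_states, key=syntax_states,
                             value=syntax_states,
                             key_padding_mask=syntax_pad_mask)
        # Residual connection + layer norm, as in a standard Transformer sublayer.
        return self.norm(token_states + self.dropout(fused))


if __name__ == "__main__":
    batch, src_len, d_model = 2, 10, 512
    tokens = torch.randn(batch, src_len, d_model)   # encoder hidden states
    syntax = torch.randn(batch, src_len, d_model)   # e.g., embedded POS tags
    out = CrossAttentionFusion(d_model)(tokens, syntax)
    print(out.shape)  # torch.Size([2, 10, 512])
```

In this sketch the fusion block is a drop-in sublayer, so it could sit after any encoder layer of a baseline Transformer; where and how often it is applied in the paper's CAF model is not specified by the abstract.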