Atnafu Lambebo Tonja, Olga Kolesnikova, Alexander Gelbukh and Grigori Sidorov
Despite many proposals to address the neural machine translation (NMT) problem for low-resource languages, it remains difficult. The issue becomes even more complicated when the few available resources cover only a single domain. In this paper, we discuss the...
Jungha Son and Boyoung Kim
The rapid global expansion of ChatGPT, which plays a crucial role in interactive knowledge sharing and translation, underscores the importance of comparative performance assessments in artificial intelligence (AI) technology. This study concentrated on t...
Célia Tavares, Luciana Oliveira, Pedro Duarte and Manuel Moreira da Silva
According to a recent study by OpenAI, OpenResearch, and the University of Pennsylvania, large language models (LLMs) based on artificial intelligence (AI), such as generative pretrained transformers (GPTs), may have implications for the job m...
Zohreh Madhoushi, Abdul Razak Hamdan and Suhaila Zainudin
Advancements in text representation have produced many deep language models (LMs), such as Word2Vec and recurrent-based LMs. However, works that focus on detecting implicit sentiment with a small amount of labelled data are scarce because there ar...
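As a pointer for readers unfamiliar with the distributed word representations mentioned above, the following is a minimal sketch of training Word2Vec embeddings with the gensim library; the toy corpus and hyperparameter values are illustrative assumptions, not the data or settings used in the cited study.

```python
# Minimal sketch: training Word2Vec embeddings with gensim.
# The toy corpus and hyperparameters are illustrative placeholders only.
from gensim.models import Word2Vec

corpus = [
    ["the", "service", "was", "painfully", "slow"],
    ["the", "food", "arrived", "cold", "again"],
    ["great", "atmosphere", "and", "friendly", "staff"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=100,   # dimensionality of the embeddings
    window=5,          # context window size
    min_count=1,       # keep all tokens in this tiny example
    sg=1,              # 1 = skip-gram, 0 = CBOW
)

# Look up the learned vector for a token and its nearest neighbours.
vector = model.wv["food"]
print(model.wv.most_similar("food", topn=3))
```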
Séamus Lankford, Haithem Afli and Andy Way
In this study, a human evaluation is carried out on how hyperparameter settings impact the quality of Transformer-based Neural Machine Translation (NMT) for the low-resourced English-Irish pair. SentencePiece models using both Byte Pair Encoding (BPE) an...
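For context on the subword segmentation mentioned here, the snippet below is a minimal sketch of training SentencePiece models with both the BPE and unigram model types on a plain-text corpus; the corpus file name and vocabulary size are assumptions for illustration, not the settings used in the study.

```python
# Minimal sketch: training SentencePiece subword models with BPE and unigram
# segmentation. The corpus path and vocabulary size are illustrative assumptions.
import sentencepiece as spm

for model_type in ("bpe", "unigram"):
    spm.SentencePieceTrainer.train(
        input="train.en-ga.txt",          # hypothetical training text
        model_prefix=f"enga_{model_type}",
        vocab_size=16000,
        model_type=model_type,
    )

# Segment a sentence with the trained BPE model.
sp = spm.SentencePieceProcessor(model_file="enga_bpe.model")
print(sp.encode("Neural machine translation for Irish", out_type=str))
```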
Arda Tezcan and Bram Bulté
Previous research has shown that simple methods of augmenting machine translation training data and input sentences with translations of similar sentences (or fuzzy matches), retrieved from a translation memory or bilingual corpus, lead to considerable i...
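To make the idea of fuzzy matching concrete, here is a minimal sketch of retrieving the most similar translation memory entry for an input sentence using a character-based similarity ratio; the TM contents, similarity measure, and threshold are illustrative choices, not necessarily those used in the paper.

```python
# Minimal sketch: retrieving a fuzzy match from a translation memory using a
# character-based similarity ratio (difflib). All data and thresholds are
# illustrative assumptions.
from difflib import SequenceMatcher

translation_memory = [
    ("Press the start button.", "Druk op de startknop."),
    ("Press the stop button.", "Druk op de stopknop."),
    ("Close the cover before use.", "Sluit het deksel voor gebruik."),
]

def best_fuzzy_match(source, tm, threshold=0.6):
    """Return the most similar (source, target) TM pair above the threshold."""
    scored = [
        (SequenceMatcher(None, source, tm_src).ratio(), tm_src, tm_tgt)
        for tm_src, tm_tgt in tm
    ]
    score, tm_src, tm_tgt = max(scored)
    return (tm_src, tm_tgt, score) if score >= threshold else None

print(best_fuzzy_match("Press the red start button.", translation_memory))
```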
Yan Zeng, Jiyang Wu, Jilin Zhang, Yongjian Ren and Yunquan Zhang
Deep learning, with increasingly large datasets and complex neural networks, is widely used in computer vision and natural language processing. A resulting trend is to split and train large-scale neural network models across multiple devices in parallel,...
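As an illustration of training across multiple devices, the snippet below is a minimal data-parallel sketch in PyTorch; it shows the general pattern of replicating a model across GPUs and splitting each batch, not the specific parallelisation strategy proposed by the authors.

```python
# Minimal sketch of data-parallel training in PyTorch. This illustrates the
# general pattern only; it is not the parallelisation scheme from the paper.
import torch
import torch.nn as nn

# nn.DataParallel replicates the model on all visible GPUs and splits each
# batch across them; gradients are gathered back on the default device.
model = nn.Linear(512, 10)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to("cuda" if torch.cuda.is_available() else "cpu")

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on random data.
device = next(model.parameters()).device
x = torch.randn(64, 512, device=device)
y = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```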
Arda Tezcan, Bram Bulté and Bram Vanroy
We identify a number of aspects that can boost the performance of Neural Fuzzy Repair (NFR), an easy-to-implement method to integrate translation memory matches and neural machine translation (NMT). We explore various ways of maximising the added value o...
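The core idea behind Neural Fuzzy Repair, as described in the authors' earlier work, is to append the target side of a retrieved fuzzy match to the source sentence before feeding it to the NMT system. The sketch below illustrates that input-augmentation step; the separator token and the retrieval step are assumptions made for the example.

```python
# Minimal sketch of NFR-style input augmentation: the target side of a fuzzy
# match is concatenated to the source sentence with a separator token before
# translation. The separator token and retrieval step are illustrative assumptions.
SEPARATOR = " <SEP> "  # hypothetical separator token added to the NMT vocabulary

def augment_source(source, fuzzy_match_target=None):
    """Build the augmented NMT input from a source sentence and an optional
    fuzzy-match target translation."""
    if fuzzy_match_target is None:
        return source
    return source + SEPARATOR + fuzzy_match_target

# The fuzzy-match target would come from a TM retrieval step such as the one
# sketched earlier in this listing.
print(augment_source("Press the red start button.", "Druk op de stopknop."))
```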
Yajuan Wang, Xiao Li, Yating Yang, Azmat Anwar and Rui Dong
Both the statistical machine translation (SMT) model and the neural machine translation (NMT) model are representative models for Uyghur-Chinese machine translation tasks, each with its own merits. Thus, it is a promising direction to combine the advanta...
Rebecca Webster, Margot Fonteyne, Arda Tezcan, Lieve Macken and Joke Daems
Due to the growing success of neural machine translation (NMT), many have started to question its applicability within the field of literary translation. In order to grasp the possibilities of NMT, we studied the output of the neural machine translation system of Go...