Informatics, Vol. 6, No. 1 (2019) — Article

Creating a Multimodal Translation Tool and Testing Machine Translation Integration Using Touch and Voice

Carlos S. C. Teixeira
Joss Moorkens
Daniel Turner
Joris Vreeke
Andy Way

Abstract

Commercial software tools for translation have, until now, been based on the traditional input modes of keyboard and mouse, with a small amount of speech recognition input more recently becoming popular. To test whether a greater variety of input modes might aid translation from scratch, translation using translation memories, or machine translation post-editing, we developed a web-based translation editing interface that permits multimodal input via touch-enabled screens and speech recognition in addition to keyboard and mouse. The tool also conforms to web accessibility standards. This article describes the tool and its development process over several iterations. Between these iterations we carried out two usability studies, also reported here. Findings were promising, albeit somewhat inconclusive. Participants liked the tool and the speech recognition functionality. Reactions to the touchscreen were mixed, and further research may be required to incorporate touch into a translation interface in a usable way.