Speaking your translation: Students' first encounter with speech recognition technology

Barbara Dragsted, Inger Margrethe Mees, Inge Gorm Hansen

Abstract


In this article we discuss the translation processes and products of 14 MA students who produced translations under three different working conditions: written translation, sight translation, and sight translation with a speech recognition (SR) tool. Audio output and keystrokes were recorded and participants’ eye movements were tracked in all three modes. Oral and written translation data were examined with respect to task times, cognitive load (as evidenced by fixation durations) and the quality of the output. Although task times were highest in written translation, the quality was not consistently better. Cognitive load was highest in spoken translation. In addition, we examined the number and types of errors that occurred when using the SR software. Items that were misrecognised by the program could be divided into four categories: homophones, hesitations, incorrectly pronounced words, and limitations of the software. Well over 50% of the errors were caused by students’ mispronunciations, leading us to conclude that it is essential to work on features of pronunciation in order to enhance speech recognition. In general, we found that both in terms of time savings and pedagogical benefits, SR technology seems to be a promising tool for future translators.

Keywords


translation processes; oral and written translation; sight translation; speech recognition software; eye-tracking; translation quality; cognitive load; pronunciation
