Linguistics, Vol. 21, No. 2, 2020.
Original scientific paper
https://doi.org/10.29162/jez.2020.8
THE IMPACT OF RESPONDENTS’ MULTILINGUALISM ON HUMAN EVALUATION OF MACHINE TRANSLATION QUALITY
Sandra Ljubas
Sveučilište u Zadru
Abstract
This paper presents a study of the impact of multilingualism on the subjective method of evaluating machine translation quality. The subjectivity of this method typically manifests itself in a low level of inter-coder agreement. In this preliminary study, two groups of human judges, the first comprising monolingual and the second bilingual respondents, evaluated the accuracy and fluency of the same set of machine-translated text segments. The segments were translated with Google Translate. The monolingual respondents compared the MT-generated output with a human translation, while the bilingual respondents compared it with the original text. The aim of the study was to determine how the differences between monolingual and bilingual respondents shape evaluation patterns, specifically with respect to the duration of the evaluation process, potential deviations in the median evaluation score, and the causes underlying these evaluation discrepancies. The qualitative analysis showed that bilingual respondents generally assign lower scores to the output, yet need more time to complete the evaluation than monolinguals. However, no tendency towards a higher level of inter-coder agreement was observed in either group of human judges.
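As an illustration only (not taken from the paper, which does not specify its agreement statistic), inter-coder agreement between two judges rating the same segments is often quantified with Cohen's kappa, and group-level scoring tendencies can be compared via the median score. The sketch below uses invented 1-5 fluency ratings and hypothetical judge names purely to show how such a comparison might be computed.

# Illustrative sketch, assuming two judges per group rate the same ten
# MT segments on a 1-5 scale; all ratings below are invented.
from collections import Counter
from statistics import median

def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa for two judges rating the same items."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    # Chance agreement if both judges rated independently at their own base rates.
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a | freq_b)
    return (observed - expected) / (1 - expected)

# Hypothetical fluency scores (1 = worst, 5 = best) for ten segments.
monolingual_judge_1 = [4, 5, 3, 4, 4, 5, 2, 4, 3, 5]
monolingual_judge_2 = [4, 4, 3, 5, 4, 5, 3, 4, 3, 4]
bilingual_judge_1 = [3, 4, 2, 3, 4, 4, 2, 3, 2, 4]
bilingual_judge_2 = [2, 4, 3, 3, 3, 4, 2, 4, 2, 3]

print("kappa (monolingual):", round(cohens_kappa(monolingual_judge_1, monolingual_judge_2), 3))
print("kappa (bilingual):", round(cohens_kappa(bilingual_judge_1, bilingual_judge_2), 3))
print("median score (monolingual):", median(monolingual_judge_1 + monolingual_judge_2))
print("median score (bilingual):", median(bilingual_judge_1 + bilingual_judge_2))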
Keywords
evaluation; machine translation; multilingualism; subjective evaluation
Hrčak ID: 242868
Publication date: 26 August 2020