Using Mechanical Turk to Build Machine Translation Evaluation Sets
Date
2010-06
Citation
Michael Bloodgood and Chris Callison-Burch. 2010. Using Mechanical Turk to build machine translation evaluation sets. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, pages 208-211, Los Angeles, California, June. Association for Computational Linguistics.
Abstract
Building machine translation (MT) test sets is a relatively expensive task. As MT becomes desirable for an ever-growing number of language pairs and domains, it becomes necessary to build a test set for each case. In this paper, we investigate using Amazon's Mechanical Turk (MTurk) to create MT test sets cheaply. We find that MTurk can be used to produce test sets at much lower cost than professionally produced test sets. More importantly, in experiments with multiple MT systems, we find that the MTurk-produced test sets yield essentially the same conclusions about system performance as the professionally produced test sets.
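The abstract's central claim is that the crowdsourced and professional test sets lead to the same conclusions about relative system performance. The sketch below (not taken from the paper) illustrates one way such a check could be made: score several systems against each set of references with BLEU and compare the resulting system rankings. The file names, the choice of sacrebleu, and the use of Spearman rank correlation are assumptions for illustration only.

```python
# Minimal sketch: do two reference sets (professional vs. MTurk) rank
# MT systems the same way? All file names below are hypothetical.
import sacrebleu
from scipy.stats import spearmanr

systems = ["system_a", "system_b", "system_c"]  # hypothetical system outputs


def bleu_scores(reference_file):
    """Score every system's output against one set of reference translations."""
    with open(reference_file, encoding="utf-8") as f:
        refs = [f.read().splitlines()]  # sacrebleu expects a list of reference streams
    scores = []
    for name in systems:
        with open(f"{name}.hyp", encoding="utf-8") as f:
            hyps = f.read().splitlines()
        scores.append(sacrebleu.corpus_bleu(hyps, refs).score)
    return scores


pro = bleu_scores("professional.ref")  # professionally produced references
turk = bleu_scores("mturk.ref")        # MTurk-produced references

# If the two test sets support the same conclusions, the system rankings
# they induce should correlate strongly.
rho, _ = spearmanr(pro, turk)
print("Spearman rank correlation between the two test sets:", rho)
```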