Using Mechanical Turk to Build Machine Translation Evaluation Sets

dc.contributor.author: Bloodgood, Michael
dc.contributor.author: Callison-Burch, Chris
dc.date.accessioned: 2014-08-06T01:17:43Z
dc.date.available: 2014-08-06T01:17:43Z
dc.date.issued: 2010-06
dc.description.abstract: Building machine translation (MT) test sets is a relatively expensive task. As MT becomes increasingly desired for more and more language pairs and more and more domains, it becomes necessary to build test sets for each case. In this paper, we investigate using Amazon's Mechanical Turk (MTurk) to make MT test sets cheaply. We find that MTurk can be used to make test sets much cheaper than professionally-produced test sets. More importantly, in experiments with multiple MT systems, we find that the MTurk-produced test sets yield essentially the same conclusions regarding system performance as the professionally-produced test sets yield.
dc.description.sponsorship: This research was supported by the EuroMatrix-Plus project funded by the European Commission, by the DARPA GALE program under Contract No. HR0011-06-2-0001, and the NSF under grant IIS-0713448. Thanks to Amazon Mechanical Turk for providing a $100 credit.
dc.identifier.citation: Michael Bloodgood and Chris Callison-Burch. 2010. Using mechanical turk to build machine translation evaluation sets. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, pages 208-211, Los Angeles, California, June. Association for Computational Linguistics.
dc.identifier.uri: http://hdl.handle.net/1903/15551
dc.language.iso: en_US
dc.publisher: Association for Computational Linguistics
dc.relation.isAvailableAt: Center for Advanced Study of Language
dc.relation.isAvailableAt: Digital Repository at the University of Maryland
dc.relation.isAvailableAt: University of Maryland (College Park, Md)
dc.subject: computer science
dc.subject: statistical methods
dc.subject: artificial intelligence
dc.subject: computational linguistics
dc.subject: natural language processing
dc.subject: human language technology
dc.subject: machine translation
dc.subject: statistical machine translation
dc.subject: machine translation evaluation
dc.subject: crowdsourcing
dc.subject: Amazon Mechanical Turk
dc.subject: cost-efficient annotation
dc.subject: annotation costs
dc.subject: annotation bottleneck
dc.subject: translation costs
dc.subject: Urdu-English translation
dc.title: Using Mechanical Turk to Build Machine Translation Evaluation Sets
dc.type: Article

Files

Original bundle

Name: crowdsourcingMachineTranslationEvaluationNAACLWorkshop2010.pdf
Size: 95.38 KB
Format: Adobe Portable Document Format