Wednesday, October 6, 2010

Choosing the Right Evaluation for Machine Translation

Time: Noon on Tuesday, October 12
Place: GHC 6501 (usual location)

Title: Choosing the Right Evaluation for Machine Translation: An Examination of Annotator and Automatic Metric Performance on Human Judgment Tasks
Authors: Michael Denkowski and Alon Lavie

Abstract:
This work examines the motivation, design, and practical results of several types of human evaluation tasks for machine translation. In addition to considering annotator performance and task informativeness over multiple evaluations, we explore the practicality of tuning automatic evaluation metrics to each judgment type in a comprehensive experiment using the METEOR metric. We present results showing clear advantages of tuning to certain types of judgments and discuss causes of inconsistency when tuning to various judgment data, as well as sources of difficulty in the human evaluation tasks themselves.
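For readers curious what "tuning a metric to human judgments" involves in practice, here is a minimal sketch: search over a metric's parameters for the setting whose segment-level scores correlate best with human judgments. This is an illustration only, not the procedure from the paper; the scores, the single alpha parameter, and the grid below are made up, though the weighted harmonic mean is the general form of METEOR's Fmean component.

    # Sketch: tune one metric parameter against human judgments.
    # All data and the lone "alpha" parameter are hypothetical;
    # METEOR's actual tuning searches several parameters over real judgment data.
    from scipy.stats import kendalltau

    # Hypothetical per-segment precision/recall components of a metric.
    precisions = [0.7, 0.4, 0.9, 0.5]
    recalls    = [0.6, 0.5, 0.8, 0.3]
    # Hypothetical human judgments (e.g., adequacy) for the same segments.
    human      = [4.0, 2.0, 5.0, 1.0]

    def metric_score(p, r, alpha):
        # Weighted harmonic mean of precision and recall,
        # the general form of METEOR's Fmean term.
        if p == 0 or r == 0:
            return 0.0
        return p * r / (alpha * p + (1 - alpha) * r)

    best_alpha, best_tau = None, float("-inf")
    for alpha in [i / 10 for i in range(11)]:  # grid over [0, 1]
        scores = [metric_score(p, r, alpha) for p, r in zip(precisions, recalls)]
        tau, _ = kendalltau(scores, human)     # rank correlation with humans
        if tau > best_tau:
            best_alpha, best_tau = alpha, tau

    print(f"best alpha = {best_alpha}, Kendall tau = {best_tau:.3f}")

The point of the paper, roughly, is that the "human" column above can come from several different kinds of judgment tasks, and the choice matters for what the tuned metric learns.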

This work will be presented at AMTA 2010.