Wednesday, December 9, 2009


Kenneth Heafield and Michael Denkowski: Features for System Combination
(This is work done as an MT lab project.)

Michael will give an update on his recent work on the METEOR MT evaluation metric.

10 Dec 2009, Thursday, 12:00-1:30, in GHC 6501

Monday, November 9, 2009

Lori's talk

Speaker: Lori Levin
Where: GHC 6501
When: Nov 09, 2009 - Monday - Noon
Title: A Pendulum Swung Too Far
This paper by Ken Church deals with the never-ending battle between Empiricism and Rationalism,
especially its incarnation in NLP.
Lori will summarize and present the arguments formulated in the
paper. She will then continue with her own views on why linguistics
needs to be brought back into NLP and MT in particular.

Monday, August 10, 2009

Two talks

Talk 1:
Nguyen Bach: Source-side Dependency Tree Reordering Models with Subtree Movements and Constraints

Abstract: We propose a novel source-side dependency tree reordering model for statistical machine translation, in which subtree movements and constraints are represented as reordering events associated with the widely used lexicalized reordering models. This model allows us not only to efficiently capture the statistical distribution of subtree-to-subtree transitions in the training data, but also to utilize it directly at decoding time to guide the search process. Using subtree movements and constraints as features in a log-linear model, we are able to help the reordering models make better selections. It also allows the subtle importance of monolingual syntactic movements to be learned alongside other reordering features. We show improvements in translation quality on English-Spanish and English-Iraqi translation tasks.

This is joint work with Qin Gao and Stephan Vogel.
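As a rough illustration of treating subtree movements as reordering events, here is a minimal sketch in the style of lexicalized reordering models. The orientation classes (monotone/swap/discontinuous) are the standard ones; the spans and weights below are invented for illustration and are not taken from the paper.

```python
# Toy sketch of reordering events scored in a log-linear model.
# Spans are (start, end) target-side positions listed in source order;
# the orientation names follow standard lexicalized reordering models.

def orientation(prev_tgt_span, cur_tgt_span):
    """Classify the transition between consecutive target-side spans."""
    if cur_tgt_span[0] == prev_tgt_span[1] + 1:
        return "monotone"        # current span follows the previous one
    if cur_tgt_span[1] == prev_tgt_span[0] - 1:
        return "swap"            # current span immediately precedes it
    return "discontinuous"

def reordering_score(tgt_spans, weights):
    """Log-linear contribution: sum of weights for observed reordering events."""
    return sum(weights[orientation(p, c)] for p, c in zip(tgt_spans, tgt_spans[1:]))

# Invented weights and spans, purely for illustration.
weights = {"monotone": 0.5, "swap": -0.25, "discontinuous": -1.0}
spans = [(0, 1), (2, 4), (7, 8), (5, 6)]
print(reordering_score(spans, weights))  # 0.5 - 1.0 - 0.25 = -0.75
```

During decoding, such event weights would be one feature among many in the log-linear model, tuned alongside the usual translation and language model features.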

Talk 2:
Francisco (Paco) Guzman: Reassessment of the Role of Phrase Extraction in SMT

Abstract: In this paper we study in detail the relation between word alignment and phrase extraction. First, we analyze different word alignments according to several characteristics and compare them to hand-aligned data. Then, we analyze the phrase pairs generated by these alignments. We observed that the number of unaligned words has a large impact on the characteristics of the phrase table. A manual evaluation of phrase-pair quality showed that an increase in the number of unaligned words results in lower quality. Finally, we present translation results using the number of unaligned words as features, from which we obtain up to 2 BLEU points of improvement.

This is joint work with Qin Gao and Stephan Vogel.
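To make the unaligned-word effect concrete, here is a sketch of the standard consistency-based phrase-pair extraction algorithm (not the paper's exact implementation): a target word left unaligned can attach to consistent phrases on either side, which enlarges the phrase table.

```python
# Sketch of standard phrase-pair extraction from a word alignment.
# Alignment links are (src_pos, tgt_pos) pairs; spans are inclusive.

def extract_phrases(n_src, n_tgt, alignment, max_len=4):
    """Return all (src_span, tgt_span) pairs consistent with the alignment."""
    aligned_tgt = {j for (_, j) in alignment}
    pairs = set()
    for i1 in range(n_src):
        for i2 in range(i1, min(i1 + max_len, n_src)):
            linked = [j for (i, j) in alignment if i1 <= i <= i2]
            if not linked:
                continue
            j1, j2 = min(linked), max(linked)
            # consistency: no link from inside the target span to outside the source span
            if any(j1 <= j <= j2 and not (i1 <= i <= i2) for (i, j) in alignment):
                continue
            # grow the target span over adjacent unaligned words
            lo = j1
            while True:
                hi = j2
                while True:
                    if hi - lo < max_len:
                        pairs.add(((i1, i2), (lo, hi)))
                    hi += 1
                    if hi >= n_tgt or hi in aligned_tgt:
                        break
                lo -= 1
                if lo < 0 or lo in aligned_tgt:
                    break
    return pairs

# A fully aligned 3x3 sentence pair vs. one with an unaligned target word:
full = {(0, 0), (1, 1), (2, 2)}
sparse = {(0, 0), (2, 2)}  # target word 1 left unaligned
print(len(extract_phrases(3, 3, full)))    # 6
print(len(extract_phrases(3, 3, sparse)))  # 9: the unaligned word attaches both ways
```

Dropping a single alignment link here grows the extracted phrase set from 6 to 9 pairs, which is the kind of phrase-table inflation the abstract describes.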

Monday, June 15, 2009

Making Disfluent Output Slightly Less So: MT System Combination Search Spaces and Optimization

Speaker: Kenneth Heafield

Title: Making Disfluent Output Slightly Less So:
MT System Combination Search Spaces and Optimization

Abstract: System combination merges several machine translation outputs
into a single improved sentence. This talk starts by summarizing the
approach, including a search space derived from the alignments and
hypothesis scoring. The current search space focuses on picking words
in a roughly word synchronous way. Another search space under development
builds a directed graph in which aligned words correspond to a vertex and
each bigram corresponds to a directed edge. Search is conducted much like
a left-to-right MT decoder. Speed optimizations, which allow decoding at
5.5 sentences per second, apply to other MT systems in the areas of
duplicate handling, language model state, and multithreading. This speed
allows me to find hyperparameters by searching hundreds of parameter
combinations, each with a full round of tuning. In preparation for
last Friday's NIST submission, system combination improved 2.4 BLEU
points over the best component system for Urdu to English translation.
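The bigram-graph search space described above can be sketched as follows: words aligned across system outputs collapse into a single vertex, and each bigram in any output contributes a directed edge. The outputs and alignment groups here are invented toy data, not the actual combination pipeline.

```python
# Sketch of the bigram-graph search space for system combination.
# aligned_groups lists sets of (system_index, position) keys that
# should share one vertex; unaligned words get fresh vertices.

from collections import defaultdict

def build_graph(outputs, aligned_groups):
    """outputs: token lists per system; returns vertex -> successor-vertex sets."""
    vertex_of = {}
    for gid, group in enumerate(aligned_groups):
        for key in group:
            vertex_of[key] = gid
    next_free = len(aligned_groups)
    edges = defaultdict(set)
    for s, tokens in enumerate(outputs):
        prev = None
        for p, _tok in enumerate(tokens):
            v = vertex_of.get((s, p))
            if v is None:                      # unaligned word: new vertex
                v = next_free
                vertex_of[(s, p)] = v
                next_free += 1
            if prev is not None:
                edges[prev].add(v)             # bigram -> directed edge
            prev = v
    return edges

outputs = [["the", "cat", "sat"], ["the", "cat", "sits"]]
# "the" and "cat" are aligned across the two outputs; the verbs are not.
groups = [{(0, 0), (1, 0)}, {(0, 1), (1, 1)}]
edges = build_graph(outputs, groups)
print(sorted(edges[1]))  # the "cat" vertex fans out to both verb vertices
```

Search over such a graph can then proceed left to right like an ordinary MT decoder, scoring paths with a language model and system-confidence features.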

Tuesday, April 28, 2009

EBMT with external word alignment and chunk alignment

Title: EBMT with external word alignment and chunk alignment.

Who: Jae Dong Kim
When: Tuesday May 12, 12:00pm
Where: NSH 3305

Abstract: Since both EBMT and SMT are data-driven methods, more accurate word alignment improves system performance in EBMT just as it does in SMT. However, EBMT has focused on finding analogous examples, while SMT has achieved reasonably accurate word alignment. It is therefore natural to think that EBMT can benefit from SMT word alignment. In this talk, I will describe our approach to making use of more accurate external word alignment from SMT in our EBMT system. I will also present my preliminary results with chunk alignment for translation in EBMT.

Monday, April 13, 2009

Language Model Adaptation for Difficult to Translate Phrases

Presenter: Behrang Mohit
Title: Language Model Adaptation for Difficult to Translate Phrases
Date: Tuesday 12:30pm, 14 April 2009

We investigate the idea of adapting language models for phrases that
have poor translation quality. We apply a selective adaptation
criterion which uses a classifier to locate the most difficult phrase
of each source language sentence. A special adapted language model is
constructed for the highlighted phrase. Our adaptation heuristic uses
lexical features of the phrase to locate the relevant parts of the
parallel corpus for language model training. As we vary the
experimental setup by changing the size of the SMT training data, our
adaptation method consistently shows strong improvements over the
baseline systems.
This is joint work with Frank Liberato and Rebecca Hwa.
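The selection step of such an adaptation scheme can be sketched as follows: use lexical overlap with the difficult phrase to pull relevant target-side sentences out of the parallel corpus for adapted LM training. The overlap criterion and the toy corpus are stand-ins for the paper's actual heuristic and data.

```python
# Sketch of corpus selection for adapted LM training: keep target sentences
# whose source side shares words with the difficult source phrase.
# The overlap threshold is an illustrative stand-in for the real heuristic.

def select_for_adaptation(difficult_phrase, corpus, min_overlap=1):
    """Return target sentences whose source side overlaps the phrase lexically."""
    phrase_words = set(difficult_phrase.split())
    selected = []
    for src, tgt in corpus:
        if len(phrase_words & set(src.split())) >= min_overlap:
            selected.append(tgt)
    return selected

# Invented toy parallel corpus (Spanish-English).
corpus = [
    ("el banco central", "the central bank"),
    ("la orilla del rio", "the bank of the river"),
    ("un gato negro", "a black cat"),
]
print(select_for_adaptation("banco central", corpus))  # ['the central bank']
```

A small language model trained on only the selected sentences would then be interpolated with, or substituted for, the general LM when translating the highlighted phrase.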

Thursday, March 12, 2009

Moving Beyond Phrase-Pairs: Dynamically Scoring Collections of Translation Examples

Title: Moving Beyond Phrase-Pairs: Dynamically Scoring Collections of Translation Examples

Who: Aaron B. Phillips
When: Friday Mar 13, 12:00pm
Where: NSH 1507


Statistical Machine Translation has prospered because it is based on
models that are consistent and straightforward to optimize. The
log-linear model in particular allows the researcher to exploit
numerous, possibly dependent, features. However, the modeling approach
taken by SMT enforces a particular top-down view of the data using
phrase-pairs that does not easily allow for the integration of features
that may change from example to example. What I propose is a shift in
how the model is built. Inspired by Example-Based Machine Translation, I
calculate features for each example separately, but, as in SMT, this
information is collected into a single log-linear model that is
straightforward to optimize. This is accomplished by identifying at
run-time the most appropriate collection of translation examples instead
of using precomputed phrase-pairs. A search is performed over each
example-specific feature such as the alignment quality, genre, or
context to determine a collection that maximizes the score. The weights
for each example-specific feature are adjustable during optimization and
allow for a trade-off between forming collections over all the examples
and forming collections that consist of a few high-quality examples.
This framework seeks to unify the approaches of EBMT and SMT. It results
in a model that is straightforward to optimize *and* allows the
integration of novel example-specific features.
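A minimal sketch of scoring a collection of examples with example-specific features might look like the following. The feature names, weights, and example values are invented for illustration; the point is only that one log-linear form covers both a large, noisy collection and a small, high-quality one, with the trade-off controlled by tunable weights.

```python
import math

# Sketch of log-linear scoring over a collection of translation examples.
# Each example carries its own feature values (alignment quality, genre
# match, ...); a size term trades collection breadth against quality.

def collection_score(examples, weights):
    """Sum weighted example-specific features, plus a weighted size term."""
    score = weights["size"] * math.log(len(examples))
    for example in examples:
        for name, value in example.items():
            score += weights[name] * value
    return score

# Invented weights and examples, purely for illustration.
weights = {"size": 1.0, "align_quality": 2.0, "genre_match": 0.5}
big_noisy = [{"align_quality": 0.4, "genre_match": 0.0}] * 8
small_clean = [{"align_quality": 0.9, "genre_match": 1.0}] * 2
print(collection_score(big_noisy, weights) > collection_score(small_clean, weights))
```

Re-tuning the weights during optimization shifts which collection wins, which is the adjustable trade-off the abstract describes.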

Tuesday, February 3, 2009

An Overview of Tree-to-String Translation Models: Yang Liu

Speaker: Yang Liu
Title: An Overview of Tree-to-String Translation Models


Recent research on statistical machine translation has led to the rapid development of syntax-based translation models, in which syntactic information can be exploited to direct translation. In this talk, I will give an overview of tree-to-string translation models, one of the state-of-the-art families of syntax-based models. In a tree-to-string model, the source side is a phrase-structure parse tree and the target side is a string. This talk covers the following topics: (1) the naive tree-to-string model, (2) the tree-sequence-based tree-to-string model, (3) the context-aware tree-to-string model, and (4) the forest-based tree-to-string model. Experimental results show that the forest-based tree-to-string model significantly outperforms the hierarchical phrase-based model.
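The core tree-to-string idea can be sketched very roughly: a rule pairs a source tree pattern with a target-side ordering of its children, and translation walks the parse tree top-down. The grammar, tree, and reordering rule below are invented toys, not any of the models surveyed in the talk.

```python
# Toy sketch of tree-to-string translation: rules map a (label, child-labels)
# pattern to a target-side permutation of the children; string keys act as
# lexical rules. All rules and the input tree are invented for illustration.

def translate(tree, rules):
    """tree: (label, children) for nodes, or a plain string for leaves."""
    if isinstance(tree, str):
        return rules.get(tree, tree)          # lexical rule or pass-through
    label, children = tree
    pattern = (label, tuple(c[0] if isinstance(c, tuple) else c for c in children))
    if pattern in rules:
        # the rule gives the target-side order of the children (0-based indices)
        return " ".join(translate(children[i], rules) for i in rules[pattern])
    return " ".join(translate(c, rules) for c in children)  # default: monotone

rules = {
    ("NP", ("NN", "JJ")): [1, 0],   # NP(NN, JJ) -> JJ NN: swap noun and adjective
    "gato": "cat",
    "negro": "black",
}
tree = ("NP", [("NN", ["gato"]), ("JJ", ["negro"])])
print(translate(tree, rules))  # black cat
```

The models in the talk generalize this picture in different ways: tree-sequence rules span multiple subtrees, context-aware rules condition on surrounding nodes, and forest-based models apply rules to a packed forest of parses rather than a single tree.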

Short Bio:

Yang Liu is an Assistant Researcher at the Institute of Computing Technology, Chinese Academy of Sciences. He graduated in Computer Science from Wuhan University in 2002 and received his PhD degree in Computer Science from the Institute of Computing Technology, Chinese Academy of Sciences. His major research interests include statistical machine translation and Chinese information processing. His publications on discriminative word alignment and tree-to-string models have received wide attention. He has served as a PC member/reviewer for TALIP, ACL, EMNLP, AMTA, and SSST.

Tuesday, January 13, 2009

Parallel Treebanks in Machine Translation

Title: Parallel Treebanks in Machine Translation
Speaker: John Tinsley, Ph.D. student at the National Centre for Language Technology in DCU