Date: Apr 17, 2007
Presenters: Lori Levin and Alison Alvarez
Title: An Assessment of Language Elicitation without the Supervision of a Linguist
We created an elicitation corpus designed to elicit the morphosyntactic
features of a target language without the supervision of a linguist.
The corpus is composed of approximately 3200 English source sentences
that are then translated by a native speaker into the target language.
The design of our corpus was driven by our need to elicit morphosyntactic
language features without the supervision of a linguist. In a previous
paper we reported on a reverse treebank, in which each entry pairs a deep
morphosyntactic tree with two parallel human-language sentences: the first
is provided by reverse annotation and the second is acquired through
elicitation. This presentation will focus on the extent to which we were
able to acquire morphosyntactic information from our translated corpora,
and on the types of errors we encountered, both from the perspective of
the translator and of the corpus itself.
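To make the reverse treebank structure described above concrete, here is a minimal sketch in Python of what one entry might look like. The class name, the feature keys, and the example sentence are illustrative assumptions, not the corpus's actual schema; the point is only that each entry pairs a morphosyntactic feature structure with an English sentence from reverse annotation and a slot for the elicited translation.

    # Hypothetical sketch of a reverse treebank entry; the field names and
    # feature keys are assumptions for illustration, not the real corpus format.
    from dataclasses import dataclass

    @dataclass
    class ReverseTreebankEntry:
        features: dict        # deep morphosyntactic feature structure
        english: str          # source sentence produced by reverse annotation
        translation: str = "" # filled in later by the native-speaker translator

    corpus = [
        ReverseTreebankEntry(
            features={"subj-person": "1", "subj-number": "dual", "tense": "past"},
            english="The two of us walked.",
        ),
    ]

    # Elicitation step: a native speaker supplies the target-language sentence.
    corpus[0].translation = "..."  # elicited translation goes here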
Tuesday, November 21, 2006
Simulating Multiple Translations and ASR Transcripts for Applications in Multilingual Spoken Document Classification
Title: Simulating Multiple Translations and ASR Transcripts for Applications in Multilingual Spoken Document Classification
Speaker: Wei-Hao Lin from the Informedia group
Abstract:
We propose a statistical model to simulate multiple documents and
their translations (e.g. Chinese documents and their English
translations), and apply the model in the task of classifying
multilingual documents. The model, based on a frequency matching
principle, predicts that previous approaches to building classifiers
from a common language (e.g., English) are not optimal for
multilingual collections with unbalanced numbers of documents, and a
proposed multilingual representation can outperform the mono-lingual
bag-of-words representation. We also investigate the possibility of
combining multiple ASR transcripts and translations through
re-weighting. The validity of our model is strongly supported by
the close match between predictions of the simulation model and the
empirical results of classifying multilingual spoken documents from
broadcast news in three languages.
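As a rough illustration of the multilingual representation and re-weighting mentioned in the abstract, the sketch below combines per-language bag-of-words counts for a document and its translation, with a weight per language. The function names, the language-tag prefixing, and the weight values are assumptions for illustration only, not the model proposed in the talk.

    # Minimal sketch of a multilingual bag-of-words representation
    # (illustrative only; not the exact model from the talk).
    from collections import Counter

    def bag_of_words(text, lang):
        # Prefix tokens with a language tag so the vocabularies stay separate.
        return Counter(f"{lang}:{tok}" for tok in text.lower().split())

    def multilingual_features(versions, weights):
        """versions: {lang: text}; weights: {lang: float} for re-weighting."""
        combined = Counter()
        for lang, text in versions.items():
            w = weights.get(lang, 1.0)
            for token, count in bag_of_words(text, lang).items():
                combined[token] += w * count
        return combined

    doc = {
        "zh": "...",  # e.g. a Chinese ASR transcript (placeholder)
        "en": "the election results were announced today",  # English translation
    }
    features = multilingual_features(doc, weights={"zh": 1.0, "en": 0.5})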
Tuesday, October 17, 2006
Coupling of ASR+MT: Initial Experiments & Future Directions
Speaker: Ian Lane
Title: Tighter Coupling of ASR+MT: Initial Experiments & Future Directions
Abstract:
In this talk, I will first give a brief overview of my PhD work, entitled "Flexible Spoken Language Understanding based on Topic Classification and Domain Detection", and describe how the proposed approaches can be applied to applications other than speech-to-speech translation. I will then describe my current work, which focuses on improving the coupling between ASR and machine translation systems, specifically when applied to conversational speech. Finally, I will propose future directions, for which I hope to receive a large amount of feedback.
Tuesday, May 23, 2006
Anchor-Based Symmetric Probabilistic Alignment
Date: May 23, 2006
Presenter: Jae Dong Kim
Title: Anchor-Based Symmetric Probabilistic Alignment
Tuesday, April 18, 2006
Can the Internet help improve Machine Translation?
Date: April 18, 2006
Presenter: Ari Font-Llitjos
Title: Can the Internet help improve Machine Translation?