Learning Non-linear Features for Machine Translation Using Gradient Boosting Machines
Kristina Toutanova and Byunggyu Ahn
The 51st Annual Meeting of the Association for Computational Linguistics - Short Papers (ACL Short Papers 2013)
Sofia, Bulgaria, August 4-9, 2013
Abstract
In this paper we show how to automatically induce non-linear features for machine translation. The new features decompose on the level of local phrases, which guarantees that the asymptotic complexity of machine translation decoding does not increase. We achieve this by applying gradient boosting machines to learn new weak learners (features) in the form of regression trees, using a differentiable loss function related to BLEU. Our results indicate that small gains in performance can be achieved using this method, but we do not see the dramatic gains observed when feature induction is applied to other important machine learning tasks.
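To make the core idea concrete, the following is a minimal sketch of gradient boosting with regression-tree weak learners in the generic form the abstract refers to. It is not the authors' MT feature-induction setup: the toy squared-error loss, the synthetic data, and all function names are illustrative assumptions, whereas the paper uses a differentiable loss related to BLEU and features that decompose over local phrases.

```python
# Minimal sketch of gradient boosting with regression-tree weak learners.
# Illustrates only the generic boosting principle, not the paper's MT setup;
# the squared-error loss, toy data, and names here are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def fit_gradient_boosting(X, y, n_rounds=50, learning_rate=0.1, max_depth=2):
    """Fit an additive model F(x) = F0 + sum_m lr * h_m(x) by repeatedly
    fitting a small regression tree to the negative gradient of the loss
    (for squared error, the negative gradient is simply the residual)."""
    F0 = y.mean()
    F = np.full(len(y), F0)            # constant initial model
    trees = []
    for _ in range(n_rounds):
        residuals = y - F              # -dL/dF for squared-error loss
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)         # weak learner approximates the gradient
        F += learning_rate * tree.predict(X)
        trees.append(tree)
    return F0, trees


def predict(model, X, learning_rate=0.1):
    F0, trees = model
    return F0 + learning_rate * sum(t.predict(X) for t in trees)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 2))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 200)
    model = fit_gradient_boosting(X, y)
    print("train MSE:", np.mean((predict(model, X) - y) ** 2))
```

In the paper's setting, the role played here by the squared-error residuals is taken by the gradient of a differentiable, BLEU-related loss, and each induced tree becomes a new phrase-local feature added to the translation model.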