A Tale about PRO and Monsters
Preslav Nakov, Francisco Guzman and Stephan Vogel
The 51st Annual Meeting of the Association for Computational Linguistics - Short Papers (ACL Short Papers 2013)
Sofia, Bulgaria, August 4-9, 2013
Abstract
While exploring tuning on long sentences, we made an unexpected discovery: PRO can fall victim to monsters – overly long negative examples with very low BLEU+1 scores, which are unsuitable for learning; as a result, test-time BLEU drops by 3 points absolute. We propose several efficient ways to address the problem, using length- and BLEU+1-based cut-offs, outlier filters, stochastic sampling, and random acceptance. The best of these fixes not only slay monsters and protect against them, but also yield higher stability for PRO as well as improved test-time BLEU scores. Thus, we recommend them to anybody using PRO, monster-believer or not.
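To make the kind of fix described in the abstract concrete, the sketch below shows one way a length- and BLEU+1-based cut-off could screen PRO training pairs before learning. It is a minimal illustration, not the paper's implementation: the function names, data layout, and threshold values (max_len_ratio, min_bleu1) are assumptions chosen for readability.

from typing import Tuple

# A candidate is (hypothesis_length, bleu_plus_one_score); PRO learns from
# pairs of a higher-scoring and a lower-scoring candidate translation.
Candidate = Tuple[int, float]

def keep_pair(better: Candidate, worse: Candidate,
              ref_length: int,
              max_len_ratio: float = 2.0,
              min_bleu1: float = 0.01) -> bool:
    """Return True if the sampled pair is safe to use for PRO learning.

    A pair is rejected when its negative (lower-scoring) example looks like
    a monster: much longer than the reference and with a BLEU+1 score close
    to zero. Thresholds here are illustrative only.
    """
    worse_len, worse_bleu1 = worse
    too_long = worse_len > max_len_ratio * ref_length  # length-based cut-off
    too_bad = worse_bleu1 < min_bleu1                  # BLEU+1-based cut-off
    return not (too_long and too_bad)

if __name__ == "__main__":
    # A 90-token hypothesis with near-zero BLEU+1 against a 30-token
    # reference is treated as a monster and the pair is discarded.
    better = (32, 0.35)
    monster = (90, 0.004)
    print(keep_pair(better, monster, ref_length=30))  # False

In this sketch, filtering happens at pair-selection time, so downstream pairwise ranking is unchanged; the abstract's other remedies (outlier filters, stochastic sampling, random acceptance) would replace or relax this hard cut-off.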