The First International Joint Conference on Natural Language Processing (IJCNLP-04)
Workshop 2 is now cancelled.

Website: http://www.isi.edu/~cyl/msqa-eval-ijcnlp04/

Automatic summarization and question answering (QA) are enjoying a period of revival and are advancing at a much quicker pace than before. In the United States, TREC started an English QA track in 1999, and DUC, sponsored by NIST, started a new English summarization evaluation series in 2001. In Japan, the NTCIR project included a Japanese text summarization task in 2000 and a QA task in 2001. One major challenge of these large-scale evaluation efforts is how to evaluate summarization and QA systems systematically and automatically. In other words, is there a consistent and principled way to estimate the quality of any summarization or QA system accurately, and can the evaluation process be automated? The release of the Framework for Machine Translation Evaluation in ISLE (FEMTI) and the recent adoption of the automatic evaluation metric BLEU in the machine translation community are examples from which we might find leverage and extend to summarization and QA evaluation. This workshop focuses on automatic summarization and QA, and enables participants to discuss the integration of multiple languages and multiple functions and, most importantly, how to robustly estimate the quality of summarization and QA.
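As background on the kind of automatic metric mentioned above, the short Python sketch below illustrates the clipped n-gram precision idea that metrics such as BLEU build on: counting how many of a system output's n-grams also appear in a human reference. It is only a toy illustration, not the BLEU metric itself (which additionally combines several n-gram orders, multiple references, and a brevity penalty), and the function names and example texts are illustrative assumptions rather than anything specified by the workshop.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of the n-grams (as tuples) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def clipped_ngram_precision(candidate, reference, n=2):
    """Clipped n-gram precision of a candidate text against one reference.

    Each candidate n-gram is credited at most as many times as it
    occurs in the reference; the result is the credited count divided
    by the total number of candidate n-grams.
    """
    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    if not cand:
        return 0.0
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    return overlap / sum(cand.values())

if __name__ == "__main__":
    system_summary = "the cat sat on the mat"
    human_summary = "the cat was sitting on the mat"
    print(clipped_ngram_precision(system_summary, human_summary, n=1))  # unigram overlap: 5/6
    print(clipped_ngram_precision(system_summary, human_summary, n=2))  # bigram overlap: 3/5
```

The same overlap idea underlies much of the automatic summarization and QA evaluation work this workshop was intended to discuss, where the open question is how well such surface matches track the quality judgments of human assessors.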
Organizers: Chin-Yew Lin (USC/ISI, Los Angeles)
Paper submission deadline: January 19, 2004 (extended)