Qin Iris Wang
AT&T Interactive
Yue Zhang
University of Oxford
Overview
Data-driven (statistical) approaches have played an increasingly prominent role in parsing since the 1990s. In recent years, there has been growing interest in dependency-based, as opposed to constituency-based, approaches to syntactic parsing, with applications to a wide range of research areas and languages. Graph-based and transition-based methods are the two dominant data-driven approaches to dependency parsing. A graph-based model defines a space of candidate dependency trees for a given sentence and scores each candidate with a local or global scoring function; the parser, typically using dynamic programming, outputs the highest-scoring tree. In contrast, a transition-based model defines a transition system that maps a sentence to its dependency tree, and induces a model for predicting the next transition given the transition history. With the induced model, the output parse tree is built deterministically by constructing the optimal transition sequence.
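To make the transition-based half of this picture concrete, the sketch below implements a greedy arc-standard shift-reduce parser. It is a minimal illustration, not any published system: the function `score_transition` is a hypothetical stand-in for the induced model, which in a real parser would be a trained classifier over features of the parser state.

```python
# Minimal sketch of greedy arc-standard (shift-reduce) dependency parsing.
SHIFT, LEFT_ARC, RIGHT_ARC = "SHIFT", "LEFT-ARC", "RIGHT-ARC"

def score_transition(stack, buffer, heads, action):
    """Hypothetical stand-in for a trained classifier: a real model would
    score each action from features of the current parser state."""
    return 0.0

def legal_transitions(stack, buffer):
    actions = []
    if buffer:
        actions.append(SHIFT)
    if len(stack) >= 2:
        actions.append(RIGHT_ARC)
        if stack[-2] != 0:  # the artificial ROOT (index 0) never gets a head
            actions.append(LEFT_ARC)
    return actions

def parse(n_words):
    """Greedily parse a sentence of n_words tokens; token 0 is ROOT.
    Returns a dict mapping each word to its predicted head."""
    stack, buffer, heads = [0], list(range(1, n_words + 1)), {}
    while buffer or len(stack) > 1:
        action = max(legal_transitions(stack, buffer),
                     key=lambda a: score_transition(stack, buffer, heads, a))
        if action == SHIFT:           # move the next input word onto the stack
            stack.append(buffer.pop(0))
        elif action == LEFT_ARC:      # top of stack becomes head of second-top
            heads[stack[-2]] = stack[-1]
            del stack[-2]
        else:                         # RIGHT_ARC: second-top becomes head of top
            heads[stack[-1]] = stack[-2]
            stack.pop()
    return heads

print(parse(3))  # with the dummy scorer: {3: 2, 2: 1, 1: 0}
```

Richer transition-based parsers keep this same skeleton and change only the transition inventory (e.g. arc-eager) and the scorer (e.g. a globally trained model searched with a beam).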
Both graph-based and transition-based approaches have been used to achieve state-of-the-art dependency parsing results for a wide range of languages. Several researchers have combined the two models and shown that the combined model significantly outperforms either individual model. Another recent trend is to apply online training to shift-reduce parsing in transition-based models. In this tutorial, we first introduce the two mainstream data-driven dependency parsing models: graph-based and transition-based. After comparing their differences, we show how the two models can be combined in various ways to achieve better results.
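As a deliberately simplified illustration of one way the two models can be combined, the sketch below interpolates per-arc scores from a graph-based and a transition-based model. The score matrices and the weight `alpha` are hypothetical placeholders rather than any published system's setup.

```python
import numpy as np

def combine_arc_scores(graph_scores, transition_scores, alpha=0.5):
    """Arc-level ensemble: linearly interpolate two models' arc scores.
    Each input is a hypothetical (n+1) x (n+1) matrix whose entry [h, m]
    scores the arc from head h to modifier m (index 0 is the ROOT)."""
    return alpha * np.asarray(graph_scores) \
        + (1.0 - alpha) * np.asarray(transition_scores)
```

A combined parser would then decode the interpolated matrix with a tree-constrained search (such as the dynamic program sketched after the outline below), since naively picking each word's best-scoring head can yield cycles rather than a tree.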
Tutorial Outline
- Part A: Introduction to Dependency Parsing
- Part B: Graph-based Dependency Parsing Models
- Learning Algorithms (Local Learning vs. Global Learning)
- Parsing Algorithms (Dynamic Programming; see the sketch after this outline)
- Features (Static Features vs. Dynamic Features)
- Part C: Transition-based Dependency Parsing Models
- Learning Algorithms (Local Learning vs. Online Learning)
- Parsing Algorithms (Shift-reduce Parsing)
- Features
- Part D: The Combined Models
- The Stacking Method
- The Ensemble Method
- Single-model Combination
- Part E: Other Recent Trends in Dependency Parsing
- Integer Linear Programming
- Fast Non-Projective Parsing
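As a concrete reference for the dynamic-programming parsing listed under Part B, the following is a sketch of Eisner's O(n^3) algorithm for first-order (arc-factored) projective decoding. It assumes a precomputed arc-score matrix; the function names are ours, not from any particular toolkit.

```python
import numpy as np

def parse_projective(scores):
    """Eisner's algorithm: find the highest-scoring projective tree under an
    arc-factored model. scores[h, m] is the score of the arc h -> m, with
    index 0 the artificial ROOT. Returns heads (heads[0] = -1 for ROOT)."""
    n = scores.shape[0]
    # Spans are indexed [start s, end t, direction d]: d=1 means the head is
    # the left endpoint s, d=0 means the head is the right endpoint t.
    complete = np.zeros((n, n, 2))    # head takes no more children inside the span
    incomplete = np.zeros((n, n, 2))  # a pending arc links the two endpoints
    complete_bp = -np.ones((n, n, 2), dtype=int)
    incomplete_bp = -np.ones((n, n, 2), dtype=int)
    incomplete[0, :, 0] = -np.inf     # ROOT can never be a dependent

    for k in range(1, n):             # span length
        for s in range(n - k):
            t = s + k
            # Build incomplete spans by adding the arc t -> s or s -> t.
            vals = complete[s, s:t, 1] + complete[s + 1:t + 1, t, 0]
            r = s + int(np.argmax(vals))
            incomplete[s, t, 0] = vals.max() + scores[t, s]
            incomplete_bp[s, t, 0] = r
            incomplete[s, t, 1] = vals.max() + scores[s, t]
            incomplete_bp[s, t, 1] = r
            # Build complete spans from a complete part and an incomplete part.
            vals0 = complete[s, s:t, 0] + incomplete[s:t, t, 0]
            complete[s, t, 0] = vals0.max()
            complete_bp[s, t, 0] = s + int(np.argmax(vals0))
            vals1 = incomplete[s, s + 1:t + 1, 1] + complete[s + 1:t + 1, t, 1]
            complete[s, t, 1] = vals1.max()
            complete_bp[s, t, 1] = s + 1 + int(np.argmax(vals1))

    heads = -np.ones(n, dtype=int)
    _backtrack(incomplete_bp, complete_bp, 0, n - 1, 1, True, heads)
    return heads

def _backtrack(ibp, cbp, s, t, d, complete, heads):
    """Recover the arcs chosen by the dynamic program."""
    if s == t:
        return
    if complete:
        r = cbp[s, t, d]
        if d == 0:
            _backtrack(ibp, cbp, s, r, 0, True, heads)
            _backtrack(ibp, cbp, r, t, 0, False, heads)
        else:
            _backtrack(ibp, cbp, s, r, 1, False, heads)
            _backtrack(ibp, cbp, r, t, 1, True, heads)
    else:
        r = ibp[s, t, d]
        heads[s if d == 0 else t] = t if d == 0 else s
        _backtrack(ibp, cbp, s, r, 1, True, heads)
        _backtrack(ibp, cbp, r + 1, t, 0, True, heads)

# Toy usage: random arc scores for a 3-word sentence plus ROOT.
scores = np.random.default_rng(0).normal(size=(4, 4))
print(parse_projective(scores))  # heads[1..3] of the best projective tree
```

For the non-projective parsing of Part E, the arc-factored search becomes a maximum spanning tree problem instead of this dynamic program, commonly solved with the Chu-Liu-Edmonds algorithm.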
Instructor Bios
Qin Iris Wang
AT&T Interactive
qiniriswang@gmail.com
Qin Iris Wang is currently a Research Scientist at AT&T Interactive (San Francisco). She obtained her PhD in 2008 from the University of Alberta under Dekang Lin and Dale Schuurmans. Her research interests include NLP (in particular dependency parsing), machine learning, information retrieval, text mining, and large-scale data processing. Her PhD research focused on learning structured classifiers for statistical dependency parsing. Before joining AT&T, she was a research scientist at Yahoo Labs. Qin was a teaching assistant for two years during her PhD studies, and in 2009 she organized a workshop on "Semi-supervised Learning for Natural Language Processing" at NAACL-HLT.
Yue Zhang
University of Oxford
yue.zhang@comlab.ox.ac.uk
Yue Zhang recently defended his PhD thesis at the University of Oxford. His research interests include natural language processing (word segmentation, parsing, machine translation) and machine learning. More specifically, his research area is the syntactic analysis of Chinese using discriminative machine-learning approaches. He has worked on word segmentation, joint word segmentation and POS tagging, phrase-structure parsing, and dependency parsing. Yue worked on Chinese-English machine translation during his MSc studies at Oxford, and on parallel computing during his undergraduate studies at Tsinghua University.