Modern statistical dependency parsers assign lexical heads to punctuations as well as words, and errors in parsing punctuations degrade parsing accuracy on words. In this work, we propose an alternative approach to handling punctuation in dependency parsing: rather than assigning lexical heads to punctuations, we treat punctuations as properties of their neighbouring words, which are used as features to guide the parser in building the dependency graph. Integrating our method with an arc-standard parser yields 93.06% unlabelled attachment score, the best accuracy reported so far for a single-model transition-based parser.
The task of dependency parsing is to identify the lexical head of each token in a string. Modern statistical parsers [8, 10, 5, 14] treat all tokens equally, assigning lexical heads to punctuations as well as words. Punctuations arguably play an important role in syntactic analysis. However, there are a number of reasons why it is not necessary to parse punctuations:
First, the lexical heads of punctuations are not as well defined as those of words. Consequently, punctuations are not annotated as consistently in treebanks as words, which makes them harder to parse. For example, modern statistical parsers achieve above 90% unlabelled attachment score (UAS) on words, whereas the UAS on punctuations is generally below 88% (see Table 1).
Moreover, our experimental results show that the parsing accuracy of content words drops on sentences that contain higher ratios of punctuations. One reason for this is that projective dependency parsers satisfy the “no crossing links” constraint, so errors on punctuations may prevent correct word-word dependencies from being created (see Section 2). In addition, punctuations can make certain types of features inaccurate. Take valency features as an example: previous work [14] has shown that such features are important to parsing accuracy, e.g., they may inform the parser that a verb already has two objects attached to it. However, this information may be inaccurate when the verb’s modifiers include punctuations.
Ultimately, it is the dependencies between words that provide useful information for real-world applications. In machine translation or information extraction, for example, most systems exploit the head-modifier relationships between word pairs rather than word-punctuation pairs to make better predictions. The fact that most previous work evaluates parsing accuracy without taking punctuations into account is largely due to this reason as well.
Given the above reasons, we propose an alternative approach to punctuation processing for dependency parsing. In this method, punctuations are not associated with lexical heads, but are treated as properties of their neighbouring words.
Table 1: UAS of the parsers on words, on punctuations (UAS-p), and on the data with punctuations removed. E-F: easy-first parser; A-S: arc-standard parser; A-S-64: arc-standard parser with beam width 64; MST: MSTParser.

System | E-F | A-S | A-S-64 | MST
---|---|---|---|---
Dev UAS (words) | 91.83 | 90.71 | 93.02 | 92.56
Test UAS (words) | 91.75 | 90.34 | 92.84 | 92.10
Dev UAS-p (punctuations) | 83.20 | 79.69 | 84.80 | 84.42
Test UAS-p (punctuations) | 84.67 | 79.64 | 87.80 | 85.67
Dev UAS (punctuations removed) | 90.64 | 89.55 | 91.87 | 90.11
Test UAS (punctuations removed) | 90.40 | 89.33 | 91.75 | 89.82
Table 2: Word UAS on the development set, with sentences grouped by sentence length and by punctuation ratio, for the E-F, A-S, A-S-64, and MST parsers.
Our method is simple and can be easily incorporated into state-of-the-art parsers. In this work, we report results on an arc-standard transition-based parser. Experiments show that our method achieves close to a 1% UAS improvement over the greedy baseline parser on the standard Penn Treebank test set. Although the improvement becomes smaller as the beam width grows, we still achieve 93.06% UAS with a beam of width 64, the best result for transition-based parsers reported so far. Our code will be available at https://github.com/majineu/Parser/Punc/A-STD.
In this section, we conduct a set of experiments to show the influence of punctuations on dependency parsing accuracies.
We use the Wall Street Journal portion of the Penn Treebank with the standard splits: sections 02-21 are used as the training set; section 22 and section 23 are used as the development and test set, respectively. Penn2Malt is used to convert bracketed structures into dependencies. We use our own implementation of the Part-Of-Speech (POS) tagger proposed by Collins (2002) to tag the development and test sets. Training set POS tags are generated using 10-fold jack-knifing. Parsing accuracy is evaluated using unlabelled attachment score (UAS), which is the percentage of words that are assigned the correct lexical heads.
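For concreteness, the following is a minimal sketch of UAS evaluation with punctuation tokens excluded; the data layout (parallel lists of head indices and POS tags) and the set of punctuation tags are our own illustrative assumptions rather than the paper's actual evaluation script.

```python
PUNCT_TAGS = {",", ".", ":", "``", "''", "-LRB-", "-RRB-"}   # assumed PTB punctuation tags


def uas(gold_heads, pred_heads, pos_tags, skip_punct=True):
    """Percentage of (non-punctuation) tokens whose predicted head index is correct."""
    correct = total = 0
    for gold, pred, tag in zip(gold_heads, pred_heads, pos_tags):
        if skip_punct and tag in PUNCT_TAGS:
            continue
        total += 1
        if gold == pred:
            correct += 1
    return 100.0 * correct / total if total else 0.0


# A toy 4-token sentence whose third token is a comma (heads are 0-based, -1 = root).
print(uas([1, -1, 1, 1], [3, -1, 0, 1], ["DT", "VBD", ",", "NN"]))   # ~66.7: two of three words correct
```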
To show that the influence of punctuations on parsing is independent of specific parsing algorithms, we conduct experiments with three parsers, each representing a different parsing methodology: (1) the open-source MSTParser [9] (we trained a second-order labelled parser with all configurations set to their default values; the code is publicly available at http://sourceforge.net/projects/mstparser/); (2) our own re-implementation of an arc-standard transition-based parser [11], trained using global learning and beam search [13] with a rich feature set [14] (some feature templates of Zhang and Nivre (2011) involve the head word and head POS tag, which are not available to an arc-standard parser; interestingly, even without those features our arc-standard parser achieves UAS comparable to that of the arc-eager parser of Zhang and Nivre (2011)); and (3) our own re-implementation of the easy-first parser [4] with an extended feature set [7].
Our first experiment shows that, compared with words, punctuations are more difficult to parse and to learn. To see this, we evaluate the parsing accuracies of the selected parsers on words and on punctuations separately. Results are listed in Table 1, where rows 2 and 3 list the UAS of words (punctuations excluded) on the development and test set, respectively, and rows 4 and 5 list the accuracies of punctuations (words excluded) on the development and test set, respectively. We can see that although all the parsers achieve above 90% UAS on words, the UAS on punctuations is mostly below 88%.
As for learning, we calculate the percentage of parameter updates that are caused by associating punctuations with incorrect heads during training of the easy-first parser (for the greedy easy-first parser, whether a parameter update is caused by a punctuation error can be determined without ambiguity). We find that a disproportionately large share of the parameter updates is caused by punctuations, even though punctuations account for only a small fraction of the tokens in the training set.
The fact that parsers achieve low accuracies on punctuations is to some degree expected, because the head of a punctuation mark is linguistically less well defined. A related problem, however, is that parsing accuracy on words tends to drop on sentences that contain a high ratio of punctuations. To see this, we divide the sentences in the development set into sub-sets according to their punctuation ratio (the percentage of punctuations a sentence contains), and then evaluate parsing accuracies on the sub-sets separately.
The results are listed in Table 2. Since long sentences are inherently more difficult to parse, to make a fair comparison we further divide the development set according to sentence length, as shown in the first row (1,694 of the 1,700 sentences in the development set are no longer than 60 tokens). We can see that in most cases, parsing accuracy drops on sentences with higher punctuation ratios. Note that this negative effect on parsing accuracy might be overlooked, since most previous work evaluates parsing accuracy without taking punctuations into account.
By inspecting the parser outputs, we found that error propagation caused by assigning incorrect heads to punctuations is one of the main reasons for this result. Take the sentence shown in Figure 1(a) as an example: the word Mechanisms is a modifier of entitled according to the gold reference. However, if the quotation mark “ is incorrectly recognized as a modifier of was, then, due to the “no crossing links” constraint, the arc between Mechanisms and entitled can never be created.
A natural question is whether it is possible to reduce such error propagation by simply removing all punctuations from parsing. Our next experiment aims at answering this question. In this experiment, we first remove all punctuations from the original data and then modify the dependency arcs accordingly in order to maintain word-word dependencies in the original data. We re-train the parsers on the modified training set and evaluate parsing accuracies on the modified data.
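The sketch below illustrates the kind of preprocessing this experiment requires: punctuation tokens are deleted and any token headed by a punctuation is re-attached to the nearest non-punctuation ancestor, so that word-word dependencies are preserved. The list-based sentence representation and the helper names are our own illustrative assumptions, not the paper's conversion code.

```python
# Sketch: remove punctuation tokens and re-attach dependents to the nearest
# non-punctuation ancestor, keeping all word-word dependencies intact.
# Tokens are 0-based; head == -1 denotes the root.

def strip_punct(tokens, pos_tags, heads, punct_tags):
    is_punct = [t in punct_tags for t in pos_tags]

    def resolve(h):
        # Climb from head h until a non-punctuation token (or the root) is found.
        while h != -1 and is_punct[h]:
            h = heads[h]
        return h

    # Map old indices to new ones after deletion.
    new_index, kept = {}, []
    for i, punct in enumerate(is_punct):
        if not punct:
            new_index[i] = len(kept)
            kept.append(i)

    new_tokens = [tokens[i] for i in kept]
    new_tags = [pos_tags[i] for i in kept]
    new_heads = []
    for i in kept:
        h = resolve(heads[i])
        new_heads.append(-1 if h == -1 else new_index[h])
    return new_tokens, new_tags, new_heads


# "It was entitled `` Mechanisms ''" with the quotes attached to the verb phrase.
toks = ["It", "was", "entitled", "``", "Mechanisms", "''"]
tags = ["PRP", "VBD", "VBN", "``", "NNS", "''"]
heads = [1, -1, 1, 1, 2, 2]
print(strip_punct(toks, tags, heads, {"``", "''"}))
# (['It', 'was', 'entitled', 'Mechanisms'], ['PRP', 'VBD', 'VBN', 'NNS'], [1, -1, 1, 2])
```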
Results are listed in rows 6 and 7 of Table 1. We can see that parsing accuracies on the modified data drop significantly compared with those on the original data. This result indicates that by removing punctuations we lose information that is important for dependency parsing.
In our method, punctuations are treated as properties of their neighbouring words. These properties are used as additional features to guide the parser in constructing the dependency graph.
Table 3: Feature templates based on punctuation properties (unigram templates in lines 1-2 and bigram templates, including a distance template, in lines 3-5).
Our method distinguishes paired punctuations from other punctuations. Here, paired punctuations include brackets and quotation marks, whose Penn Treebank POS tags are the following four: -LRB-, -RRB-, “ and ”.
The characteristics of paired punctuations are: (1) they typically occur in pairs; (2) they serve as boundaries, in the sense that there is only one dependency arc between the words inside the boundary and the words outside. Take the sentence in Figure 1(a) as an example: the only arc crossing the boundary is (Mechanisms, entitled), where entitled is the head.
To utilize such boundary information, we further classify paired punctuations into two categories: those that serve as the beginning of the boundary, whose POS tags are either -LRB- or “, denoted by Bpunc; and those that serve as the end of the boundary, denoted by Epunc.
Before parsing starts, a preprocessing step first attaches the paired punctuations as properties of their neighbouring words and then removes them from the sentence. In particular, we attach Bpuncs to their right neighbours and Epuncs to their left neighbours, as shown in Figure 1(b). Note that in Figure 1(a), the left neighbour of ” is itself a punctuation. In such cases, we simply remove the adjacent punctuation, since the presence of the paired punctuation already indicates that there is a boundary.
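The following is a minimal sketch of this preprocessing step, assuming a simple token/property representation of our own; the tag sets and data structures are illustrative, not the paper's implementation.

```python
BPUNC_TAGS = {"-LRB-", "``"}   # opening bracket / opening quote
EPUNC_TAGS = {"-RRB-", "''"}   # closing bracket / closing quote


def attach_paired(tokens, pos_tags):
    """Fold paired punctuations into properties of their neighbouring words."""
    props = [[] for _ in tokens]
    keep = []
    for i, tag in enumerate(pos_tags):
        if tag in BPUNC_TAGS:
            if i + 1 < len(tokens):
                props[i + 1].append(("B", tokens[i]))   # Bpunc -> right neighbour
        elif tag in EPUNC_TAGS:
            if i - 1 >= 0:
                props[i - 1].append(("E", tokens[i]))   # Epunc -> left neighbour
        else:
            keep.append(i)
    # Non-paired punctuations adjacent to a paired one are removed in the paper;
    # that case is omitted here for brevity.
    return [(tokens[i], props[i]) for i in keep]


print(attach_paired(["entitled", "``", "Mechanisms", "''", "was"],
                    ["VBN", "``", "NNS", "''", "VBD"]))
# [('entitled', []), ('Mechanisms', [('B', '``'), ('E', "''")]), ('was', [])]
```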
During parsing, when a dependency arc is created, the property of the head word is updated with the property of its left-most (or right-most) child, so as to keep track of whether there is a Bpunc (or Epunc) on the left (or right) side of the sub-tree rooted at that word, as shown in Figure 1(c). When a Bpunc and an Epunc meet at the same head word, a Paired property is assigned to it, capturing the fact that the words within the paired punctuations form a sub-tree rooted at that word (see Figure 1(d)).
It is not uncommon that two Bpuncs appear adjacent to each other. For example,
(“Congress’s Environmental Buccaneers,” Sept. 18).
In our implementation, Bpunc and Epunc properties are implemented using flags. In the example, we set two flags, “ and (, on the word Congress’s. When a Bpunc and an Epunc meet, the corresponding flags are turned off. In the example, when Congress’s is identified as a modifier of Buccaneers, the ” flag of Buccaneers is turned off. However, we do not assign a Paired property to Buccaneers, since its ( flag is still on. The Paired property is assigned only when all the flags are turned off.
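A minimal sketch of this flag mechanism is given below; the Node class, the match table and the attach methods are our own illustrative assumptions, and the example follows the Buccaneers sentence above (with the Environmental modifier omitted).

```python
MATCH = {"``": "''", "-LRB-": "-RRB-"}   # opening mark -> matching closing mark


class Node:
    def __init__(self, word, bflags=(), eflags=()):
        self.word = word
        self.bflags = list(bflags)   # unmatched opening marks on the left edge of the sub-tree
        self.eflags = list(eflags)   # unmatched closing marks on the right edge of the sub-tree
        self.paired = False

    def attach_left(self, child):
        """child becomes the new left-most dependent of this node."""
        self.bflags = child.bflags + self.bflags
        self._cancel()

    def attach_right(self, child):
        """child becomes the new right-most dependent of this node."""
        self.eflags = self.eflags + child.eflags
        self._cancel()

    def _cancel(self):
        matched = False
        # innermost opening mark is bflags[-1]; innermost closing mark is eflags[0]
        while self.bflags and self.eflags and MATCH.get(self.bflags[-1]) == self.eflags[0]:
            self.bflags.pop()        # turn the matching pair of flags off
            self.eflags.pop(0)
            matched = True
        if matched and not self.bflags and not self.eflags:
            self.paired = True       # the sub-tree exactly spans a paired punctuation


# ( `` Congress's Environmental Buccaneers , '' Sept. 18 )
congress = Node("Congress's", bflags=["-LRB-", "``"])  # receives both opening marks in preprocessing
buccaneers = Node("Buccaneers", eflags=["''"])
eighteen = Node("18", eflags=["-RRB-"])

buccaneers.attach_left(congress)    # '' matches ``, but the ( flag is still on
print(buccaneers.paired)            # False
buccaneers.attach_right(eighteen)   # -RRB- matches -LRB-
print(buccaneers.paired)            # True
```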
Though some types of non-paired punctuations may capture certain syntactic patterns, we do not make further distinctions between them, and treat these punctuations uniformly for simplicity.
Before parsing starts, and after the preprocessing step for paired punctuations, our method employs a second preprocessing step that attaches non-paired punctuations to their left neighbouring words. It is guaranteed that the property of the left neighbouring word of a non-paired punctuation is empty; otherwise, the non-paired punctuation would be adjacent to a paired punctuation and would already have been removed in the first preprocessing step.
During parsing, non-paired punctuation properties are also passed bottom-up: the property of a head word is updated with that of its right-most dependent, so as to keep track of whether there is a punctuation on the right side of the tree rooted at that word. The only special case is that if the head word already carries a Bpunc property, our method simply ignores the non-paired property, since the boundary information is maintained with the highest priority.
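A sketch of the second preprocessing step is shown below, again using an illustrative token/property representation; the bottom-up propagation and the Bpunc-priority rule are left out for brevity.

```python
NONPAIRED_TAGS = {",", ".", ":"}   # a subset of PTB punctuation tags, for illustration


def attach_nonpaired(words, pos_tags, props):
    """Attach each non-paired punctuation to the closest surviving word on its left."""
    keep = []
    for i, tag in enumerate(pos_tags):
        if tag in NONPAIRED_TAGS:
            if keep:
                props[keep[-1]].append(tag)
        else:
            keep.append(i)
    return [(words[i], props[i]) for i in keep]


print(attach_nonpaired(["John", ",", "however", ",", "left"],
                       ["NNP", ",", "RB", ",", "VBD"],
                       [[] for _ in range(5)]))
# [('John', [',']), ('however', [',']), ('left', [])]
```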
We incorporate our method into the arc-standard transition-based parser, which uses a stack to maintain partially constructed trees and a buffer for the incoming words [11]. We design a set of features to exploit the potential of using punctuation properties for the arc-standard parser.
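For reference, a minimal sketch of the arc-standard transition system (shift, left-arc, right-arc) is given below; scoring, beam search and training are omitted, and the comments only indicate where the punctuation-property updates described above would be hooked in.

```python
def shift(stack, buffer, arcs):
    stack.append(buffer.pop(0))


def left_arc(stack, buffer, arcs):
    # the second-top stack item becomes a left dependent of the top item
    dep, head = stack[-2], stack[-1]
    arcs.append((head, dep))
    del stack[-2]
    # ... here the head would inherit the Bpunc flags of its new left-most child


def right_arc(stack, buffer, arcs):
    # the top stack item becomes a right dependent of the second-top item
    head, dep = stack[-2], stack[-1]
    arcs.append((head, dep))
    stack.pop()
    # ... here the head would inherit the Epunc / non-paired flags of its new right-most child


# Gold derivation for "entitled Mechanisms" (Mechanisms modifies entitled):
stack, buffer, arcs = [], ["entitled", "Mechanisms"], []
shift(stack, buffer, arcs)
shift(stack, buffer, arcs)
right_arc(stack, buffer, arcs)
print(arcs)   # [('entitled', 'Mechanisms')]
```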
The feature templates are listed in Table 3. In addition to the features designed for paired punctuations, such as the bigram punctuation features listed in line 3 of Table 3, we also design features for non-paired punctuations. For example, the distance features in line 5 of Table 3 are used to capture the pattern that if a word with a comma property is the left modifier of a noun or a verb, the distance between that word and its lexical head is often larger than 1; in other words, they are not adjacent.
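As an illustration, the sketch below shows how such punctuation-property features might be extracted from an arc-standard configuration (top two stack items and the first buffer word). The feature strings are our own and do not reproduce the exact templates of Table 3; distance templates would additionally use the positional distance between the two stack items.

```python
def punct_features(s0, s1, b0):
    """s0, s1: top two stack items; b0: first buffer word.
    Each is a dict with 'word', 'pos' and 'props' (punctuation properties), or None."""
    feats = []
    for name, item in (("s0", s0), ("s1", s1), ("b0", b0)):
        if item is None:
            continue
        for p in sorted(item["props"]):
            feats.append(f"uni:{name}.prop={p}|pos={item['pos']}")   # unigram-style templates
    if s0 is not None and s1 is not None:
        feats.append("bi:s1.props={}|s0.props={}".format(            # bigram-style template
            ",".join(sorted(s1["props"])) or "NONE",
            ",".join(sorted(s0["props"])) or "NONE"))
    return feats


s1 = {"word": "Mechanisms", "pos": "NNS", "props": {"Bpunc", "Epunc", "Paired"}}
s0 = {"word": "entitled", "pos": "VBN", "props": set()}
print(punct_features(s0, s1, None))
```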
Our first experiment investigates the effect of processing paired punctuations on parsing accuracy. In this experiment, the method introduced in Section 3.1 is used to process paired punctuations, while non-paired punctuations are left untouched. The feature templates used in this experiment are those listed in the top three rows of Table 3, together with those used by the baseline arc-standard parser.
Results on the development set are shown in the “Paired” column of Table 4. We can see that when the beam width is set to 1, our method achieves a 0.49% UAS improvement. By comparing the outputs of the two parsers, we find that two types of errors made by the baseline parser are effectively corrected.
Table 4: UAS on the development set at different beam widths.

Beam | Baseline | Paired | All
---|---|---|---
1 | 90.76 | 91.25 | 91.47
2 | 91.88 | 92.06 | 92.34
4 | 92.50 | 92.61 | 92.70
8 | 92.73 | 92.76 | 92.82
16 | 92.90 | 92.94 | 92.99
64 | 92.99 | 93.04 | 93.10
The first is that our method is able to capture the pattern that there is only one dependency arc between the words within the paired punctuations and the words outside, whereas the baseline parser sometimes creates more than one dependency arc crossing the boundary.
The second is more interesting: our method is able to capture the fact that the root of the sub-tree within the paired punctuations, such as Mechanisms in Figure 1, generally serves as a modifier of the words outside, whereas the baseline parser occasionally makes it the head of the whole sentence.
As we increase the beam width, the improvement of our method over the baseline becomes smaller. This is as expected, since beam search also has the effect of reducing error propagation [15], thereby alleviating the errors caused by punctuations.
In the last experiment, we examine the effect of incorporating all punctuations using the method introduced in Section 3. In this experiment, we use all the feature templates in Table 3 together with those of the baseline parser. Results are listed in the “All” column of Table 4, which shows that parsing accuracy can be further improved by also processing non-paired punctuations. The overall accuracy improvement at beam width 1 reaches 0.71%. The extra improvement mainly comes from better accuracy on sentences containing commas. However, the exact types of errors corrected by using non-paired punctuations are more difficult to summarize.
The final results on the test set are listed in Table 5 (the number of training iterations is determined using the development set). Table 5 also lists the accuracies of state-of-the-art transition-based parsers. In particular, “Huang 10” and “Zhang 11” denote Huang and Sagae (2010) and Zhang and Nivre (2011), respectively; “Bohnet 12” and “Choi 13” denote Bohnet and Nivre (2012) and Choi and McCallum (2013), respectively. We can see that our method achieves the best accuracy among single-model transition-based parsers.
Table 5: Final results on the test set, compared with state-of-the-art transition-based parsers (Comp: complete-match rate; Root: root-attachment accuracy).

System | UAS | Comp | Root
---|---|---|---
Baseline | 90.38 | 37.71 | 89.45
All-Punc | 91.32 | 41.35 | 92.43
Baseline-64 | 92.84 | 46.90 | 95.57
All-Punc-64 | 93.06 | 48.55 | 95.53
Huang 10 | 92.10 | |
Zhang 11 | 92.90 | 48.00 | 91.80
Choi 13 | 92.96 | |
Bohnet 12 | 93.03 | |
In this work, we proposed treating punctuations as properties of their context words for dependency parsing. Experiments with an arc-standard parser showed that our method effectively improves parsing performance, and we achieved the best reported accuracy for a single-model transition-based parser.
Regarding punctuation processing for dependency parsing, Li et al. (2010) proposed utilizing punctuations to segment sentences into small fragments and then parsing the fragments separately. A similar approach was proposed by Spitkovsky et al. (2011), who also designed a set of constraints on the fragments to improve unsupervised dependency parsing.
We thank the anonymous reviewers for their insightful suggestions. This research was supported by the National Science Foundation of China (61272376; 61300097; 61100089), the Fundamental Research Funds for the Central Universities (N110404012), the research grant T2MOE1301 from the Singapore Ministry of Education (MOE) and the start-up grant SRG ISTD2012038 from SUTD.