We propose two polynomial time inference algorithms to compress sentences under bigram and dependency-factored objectives. The first algorithm is exact and requires $O(n^6)$ running time. It extends Eisner's cubic time parsing algorithm by using virtual dependency arcs to link deleted words. Two signatures are added to each span, indicating the number of deleted words and the rightmost kept word within the span. The second algorithm is a fast approximation of the first one. It relaxes the compression ratio constraint using Lagrangian relaxation, and thereby requires $O(n^4)$ running time. Experimental results on a popular sentence compression corpus demonstrate the effectiveness and efficiency of our proposed approach.
Sentence compression aims to shorten a sentence by removing uninformative words to reduce reading time. It has been widely used in compressive summarization [9, 8, 10, 3, 15]. To make the compressed sentence readable, some techniques consider the n-gram language model scores of the compressed sentence [4, 12]. Recent studies used a subtree deletion model for compression [1, 13, 15], which deletes a word only if its modifiers in the parse tree are also deleted. Despite its empirical success, such a model cannot generate compressions that violate the subtree constraint (see Figure 1). In fact, we parsed the Edinburgh sentence compression corpus using the MSTParser (http://sourceforge.net/projects/mstparser/) and found that a substantial portion of the gold compressions do not satisfy the subtree deletion model.
Methods beyond the subtree model have also been explored. A synchronous tree substitution grammar was proposed in [5], which allows local distortion of the tree topology and can thus naturally capture structural mismatches. The joint compression model [7, 17] simultaneously considers the n-gram model and the dependency parse tree of the compressed sentence. However, its time complexity increases greatly, since the parse tree dynamically depends on the compression. These systems use Integer Linear Programming (ILP) for inference, which requires exponential running time in the worst case.
In this paper, we propose a new exact decoding algorithm for the joint model using dynamic programming. Our method extends Eisner's cubic time parsing algorithm by adding signatures to each span, which indicate the number of deleted words and the rightmost kept word within the span, resulting in $O(n^6)$ time complexity and $O(n^4)$ space complexity. We further propose a faster approximate algorithm based on Lagrangian relaxation, which has $O(Tn^4)$ running time and $O(n^3)$ space complexity ($T$ is the number of iterations of the subgradient descent algorithm). Experiments on the popular Edinburgh dataset show that the proposed approach is about 10 times faster than a high-performance commercial ILP solver.
We define the sentence compression task as follows: given a sentence composed of $n$ words, $x = x_1 x_2 \dots x_n$, and a length $L \le n$, we need to remove $(n-L)$ words from $x$, so that the sum of the weights of the dependency tree and the word bigrams of the remaining part is maximized. Formally, we solve the following optimization problem:
$$\max_{z,\,y}\quad \sum_{i=1}^{n} z_i\, w_i \;+\; \sum_{i,j} y_{ij}\, d_{ij} \;+\; \sum_{i<j} z_i\, z_j \prod_{i<k<j} (1 - z_k)\; b_{ij} \qquad (1)$$

$$\text{s.t.}\quad \sum_{i=1}^{n} z_i = L, \qquad z_i \in \{0,1\}, \qquad y \in \mathcal{T}(z)$$
where $z$ is a binary vector and $z_i$ indicates whether $x_i$ is kept, $y$ is a square matrix denoting the projective dependency parse tree over the remaining words, and $y_{ij}$ indicates whether $x_i$ is the head of $x_j$ (note that each kept word has exactly one head); $\mathcal{T}(z)$ is the set of projective dependency trees over the kept words. $w_i$ is the informativeness of $x_i$, $b_{ij}$ is the score of bigram $x_i x_j$ in an n-gram model, and $d_{ij}$ is the score of dependency arc $x_i \to x_j$ in an arc-factored dependency parsing model. Hence, the first part of the objective function is the total score of the kept words, and the second and third parts are the scores of the parse tree and the bigrams of the compressed sentence; $z_i z_j \prod_{i<k<j}(1-z_k) = 1$ indicates that $x_i$ and $x_j$ are both kept and are adjacent after compression. A graphical illustration of the objective function is shown in Figure 2.
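To make the objective concrete, the following sketch (ours, not part of the paper) evaluates Problem (1) for a given keep-vector $z$ and head matrix $y$; the score arrays $w$, $b$, $d$ are assumed to be given.

```python
import numpy as np

def objective(z, y, w, b, d):
    """Evaluate the objective of Problem (1).

    z : (n+1,) 0/1 array, z[i] = 1 iff word x_i is kept (index 0 is the root, z[0] = 0)
    y : (n+1, n+1) 0/1 array, y[h, m] = 1 iff x_h is the head of x_m
    w : (n+1,) unigram informativeness scores
    b : (n+1, n+1) bigram scores, b[i, j] for the bigram "x_i x_j"
    d : (n+1, n+1) dependency arc scores, d[h, m] for the arc x_h -> x_m
    """
    total = float(np.dot(z, w))            # first part: scores of the kept words
    total += float((y * d).sum())          # second part: arc-factored tree score
    kept = np.flatnonzero(z)               # positions of kept words, left to right
    for i, j in zip(kept, kept[1:]):       # pairs that are adjacent after compression
        total += b[i, j]                   # third part: bigram scores
    return total
```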
Throughout the paper, we assume that all parse trees are projective. Our method is a generalization of Eisner's dynamic programming algorithm [6], in which two types of structures are used in each iteration: incomplete spans and complete spans. A span is a subtree over a number of consecutive words, with the leftmost or the rightmost word as its root. An incomplete span, denoted $I_{i,j}$, is a subtree inside a single arc $x_i \to x_j$, with root $x_i$. A complete span is denoted $C_{i,j}$, where $x_i$ is the root of the subtree and $x_j$ is the furthest descendant of $x_i$.
Eisner's algorithm searches for the optimal tree in a bottom-up order. In each step, it merges two adjacent spans into a larger one. There are two rules for merging spans: one merges two complete spans into an incomplete span; the other merges an incomplete span and a complete span into a larger complete span.
First we consider an easy case, where the bigram scores in the objective function are ignored.
The scores of the unigrams can be transferred to the dependency arcs, so that we can remove all linear terms from the objective function. That is, letting $\tilde{d}_{ij} = d_{ij} + w_j$,

$$\sum_{i} z_i\, w_i + \sum_{i,j} y_{ij}\, d_{ij} \;=\; \sum_{i,j} y_{ij}\, \tilde{d}_{ij}.$$
This can be easily verified: if $z_j = 0$, all terms in which $x_j$ is the modifier are zero in both expressions; if $z_j = 1$, i.e., $x_j$ is kept, then, since $x_j$ has exactly one head word in the compressed sentence, the sum of the terms in which $x_j$ is the modifier is $w_j + \sum_i y_{ij} d_{ij}$ in both expressions.
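A quick numerical check of this identity (our illustration, not from the paper): fold each unigram score into the arcs pointing at that word and compare the two sides for an arbitrary compression and head assignment.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
w = rng.normal(size=n + 1)             # w[m]: unigram score of x_m (index 0 is the root)
d = rng.normal(size=(n + 1, n + 1))    # d[h, m]: score of the arc x_h -> x_m

z = [0, 1, 0, 1, 1, 0, 1]              # z[m] = 1 iff x_m is kept
head = {1: 0, 3: 1, 4: 3, 6: 4}        # head[m] = h: the arc x_h -> x_m over the kept words

kept = [m for m in range(1, n + 1) if z[m] == 1]
lhs = sum(w[m] for m in kept) + sum(d[h][m] for m, h in head.items())
rhs = sum(d[h][m] + w[m] for m, h in head.items())   # arcs with the unigram scores folded in
assert abs(lhs - rhs) < 1e-9
```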
Therefore, we only need to consider the scores of the arcs. For any compressed sentence, we can augment its dependency tree by adding a virtual arc $x_{i-1} \to x_i$ for each deleted word $x_i$. If the first word $x_1$ is deleted, we connect it to the root of the parse tree, $x_0$, as shown in Figure 3. In this way, we derive a full parse tree of the original sentence. This is a one-to-one mapping: we can recover the compressed parse tree by removing all virtual arcs from the full parse tree. We restrict the score of every virtual arc to be zero, so that the scores of the two parse trees are equal.
Now the problem becomes searching for the optimal full parse tree with virtual arcs.
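The mapping can be made explicit with a small helper; this is our sketch of the construction as we read it (each deleted word $x_i$ is attached to its left neighbour $x_{i-1}$, and a deleted $x_1$ to the root $x_0$).

```python
def augment_with_virtual_arcs(n, keep, compressed_heads):
    """Map a compressed dependency tree to a full tree over all n words.

    keep             : set of kept word positions (1-based)
    compressed_heads : dict m -> h giving the head of each kept word
                       (h = 0 denotes the artificial root x_0)
    Returns heads for all positions 1..n; each deleted word receives a
    virtual arc from its left neighbour (or from x_0 for x_1).
    """
    full_heads = dict(compressed_heads)
    for m in range(1, n + 1):
        if m not in keep:
            full_heads[m] = m - 1      # virtual arc x_{m-1} -> x_m (x_0 -> x_1 if m = 1)
    return full_heads

def strip_virtual_arcs(keep, full_heads):
    """Inverse mapping: drop the virtual arcs attached to deleted words."""
    return {m: h for m, h in full_heads.items() if m in keep}

# Example: keep x_1, x_3, x_4 of a 5-word sentence.
keep = {1, 3, 4}
compressed = {1: 0, 3: 1, 4: 3}
full = augment_with_virtual_arcs(5, keep, compressed)
assert full == {1: 0, 3: 1, 4: 3, 2: 1, 5: 4}
assert strip_virtual_arcs(keep, full) == compressed
```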
We modify Eisner's algorithm by adding a signature to each span indicating the number of virtual arcs within the span. Let $I_{i,j}^{k}$ and $C_{i,j}^{k}$ denote the incomplete and complete spans with $k$ virtual arcs, respectively. When merging two spans, there are four cases, as shown in Figure 4.
Case 1 Link two complete spans by a virtual arc: $C_{i,i}^{0} + C_{i+1,i+1}^{0} \Rightarrow I_{i,i+1}^{1}$.
The two complete spans must be single words, as the length of the virtual arc is 1.
Case 2 Link two complete spans by a non-virtual arc: $C_{i,m}^{k_1} + C_{j,m+1}^{k_2} \Rightarrow I_{i,j}^{k_1+k_2}$.
Case 3 Merge an incomplete span and a complete span, where the incomplete span is covered by a virtual arc: $I_{i,i+1}^{1} + C_{i+1,j}^{j-i-1} \Rightarrow C_{i,j}^{j-i}$. The number of virtual arcs within $C_{i+1,j}$ must be $j-i-1$, since the descendants of the modifier of a virtual arc must all be removed.
Case 4 Merge an incomplete span and a complete span, where the incomplete span is covered by a non-virtual arc: $I_{i,j}^{k_1} + C_{j,m}^{k_2} \Rightarrow C_{i,m}^{k_1+k_2}$.
The score of the new span is the sum of the scores of the two spans. For case 2, the weight of the dependency arc, $\tilde{d}_{ij}$, is also added to the final score. The root node $x_0$ is allowed to have two modifiers: one is its modifier in the compressed sentence, and the other is the first word $x_1$ if it is removed.
For each pair of span endpoints, the algorithm enumerates the numbers of virtual arcs in the left and right spans and the split position (e.g., $m$ in case 2), which takes $O(n^3)$ time. The overall time complexity is therefore $O(n^5)$, and the space complexity is $O(n^3)$.
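The source of these bounds can be read off the loop structure alone. The sketch below (ours, not from the paper) simply counts the tuples enumerated for case 2; the total grows on the order of $n^5$, while each span table has $O(n^3)$ entries.

```python
def count_case2_combinations(n):
    """Count the (i, j, m, k1, k2) tuples enumerated when building incomplete
    spans by case 2: endpoints i < j, split position i <= m < j, and the
    numbers of virtual arcs k1, k2 inside the two complete sub-spans."""
    count = 0
    for i in range(0, n):                  # left endpoint
        for j in range(i + 1, n + 1):      # right endpoint
            for m in range(i, j):          # split position
                left_arcs = m - i          # arcs inside the left span C_{i,m}
                right_arcs = j - m - 1     # arcs inside the right span C_{j,m+1}
                count += (left_arcs + 1) * (right_arcs + 1)   # choices of k1 and k2
    return count

print([count_case2_combinations(n) for n in (5, 10, 20)])     # grows roughly like n**5 / 120
```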
Next, we consider the bigram scores. The following proposition is obvious.
For any right-headed span $C_{i,j}^{k}$ or $I_{i,j}^{k}$ ($j < i$), the words $x_i$ and $x_j$ must be kept.
Suppose $x_j$ is removed; then there must be a virtual arc $x_{j-1} \to x_j$, whose head lies outside the span, which conflicts with the fact that $x_j$ is the leftmost word of the span. As $x_j$ is a descendant of the root $x_i$, $x_i$ must also be kept, since otherwise all descendants of $x_i$, including $x_j$, would have to be removed. ∎
When merging two spans, a new bigram is created, which connects the rightmost kept word in the left span and the leftmost kept word in the right span. According to the proposition above, if the right span is right-headed, its leftmost word is kept. If the right span is left-headed, there are two cases: either its leftmost word (its root) is kept, or no word in the span is kept. In either case, we only need to consider the leftmost word of the right span.
Let $I_{i,j}^{k,p}$ and $C_{i,j}^{k,p}$ denote the incomplete and complete spans with $k$ virtual arcs whose rightmost kept word is $x_p$. According to the proposition above, we have $p = i$ for any right-headed span.
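In an implementation, each chart item can be keyed by its endpoints, direction, completeness, and the two signatures; below is our sketch of the bookkeeping (the handling of spans that contain no kept word is our own assumption, not spelled out here).

```python
from collections import namedtuple

# A chart item: span over words s..t, side of its root ('L' or 'R'),
# complete or incomplete, plus the two signatures k (number of virtual arcs)
# and p (rightmost kept word; None if the span contains no kept word).
Item = namedtuple("Item", "s t head_side complete k p")

chart = {}   # Item -> best score found so far

def relax(item, score):
    """Viterbi-style update: keep the best score per item."""
    if score > chart.get(item, float("-inf")):
        chart[item] = score

# Example: the single-word complete span over a kept word x_3.
relax(Item(s=3, t=3, head_side="L", complete=True, k=0, p=3), 0.0)
```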
We slightly modify the two merging rules above, and obtain:
Case 2' Link two complete spans by a non-virtual arc: $C_{i,m}^{k_1,p} + C_{j,m+1}^{k_2,j} \Rightarrow I_{i,j}^{k_1+k_2,j}$. The score of the new span is the sum of the scores of the two spans plus $\tilde{d}_{ij} + b_{p,m+1}$, where $b_{p,m+1}$ is the score of the new bigram formed by the rightmost kept word of the left span and the leftmost word of the right span.
Case 4' Merge an incomplete span and a complete span, where the incomplete span is covered by a non-virtual arc. For left-headed spans, the rule is $I_{i,j}^{k_1,j} + C_{j,m}^{k_2,p} \Rightarrow C_{i,m}^{k_1+k_2,p}$, i.e., the new span inherits the rightmost kept word $x_p$ of the complete span; for right-headed spans, the rule is $C_{j,m}^{k_1,j} + I_{i,j}^{k_2,i} \Rightarrow C_{i,m}^{k_1+k_2,i}$. In both cases the score of the new span is the sum of the scores of the two spans, since the shared word creates no new bigram.
The modified algorithm requires $O(n^6)$ running time and $O(n^4)$ space.
In this section, we propose an approximate algorithm in which the length constraint is relaxed by Lagrangian relaxation. The relaxed version of Problem (1) is
$$\max_{z,\,y}\quad \sum_{i=1}^{n} z_i\, w_i + \sum_{i,j} y_{ij}\, d_{ij} + \sum_{i<j} z_i z_j \prod_{i<k<j}(1-z_k)\, b_{ij} + \lambda \Big( L - \sum_{i=1}^{n} z_i \Big) \qquad (2)$$

$$\text{s.t.}\quad z_i \in \{0,1\}, \qquad y \in \mathcal{T}(z)$$
For a fixed $\lambda$, the optimal $z, y$ can be found using a simpler version of the algorithm above: we drop the virtual-arc-count signature from each span, and thus obtain an $O(n^4)$ time algorithm with $O(n^3)$ space complexity. For fixed $z, y$, the dual variable $\lambda$ is updated by

$$\lambda \leftarrow \lambda - \alpha \Big( L - \sum_{i=1}^{n} z_i \Big)$$
where $\alpha > 0$ is the learning rate. In this paper, our choice of $\alpha$ is the same as in [16].
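The resulting optimization is a standard subgradient method on the dual. The sketch below is ours, with `decode_relaxed` standing in for the simplified decoder described above, `alpha0 / t` as a placeholder step-size schedule, and the fallback to the closest-length solution as our own choice.

```python
def lagrangian_compress(decode_relaxed, L, alpha0=1.0, max_iter=50):
    """Subgradient loop for the relaxed problem (2).

    decode_relaxed(lam): returns the 0/1 keep-vector z maximizing
    f(z, y) + lam * (L - sum(z)) without the length constraint
    (hypothetical decoder interface; not part of the paper).
    """
    lam = 0.0
    best_z, best_gap = None, None
    for t in range(1, max_iter + 1):
        z = decode_relaxed(lam)
        kept = sum(z)
        if kept == L:                        # constraint met: z is optimal for Problem (1)
            return z
        gap = abs(kept - L)
        if best_gap is None or gap < best_gap:
            best_z, best_gap = z, gap        # remember the closest-length solution seen so far
        lam -= (alpha0 / t) * (L - kept)     # subgradient step on the dual variable
    return best_z
```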
We evaluate our method on the dataset from [4]. It includes 82 newswire articles with a manually produced compression for each sentence. We use the same partition as [10], i.e., 1,188 sentences for training and 441 for testing.
Our model is discriminative: the scores of the unigrams, bigrams, and dependency arcs are linear functions of features, e.g., $w_i = \theta^{\top} \mathbf{f}(x_i)$, where $\mathbf{f}(x_i)$ is the feature vector of $x_i$ and $\theta$ is the weight vector of the features. The learning task is to estimate $\theta$ from the manually compressed sentences.
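Concretely, each score is a dot product between a sparse feature vector and a shared weight vector; a minimal sketch (ours, with purely hypothetical feature templates) is shown below.

```python
from collections import defaultdict

theta = defaultdict(float)          # shared feature weight vector, estimated during training

def score(features):
    """Linear score of a unigram, bigram, or dependency arc."""
    return sum(theta[f] for f in features)

def unigram_features(words, pos, i):
    """Hypothetical unigram feature templates for word x_i (1-based positions)."""
    return [f"w={words[i]}", f"p={pos[i]}", f"w,p={words[i]},{pos[i]}"]

words = ["<root>", "He", "quickly", "finished", "the", "report"]
pos   = ["<root>", "PRP", "RB", "VBD", "DT", "NN"]
print(score(unigram_features(words, pos, 2)))   # 0.0 before any training
```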
We run a second-order dependency parser trained on the English Penn Treebank corpus to generate the parse trees of the compressed sentences. We then augment these parse trees by adding virtual arcs to obtain the full parse trees of the corresponding original sentences. In this way, the annotation is transformed into a set of sentences with their augmented parse trees, and the learning task is similar to training a parser. We run a CRF-based POS tagger to generate POS-related features.
We adopt the compression evaluation metric used in [10], which measures the macro F-measure of the retained unigrams, and the one used in [4], which calculates the F1 score of the grammatical relations labeled by RASP [2].
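For reference, the per-sentence unigram score can be computed from the sets of retained tokens as below (our sketch; we index retained tokens by position, and the published metric macro-averages this value over sentences).

```python
def unigram_f1(gold_kept, system_kept):
    """F1 over the sets of retained word positions of one sentence."""
    gold, system = set(gold_kept), set(system_kept)
    if not gold or not system:
        return 0.0
    tp = len(gold & system)
    if tp == 0:
        return 0.0
    precision = tp / len(system)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

assert abs(unigram_f1({1, 3, 4, 6}, {1, 3, 5, 6}) - 0.75) < 1e-12
```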
We compare our method with four other state-of-the-art systems. The first is a linear-chain CRF, in which the compression task is cast as a binary sequence labeling problem; it usually achieves a high unigram F1 score but a low grammatical relation F1 score, since it only considers the local interdependence between adjacent words. The second is the subtree deletion model [1], which is solved by integer linear programming (ILP; we use Gurobi, http://www.gurobi.com/, as the ILP solver in this paper). The third is the bigram model proposed by McDonald [12], which adopts dynamic programming for efficient inference. The last jointly infers tree structures alongside bigrams using ILP [17]. For a fair comparison, all systems were restricted to produce compressions that matched their average gold compression rate where possible.
| Features for unigram $x_i$ |
| --- |
| word and POS templates of $x_i$ and its surrounding words |
| whether $x_i$ is a stopword |
| Features for selected bigram $x_i x_j$ |
| distance between the two words: $j - i$ |
| word and POS templates of $x_i$, $x_j$, and their surrounding words |
| concatenation of the templates above |
| Dependency features for arc $x_i \to x_j$ |
| distance between the head and the modifier |
| dependency type |
| direction of the dependency arc (left/right) |
| word and POS templates of the head, the modifier, and their surrounding words |
| concatenation of the templates above |
| whether a given word/POS lies between $x_i$ and $x_j$ |
Three types of features are used to learn our model: unigram features, bigram features and dependency features, as shown in Table 1. We also use the in-between features proposed by [11], which were shown to be very effective for dependency parsing.
| System | C Rate | F1 | RASP F1 | Sec. |
|---|---|---|---|---|
| Ours (Approx) |  | 0.802 | 0.598 | 0.056 |
| Ours (Exact) |  |  |  |  |
| Subtree |  |  |  |  |
| TM13 |  |  |  |  |
| McDonald06 |  |  |  |  |
| CRFs |  |  |  |  |
We show the comparison results in Table 2. As expected, the joint models (ours and TM13) consistently outperform the subtree deletion model, since the joint models do not suffer from the subtree restriction. They also outperform McDonald's bigram model, demonstrating the effectiveness of considering the grammatical structure during compression. It is not surprising that CRFs achieve high unigram F1 scores but low syntactic F1 scores, as they do not consider the fluency of the compressed sentence.
Compared with TM13's system, our model with exact decoding is not significantly faster due to the high order of its time complexity. On the other hand, our approximate approach is much more efficient: it is about 10 times faster than TM13's system and achieves accuracy competitive with the exact approach. It is worth pointing out that the exact approach can output compressed sentences of all lengths, whereas the approximate method only outputs one sentence at a specific compression rate.
In this paper, we proposed two polynomial time decoding algorithms for joint-inference sentence compression. The first is an exact dynamic programming algorithm that requires $O(n^6)$ running time; it does not show a significant speed advantage over ILP. The second is an approximation of the first: it adopts Lagrangian relaxation to eliminate the compression ratio constraint, yielding a lower time complexity of $O(Tn^4)$. In practice it achieves nearly the same accuracy as the exact algorithm, but is much faster. Our code is available at http://code.google.com/p/sent-compress/.
The main assumption of our method is that the dependency parse tree is projective, which does not hold for some other languages. In that case, our method is not applicable, but [17] still works. In the future, we will study the non-projective case based on recent parsing techniques for 1-endpoint-crossing trees [14].
We thank three anonymous reviewers for their valuable comments. This work is partly supported by NSF award IIS-0845484 and DARPA under Contract No. FA8750-13-2-0041. Any opinions expressed in this material are those of the authors and do not necessarily reflect the views of the funding agencies.