Recent work on classifying citations in scientific literature has shown that classification results can be improved with extensive feature engineering. While this confirms that citation classification is feasible, the approach has two drawbacks: (i) it requires a large annotated corpus for supervised classification, which is quite expensive to produce for scientific literature; and (ii) feature engineering that is too specific to one scientific area may not carry over to other domains, even within scientific literature. In this paper we address both drawbacks. First, we frame citation classification as a domain adaptation task and leverage the abundant labeled data available in other domains. Then, to avoid over-engineering citation features for a particular scientific domain, we explore a deep learning neural network approach that has been shown to generalize well across domains using unigram and bigram features. We achieve better citation classification results with this cross-domain approach than with in-domain classification.
Citations have been categorized and studied for a half-century [15] to better understand when and how citations are used, and to record and measure how information is exchanged (e.g., networks of co-cited papers or authors [26]). Recently, the value of this information has been shown in practical applications such as information retrieval (IR) [25], summarization [24], and even identifying scientific breakthroughs [27]. We expect that by identifying and labeling the function of citations we can improve the effectiveness of these applications.
There has been no consensus on which aspects or functions of a citation should be annotated and how. Early citation classification focused more on citation motivation [16], while later work focused more on citation function [9]. Recent studies using automatic classification have continued this tradition of introducing a new classification scheme with each new investigation into the use of citations [22, 29, 13, 1]. One distinction that has been annotated more consistently across recent citation classification studies is that between positive and negative citations [3, 2, 1]. (Dong and Schäfer (2011) also annotate polarity, which can be found in their dataset, described later, but this is not discussed in their paper.) The popularity of this distinction likely owes to the prominence of sentiment analysis in NLP [20]. We follow much of the recent work on citation classification and concentrate on citation polarity.
By concentrating on citation polarity we are able to compare our classification to previous citation polarity work. This choice also allows us to access the wealth of existing data containing polarity annotation and then frame the task as a domain adaptation problem. Of course the risk in approaching the problem as domain adaptation is that the domains are so different that the representation of a positive instance of a movie or product review, for example, will not coincide with that of a positive scientific citation. On the other hand, because there is a limited amount of annotated citation data available, by leveraging large amounts of annotated polarity data we could potentially even improve citation classification.
We treat citation polarity classification as a sentiment analysis domain adaptation task and therefore must be careful not to define features that are too domain specific. Previous work in citation polarity classification focuses on finding new citation features to improve classification, borrowing a few from text classification in general (e.g., n-grams) and perhaps others from sentiment analysis (e.g., the polarity lexicon from Wilson et al. (2005)). We would like to do as little feature engineering as possible to ensure that the features we use are meaningful across domains. However, we do still want features that somehow capture the inherent positivity or negativity of our labeled instances, i.e., citations or Amazon product reviews. Currently a popular approach for accomplishing this is to use deep learning neural networks [4], which have been shown to perform well on a variety of NLP tasks using only bag-of-words features [10]. More specifically related to our work, deep learning neural networks have been successfully employed for sentiment analysis [28] and for sentiment domain adaptation [17]. In this paper we examine one of these approaches, marginalized stacked denoising autoencoders (mSDA) from Chen et al. (2012), which has been successful in classifying the polarity of Amazon product reviews across product domains. Since mSDA achieved state-of-the-art performance in Amazon product domain adaptation, we are hopeful it will also be effective when switching to a more distant domain like scientific citations.
We are interested in domain adaptation for citation classification and therefore need a target dataset of citations and a non-citation source dataset. There are two corpora available that contain citation function annotation, the DFKI Citation Corpus [13] and the IMS Citation Corpus [19]. Both corpora have only about 2000 instances; unfortunately, there are no larger corpora available with citation annotation and this task would benefit from more annotated data. Due to the infrequent use of negative citations, a substantial annotation effort (annotating over 5 times more data) would be necessary to reach 1000 negative citation instances, which is the number of negative instances in a single domain in the multi-domain corpus described below.
The DFKI Citation Corpus (https://aclbib.opendfki.de/repos/trunk/citation_classification_dataset/) has been used for classifying citation function [13], but the dataset also includes polarity annotation. The dataset has 1768 citation sentences with polarity annotation: 190 are labeled as positive, 57 as negative, and the vast majority, 1521, are left neutral. The second citation corpus, the IMS Citation Corpus (http://www.ims.uni-stuttgart.de/~jochimcs/citation-classification/), contains 2008 annotated citations: 1836 are labeled positive and 172 are labeled negative. Jochim and Schütze (2012) use annotation labels from Moravcsik and Murugesan (1975) where positive instances are labeled confirmative, negative instances are labeled negational, and there is no neutral class. Because each of the citation corpora is of modest size we combine them to form one citation dataset, which we will refer to as CITD. The two citation corpora comprising CITD both come from the ACL Anthology [5]: the IMS corpus uses the ACL proceedings from 2004 and the DFKI corpus uses parts of the proceedings from 2007 and 2008. Since mSDA also makes use of large amounts of unlabeled data, we extend our CITD corpus with citations from the proceedings of the remaining years of the ACL, 1979–2003, 2005–2006, and 2009.
There are a number of non-citation corpora available that contain polarity annotation. For these experiments we use the Multi-Domain Sentiment Dataset (henceforth MDSD; http://www.cs.jhu.edu/~mdredze/datasets/sentiment/) introduced by Blitzer et al. (2007). We use the version of the MDSD that includes positive and negative labels for product reviews taken from Amazon.com in the following domains: books, dvd, electronics, and kitchen. For each domain there are 1000 positive and 1000 negative reviews that comprise the “labeled” data, and then roughly 4000 more reviews in the “unlabeled” data (this portion is usually treated as unlabeled even though it actually contains positive and negative labels, which have been used, e.g., in [8]). The reviews are preprocessed so that each review is represented as a list of unigrams and bigrams with their frequencies within the review. Unigrams from a stop list of 55 stop words are removed, but stop words in bigrams remain.
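For concreteness, the sketch below shows one way to read this preprocessed format. It assumes the distributed files list space-separated feature:count pairs per review (bigrams joined by underscores) with a final #label# entry; the file path in the usage comment is illustrative.

```python
from collections import Counter

def parse_mdsd_line(line):
    """Parse one preprocessed MDSD review: space-separated "feature:count"
    pairs ending with a "#label#:positive" / "#label#:negative" entry
    (format assumed from the dataset description above)."""
    counts, label = Counter(), None
    for token in line.split():
        feature, _, value = token.rpartition(":")
        if feature == "#label#":
            label = value
        else:
            counts[feature] += int(value)
    return counts, label

# Illustrative usage (the file layout is an assumption):
# reviews = [parse_mdsd_line(l) for l in open("books/positive.review")]
```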
Table 1 shows the distribution of polarity labels in the corpora we use for our experiments. We combine the DFKI and IMS corpora into the CITD corpus. We omit the citations labeled neutral from the DFKI corpus because neither the IMS corpus nor the MDSD contains neutral annotation. Many sentiment analysis corpora likewise include only positive and negative instances, e.g., [23].
| Corpus | Instances | Pos. | Neg. | Neut. |
|---|---|---|---|---|
| DFKI | 1768 | 190 | 57 | 1521 |
| IMS | 2008 | 1836 | 172 | – |
| MDSD | 27,677 | 13,882 | 13,795 | – |

Table 1: Distribution of polarity labels in the corpora used in our experiments.
Both citation corpora presented above have highly skewed label distributions. The MDSD, on the other hand, is evenly balanced, and an effort was even made to keep the data treated as “unlabeled” roughly balanced. For this reason, and in line with previous work using the MDSD, we balance the labeled portion of the CITD corpus. This is done by taking the 179 unique negative sentences in the DFKI and IMS corpora and randomly selecting an equal number of positive sentences. (The IMS corpus can have multiple labeled citations per sentence: the 172 negative citations from Table 1 occur in 122 sentences.) The final CITD corpus comprises this balanced set of 358 labeled citation sentences plus another 22,093 unlabeled citation sentences.
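As a minimal sketch of this balancing step (assuming a hypothetical list of citation records with a "label" field, not our exact scripts), one could write:

```python
import random

def balance_labeled(citations, seed=0):
    """Keep every negative citation sentence and randomly sample an equal
    number of positive ones (179 of each for CITD)."""
    rng = random.Random(seed)
    neg = [c for c in citations if c["label"] == "negative"]
    pos = [c for c in citations if c["label"] == "positive"]
    return neg + rng.sample(pos, len(neg))
```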
In our experiments, we restrict our features to unigrams and bigrams from the product review or citation context (i.e., the sentence containing the citation). This follows previous studies in domain adaptation [6, 17]. Chen et al. (2012) achieve state-of-the-art results on MDSD by testing the 5000 and 30,000 most frequent unigram and bigram features.
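A minimal sketch of this feature extraction with scikit-learn's CountVectorizer is shown below; it assumes raw sentence strings as input and builds the vocabulary over both domains, which may differ in detail from the preprocessing used by Chen et al. (2012).

```python
from sklearn.feature_extraction.text import CountVectorizer

def ngram_features(source_texts, target_texts, max_features=30000):
    """Unigram + bigram counts over a shared vocabulary, keeping only the
    `max_features` most frequent n-grams."""
    vec = CountVectorizer(ngram_range=(1, 2), max_features=max_features)
    X_all = vec.fit_transform(source_texts + target_texts)
    return X_all[:len(source_texts)], X_all[len(source_texts):]
```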
Previous work in citation classification has largely focused on identifying new features for improving classification accuracy. A significant amount of effort goes into engineering new features, in particular for identifying cue phrases, e.g., [30, 13]. However, there seems to be little consensus on which features help most for this task. For example, Abu-Jbara et al. (2013) and Jochim and Schütze (2012) find the list of polar words from Wilson et al. (2005) to be useful, and neither study lists dependency relations among its significant features. Athar (2011), on the other hand, reported a significant improvement using dependency relation features and found that the same list of polar words slightly hurt classification accuracy. The classifiers and feature implementations vary between these studies, but the problem remains that there is no clear set of features for citation polarity classification.
The lack of consensus on the most useful citation polarity features, coupled with the recent success of deep learning neural networks [10], further motivates our choice to limit our features to the n-grams available in the product review or citation context and not rely on external resources or tools for additional features.
For classification we use marginalized stacked denoising autoencoders (mSDA) from Chen et al. (2012) (we use their MATLAB implementation, available at http://www.cse.wustl.edu/~mchen/code/mSDA.tar) plus a linear SVM. mSDA takes the concept of denoising – introducing noise to make the autoencoder more robust – from Vincent et al. (2008), but does the optimization in closed form, thereby avoiding iterating over the input vectors to stochastically introduce noise. The result is faster run times and currently state-of-the-art performance on the MDSD, which makes it a good choice for our domain adaptation task. The mSDA implementation comes with LIBSVM, which we replace with LIBLINEAR [14] for faster run times with no decrease in accuracy. LIBLINEAR, with default settings, also serves as our baseline.
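For illustration, the following NumPy sketch reconstructs the closed-form mDA layer and its stacking from the equations in Chen et al. (2012); it is a simplified dense-input rendering of their MATLAB code (which is what we actually run), not a drop-in replacement.

```python
import numpy as np

def mda_layer(X, p):
    """One marginalized denoising autoencoder layer.
    X: d x n matrix (features x examples); p: feature corruption probability.
    Returns the closed-form mapping W and the hidden representation tanh(W Xb)."""
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])          # append a constant bias row
    q = np.full(d + 1, 1.0 - p)
    q[-1] = 1.0                                   # the bias is never corrupted
    S = Xb @ Xb.T                                 # scatter matrix
    Q = S * np.outer(q, q)
    np.fill_diagonal(Q, q * np.diag(S))
    P = S[:d, :] * q                              # expected cross-correlation
    W = np.linalg.solve(Q + 1e-5 * np.eye(d + 1), P.T).T   # W = P Q^{-1}
    return W, np.tanh(W @ Xb)

def msda(X, p=0.5, layers=5):
    """Stack mDA layers; the final representation concatenates the raw input
    with every layer's hidden output, as in the original mSDA recipe."""
    reps, H = [X], X
    for _ in range(layers):
        _, H = mda_layer(H, p)
        reps.append(H)
    return np.vstack(reps)
```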
Our initial experiments simply extend those of Chen et al. (2012) (and others who have used MDSD) by adding another domain, citations. We train on each of the domains from the MDSD – books, dvd, electronics, and kitchen – and test on the citation data. We split the labeled data 80/20 following Blitzer et al. (2007) (cf. Chen et al. (2012) train on all “labeled” data and test on the “unlabeled” data). These experiments should help answer two questions: does a larger amount of training data, even if out of domain, improve citation classification; and how well do the different product domains generalize to citations (i.e., which domains are most similar to citations)?
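A hypothetical end-to-end run for one source domain (e.g., books as source, citations as target) might look as follows; it reuses the msda sketch above, and all argument names are placeholders rather than our actual scripts.

```python
import numpy as np
from sklearn.svm import LinearSVC

def cross_domain_run(X_src, y_src, src_train_idx, X_tgt, y_tgt, tgt_labeled_idx):
    """Train on the 80% labeled split of one MDSD source domain and test on
    the balanced labeled CITD citations. X_src / X_tgt are d x n count
    matrices over a shared n-gram vocabulary; unlabeled target columns are
    included so mSDA can exploit them."""
    n_src = X_src.shape[1]
    H = msda(np.hstack([X_src, X_tgt]), p=0.5, layers=5)   # msda() from the sketch above
    H_src, H_tgt = H[:, :n_src], H[:, n_src:]
    clf = LinearSVC()                                      # stands in for LIBLINEAR defaults
    clf.fit(H_src[:, src_train_idx].T, y_src[src_train_idx])
    return clf.score(H_tgt[:, tgt_labeled_idx].T, y_tgt[tgt_labeled_idx])
```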
In contrast to previous work using the MDSD, much of the work in domain adaptation also leverages a small amount of labeled target data. In our second set of experiments, we follow the domain adaptation approaches described in [12] and train on product review and citation data before testing on citations.
| Domain | Baseline | All | Weight | Pred | LinInt | Augment | mSDA |
|---|---|---|---|---|---|---|---|
| books | 54.5 | 54.8 | 52.0 | 51.9 | 53.4 | 53.4 | 57.1 |
| dvd | 53.2 | 50.9 | 56.0 | 53.4 | 51.9 | 47.5 | 51.6 |
| electronics | 53.4 | 49.0 | 50.5 | 53.4 | 54.8 | 51.9 | 59.2 |
| kitchen | 47.9 | 48.8 | 50.7 | 53.4 | 52.6 | 49.2 | 50.1 |
| citations | 51.9 | – | – | – | – | – | 54.9 |

Table 2: Citation polarity classification results (%) on CITD when training on each source domain, comparing the SVM baseline, the supervised domain adaptation approaches of Daumé III (2007), and mSDA.
Our initial results show that using mSDA for domain adaptation to citations actually outperforms in-domain classification. In Figure 1 we compare citation classification with mSDA to the SVM baseline. Each pair of vertical bars represents training on a domain from the MDSD (e.g., books) and testing on CITD. The dark gray bar indicates the score for the SVM baseline using the 30,000 most frequent features, and the lighter gray bar shows the mSDA result. The black horizontal line indicates the score for in-domain citation classification, which is sometimes taken as the goal for domain adaptation. We can see that using a larger dataset, even if out of domain, does improve citation classification. For books, dvd, and electronics, even the SVM baseline improves on in-domain classification. mSDA does better than the baseline for all domains except dvd. Using a larger training set, together with mSDA, which also makes use of the unlabeled data, leads to the best results for citation classification.
In domain adaptation we would expect the domains most similar to the target to lead to the highest results. Like Dai et al. (2007), we measure the Kullback-Leibler divergence between the source and target domains’ distributions. According to this measure, citations are most similar to the books domain. Therefore, it is not surprising that training on books performs well on citations, and intuitively, among the domains in the Amazon dataset, a book review is most similar to a scientific citation. This makes the good mSDA results for electronics a bit more surprising.
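For reference, a smoothed KL divergence between the term distributions of two domains can be computed as in the sketch below; the exact estimator used by Dai et al. (2007) may differ.

```python
import numpy as np

def kl_divergence(counts_p, counts_q, eps=1e-9):
    """Smoothed KL(P || Q) between two term-count vectors, e.g., the summed
    n-gram counts of a source domain and of the citation corpus."""
    p = np.asarray(counts_p, dtype=float) + eps   # smoothing avoids zero probabilities
    q = np.asarray(counts_q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```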
The results in Section 4.1 are for semi-supervised domain adaptation: the case where we have some large annotated corpus (Amazon product reviews) and a large unannotated corpus (citations). There have been a number of other successful attempts at fully supervised domain adaptation, where it is assumed that some small amount of data is annotated in the target domain [7, 12, 18]. To see how mSDA compares to supervised domain adaptation we take the various approaches presented by Daumé III (2007). The results of this comparison can be seen in Table 2. Briefly, “All” trains on source and target data; “Weight” is the same as “All” except that instances may be weighted differently based on their domain (weights are chosen on a development set); “Pred” trains on the source data, makes predictions on the target data, and then trains on the target data with the predictions; “LinInt” linearly interpolates predictions using the source-only and target-only models (the interpolation parameter is chosen on a development set); “Augment” uses a larger feature set with source-specific and target-specific copies of features; see [12] for further details.
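For reference, the “Augment” approach (Daumé III's feature augmentation) can be sketched as follows; X is an instances-by-features matrix and the variable names in the usage comment are illustrative.

```python
from scipy import sparse

def augment(X, domain):
    """Feature augmentation from Daume III (2007): each instance keeps a
    shared copy of its features plus a domain-specific copy, giving the
    augmented space [shared, source-only, target-only]."""
    zeros = sparse.csr_matrix(X.shape)
    if domain == "source":
        return sparse.hstack([X, X, zeros]).tocsr()
    return sparse.hstack([X, zeros, X]).tocsr()

# Illustrative usage: stack augmented source reviews and labeled citations,
# then train the same linear SVM on the combined matrix.
# X_train = sparse.vstack([augment(X_books, "source"), augment(X_citd_labeled, "target")])
```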
We are only interested in citations as the target domain. Daumé's source-only baseline corresponds to the “Baseline” column for the books, dvd, electronics, and kitchen rows, while his target-only baseline corresponds to the citations row of the “Baseline” column in Table 2.
The semi-supervised mSDA performs quite well with respect to the fully supervised approaches, obtaining the best results for books and electronics, which are also the highest scores overall. Weight and Pred have the highest scores for dvd and kitchen respectively. Daumé III (2007) noted that the “Augment” algorithm performed best when the target-only results were better than the source-only results. When this was not the case in his experiments, i.e., for the treebank chunking task, both Weight and Pred were among the best approaches. In our experiments, training on source-only outperforms target-only, with the exception of the kitchen domain.
We have included the citations row to show the result of training only on the target data (51.9) and the improvement when all of the unlabeled data is used with mSDA (54.9).
These results are very promising. Although they are not quite as high as other published results for citation polarity [1] (that work uses a CRF model to identify the citation context, which yields an increase of 9.2 percent over a single-sentence citation context; our approach achieves a similar macro-F1 using only the citation sentence, albeit on a different corpus), we have shown that citation polarity classification can be improved by leveraging large amounts of annotated data from other domains and using a simple set of features.
mSDA and fully supervised approaches can also be straightforwardly combined. We do not present those results here due to space constraints. The combination led to mixed results: adding mSDA to the supervised approaches tended to improve over those approaches but results never exceeded the top mSDA numbers in Table 2.
Teufel et al. (2006b) introduced automatic citation function classification, with classes that can be grouped as positive, negative, and neutral. They relied in part on a manually compiled list of cue phrases, which cannot easily be transferred to other classification schemes or other scientific domains. Athar (2011) followed this work and was the first to specifically target polarity classification of scientific citations. He found that dependency tuples contributed the most significant improvement in results. Abu-Jbara et al. (2013) also look at both citation function and citation polarity. An important contribution of their work is a CRF sequence tagger that identifies the citation context, which significantly improves results over using only the citing sentence. Their feature analysis indicates that lexicons for negation, speculation, and polarity were most important for improving polarity classification.
Robust citation classification has been hindered by the relative lack of annotated data. In this paper we successfully use a large, annotated, out-of-domain corpus to improve citation polarity classification. Our approach uses a deep learning neural network for domain adaptation with labeled out-of-domain data and unlabeled in-domain data. This semi-supervised domain adaptation approach outperforms both in-domain citation polarity classification and fully supervised domain adaptation approaches.
Acknowledgments. We thank the DFG for funding this work (SPP 1335 Scalable Visual Analytics).