Unsupervised word segmentation (UWS) can provide domain-adaptive segmentation for statistical machine translation (SMT) without annotated data, and bilingual UWS can even optimize the segmentation for alignment. Monolingual UWS approaches that explicitly model word probabilities through Dirichlet process (DP) or Pitman-Yor process (PYP) models have achieved high accuracy, but their bilingual counterparts have only been applied to small corpora such as the basic travel expression corpus (BTEC) because of their computational complexity. This paper proposes an efficient unified PYP-based monolingual and bilingual UWS method. Experimental results show that the proposed method is comparable to supervised segmenters on the in-domain NIST OpenMT corpus, and yields a 0.96 BLEU increase relative to supervised segmenters on the out-of-domain NTCIR PatentMT corpus.
Many languages, especially Asian languages such as Chinese, Japanese and Myanmar, have no explicit word boundaries, thus word segmentation (WS), that is, segmenting the continuous texts of these languages into isolated words, is a prerequisite for many natural language processing applications including SMT.
Supervised approaches that train segmenters on manually segmented corpora are widely used [], but the criteria for manually annotating words are arbitrary, and the available annotated corpora are limited in both quantity and genre variety. For example, in machine translation there are various parallel corpora such as BTEC for tourism-related dialogues [] and PatentMT in the patent domain [] (http://ntcir.nii.ac.jp/PatentMT), but researchers working on Chinese-related tasks often use the Stanford Chinese segmenter [], which is trained on a small amount of annotated news text.
In contrast, UWS, spurred by the finding that infants are able to use statistical cues to determine word boundaries [], relies on statistical criteria instead of manually crafted standards. UWS learns from unsegmented raw text, which is available in large quantities, and thus has the potential to provide more accurate and adaptive segmentation than supervised approaches while requiring less development effort.
Approaches that explicitly model the probabilities of words [] significantly outperformed a heuristic approach [] on the monolingual Chinese SIGHAN-MSR corpus [], which inspired the work in this paper.
However, bilingual approaches that model word probabilities suffer from high computational complexity. Xu et al. [] proposed a bilingual method that adds alignment to the generative model, but were only able to test it on small-scale BTEC data. Nguyen et al. [] used the local best alignment to speed up Gibbs sampling in training, but the impact on accuracy was not explored.
This paper is dedicated to bilingual UWS on large-scale corpora to support SMT. To this end, we model bilingual UWS under a framework similar to that of monolingual UWS in order to improve efficiency, and replace Gibbs sampling with expectation-maximization (EM) in training.
We are aware that variational Bayes (VB) may be used to speed up the training of DP-based or PYP-based bilingual UWS. However, VB requires formulating the expectations of $K$-dimensional marginal distributions, where $K$ is the number of hidden variables. For UWS, the hidden variables are indicators that identify substrings of sentences in the corpus as words. These variables are large in number, it is not clear how to apply VB to UWS, and as far as the authors are aware there is no previous work on applying VB even to monolingual UWS. Therefore, we have not explored VB methods in this paper, but we do show that our method is superior to the existing methods.
The contributions of this paper include:
- state-of-the-art accuracy in monolingual UWS;
- the first bilingual UWS method practical for large corpora;
- improvement in BLEU scores compared with the supervised Stanford Chinese word segmenter.
This section describes our unified monolingual and bilingual UWS scheme. Table 1 lists the main notation. The set $\mathcal{F}$ is chosen to represent an unsegmented foreign language sentence (a sequence of characters), because an unsegmented sentence can be seen as the set of all possible segmentations of the sentence, each denoted $\boldsymbol{f}$, i.e. $\boldsymbol{f} \in \mathcal{F}$.
| Notation | Meaning |
|---|---|
| $\mathcal{F}$ | an unsegmented foreign sentence |
| $F_i^j$ | unsegmented substring of the underlying character string of $\mathcal{F}$ from $i$ to $j$ |
| $\boldsymbol{f}$ | a segmented foreign sentence |
| $f_k$ | the $k$-th foreign word |
| $\mathcal{M}_m$ | monolingual segmentation model |
| $P(w \mid \mathcal{M}_m)$ | probability of $w$ being a word according to $\mathcal{M}_m$ |
| $\boldsymbol{e}$ | a tokenized English sentence |
| $e_k$ | the $k$-th English word |
| $(\mathcal{F}, \boldsymbol{e})$ | a bilingual sentence pair |
| $\mathcal{M}_b$ | bilingual segmentation model |
| $P(w \mid \boldsymbol{e}, \mathcal{M}_b)$ | probability of $w$ being a word according to $\mathcal{M}_b$ given $\boldsymbol{e}$ |
Monolingual and bilingual WS can be formulated as follows, respectively,
$$\boldsymbol{f}^{\ast} = \arg\max_{\boldsymbol{f} \in \mathcal{F}} P(\boldsymbol{f} \mid \mathcal{M}_m) \qquad (1)$$
$$\boldsymbol{f}^{\ast} = \arg\max_{\boldsymbol{f} \in \mathcal{F}} \sum_{\boldsymbol{a}} P(\boldsymbol{f}, \boldsymbol{a} \mid \boldsymbol{e}, \mathcal{M}_b) \qquad (2)$$
where $\boldsymbol{a}$ is an alignment between $\boldsymbol{f}$ and $\boldsymbol{e}$. The English sentence $\boldsymbol{e}$ is used in the generation of a segmented sentence $\boldsymbol{f}$.
UWS learns the models $\mathcal{M}_m$ and $\mathcal{M}_b$ by maximizing the likelihood of the unsegmented corpus, formulated respectively as,
$$\mathcal{M}_m^{\ast} = \arg\max_{\mathcal{M}_m} \prod_{\mathcal{F}} \sum_{\boldsymbol{f} \in \mathcal{F}} P(\boldsymbol{f} \mid \mathcal{M}_m) \qquad (3)$$
$$\mathcal{M}_b^{\ast} = \arg\max_{\mathcal{M}_b} \prod_{(\mathcal{F}, \boldsymbol{e})} \sum_{\boldsymbol{f} \in \mathcal{F}} \sum_{\boldsymbol{a}} P(\boldsymbol{f}, \boldsymbol{a} \mid \boldsymbol{e}, \mathcal{M}_b) \qquad (4)$$
Our method of learning $\mathcal{M}_m$ and $\mathcal{M}_b$ proceeds in a manner similar to the EM algorithm. The following two operations are performed iteratively for each sentence (pair).
1. Exclude the previous expected counts of the current sentence (pair) from the model, and then derive the current sentence in all possible ways, calculating the new expected counts for the candidate words (see Section 2.1); that is, we calculate the expected probabilities of the substrings $F_i^j$ being words given the data excluding the current sentence, in a manner similar to the marginalization in the Gibbs sampling process which we are replacing;
2. Update the respective model $\mathcal{M}_m$ or $\mathcal{M}_b$ according to these expectations (see Section 2.2).
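To make the procedure concrete, the following sketch (in Python, not the authors' single-threaded C++ implementation) shows the loop structure implied by the two operations above; the `model` object with `add`/`subtract` of fractional counts and the `expectation_fn` helper are placeholders assumed for illustration.

```python
# Sketch of the EM-style training loop described above (illustrative only).
# `model` is assumed to expose add/subtract of fractional word counts;
# `expectation_fn(sentence, model, english)` stands for the dynamic program
# of Section 2.1 and returns {candidate word: expected count}.

def train(corpus, model, expectation_fn, iterations, english_side=None):
    # expected counts currently credited to each sentence (pair)
    credited = [dict() for _ in corpus]
    for _ in range(iterations):
        for idx, sentence in enumerate(corpus):
            english = english_side[idx] if english_side is not None else None
            # 1) exclude this sentence's previous expected counts from the model,
            #    then re-derive the sentence in all possible ways
            model.subtract(credited[idx])
            counts = expectation_fn(sentence, model, english)
            # 2) update the model according to the new expectations
            model.add(counts)
            credited[idx] = counts
    return model
```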
$P(F_i^j \mid \mathcal{F}, \mathcal{M}_m)$ is the marginal probability of all the possible $\boldsymbol{f} \in \mathcal{F}$ that contain $F_i^j$ as a word, which can be calculated efficiently through dynamic programming (the process is similar to the forward-backward algorithm used in training a hidden Markov model (HMM) []):
$$P(F_i^j \mid \mathcal{F}, \mathcal{M}_m) = \frac{\alpha_i \, P(F_i^j \mid \mathcal{M}_m)\, \beta_j}{\sum_{\boldsymbol{f} \in \mathcal{F}} P(\boldsymbol{f} \mid \mathcal{M}_m)}, \quad \alpha_i = \sum_{k=1}^{\min(i, L)} \alpha_{i-k}\, P(F_{i-k}^{\,i} \mid \mathcal{M}_m), \quad \beta_j = \sum_{k=1}^{\min(l-j, L)} P(F_j^{\,j+k} \mid \mathcal{M}_m)\, \beta_{j+k} \qquad (5)$$
where $L$ is the predefined maximum length of foreign language words, $l$ is the number of characters in $\mathcal{F}$, and $\alpha_i$ and $\beta_j$ are the forward and backward probabilities, respectively ($\alpha_0 = \beta_l = 1$). This section uses a unigram model for convenience of description, but the method can be extended to $n$-gram models.
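A minimal sketch of this computation for the unigram case is given below, assuming a `word_prob` callable that stands in for $P(\cdot \mid \mathcal{M}_m)$ (the PYP model of Section 2.2); it is an illustrative reconstruction rather than the authors' implementation.

```python
from collections import defaultdict

# Expected counts of substrings being words under a unigram model (Eq. 5),
# computed by a forward-backward pass; `word_prob(w)` stands for P(w | M_m).
def expected_word_counts(sentence, word_prob, max_len=4):
    n = len(sentence)
    # forward: alpha[i] = total probability of all segmentations of sentence[:i]
    alpha = [0.0] * (n + 1)
    alpha[0] = 1.0
    for i in range(1, n + 1):
        for k in range(1, min(i, max_len) + 1):
            alpha[i] += alpha[i - k] * word_prob(sentence[i - k:i])
    # backward: beta[j] = total probability of all segmentations of sentence[j:]
    beta = [0.0] * (n + 1)
    beta[n] = 1.0
    for j in range(n - 1, -1, -1):
        for k in range(1, min(n - j, max_len) + 1):
            beta[j] += word_prob(sentence[j:j + k]) * beta[j + k]
    total = alpha[n]  # marginal probability of the whole sentence (= beta[0])
    counts = defaultdict(float)
    if total == 0.0:
        return counts
    # posterior probability that sentence[i:j] occurs as a word
    for i in range(n):
        for j in range(i + 1, min(i + max_len, n) + 1):
            counts[sentence[i:j]] += alpha[i] * word_prob(sentence[i:j]) * beta[j] / total
    return counts
```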
$P(F_i^j \mid \mathcal{F}, \boldsymbol{e}, \mathcal{M}_b)$ is the marginal probability of all the possible $\boldsymbol{f} \in \mathcal{F}$ that contain $F_i^j$ as a word and are aligned with $\boldsymbol{e}$, formulated as:
$$P(F_i^j \mid \mathcal{F}, \boldsymbol{e}, \mathcal{M}_b) = \sum_{\substack{\boldsymbol{f} \in \mathcal{F} \\ F_i^j \in \boldsymbol{f}}} \sum_{\boldsymbol{a}} \prod_{k=1}^{m} P(f_k, a_k \mid \boldsymbol{e}, \mathcal{M}_b) \qquad (6)$$
where $m$ and $n$ are the numbers of foreign and English words, respectively, and $a_k$ is the position of the English word that is aligned to $f_k$ in the alignment $\boldsymbol{a}$. For the alignment we employ an approximation to IBM model 2 [] described below.
We define the conditional probability of $f_k$ given the corresponding English sentence $\boldsymbol{e}$ and the model $\mathcal{M}_b$ as:
$$P(f_k \mid \boldsymbol{e}, \mathcal{M}_b) = \sum_{a_k} P(f_k, a_k \mid \boldsymbol{e}, \mathcal{M}_b) \qquad (7)$$
Then, the previous dynamic programming method can be extended to the bilingual expectation by using $P(F_i^j \mid \boldsymbol{e}, \mathcal{M}_b)$ in place of $P(F_i^j \mid \mathcal{M}_m)$ in the forward and backward probabilities:
$$P(F_i^j \mid \mathcal{F}, \boldsymbol{e}, \mathcal{M}_b) = \frac{\alpha_i \, P(F_i^j \mid \boldsymbol{e}, \mathcal{M}_b)\, \beta_j}{\sum_{\boldsymbol{f} \in \mathcal{F}} P(\boldsymbol{f} \mid \boldsymbol{e}, \mathcal{M}_b)} \qquad (8)$$
Eq. 7 can be rewritten (as in IBM model 2):
$$P(f_k \mid \boldsymbol{e}, \mathcal{M}_b) = \sum_{a_k=0}^{n} P_a(a_k \mid k, m, n)\, P(f_k \mid e_{a_k}, \mathcal{M}_b) \qquad (9)$$
In order to maintain both speed and accuracy, the following window function is adopted for the alignment probability $P_a$:
$$P_a(a_k \mid i, l, n) =
\begin{cases}
P_0 & \text{if } a_k = 0\\[4pt]
\dfrac{1-P_0}{Z} & \text{if } a_k \neq 0 \text{ and } \left| a_k - \dfrac{i \cdot n}{l} \right| \le \lambda\\[4pt]
0 & \text{otherwise}
\end{cases}
\qquad (10)$$
where $l$ is the number of characters in $\mathcal{F}$ and the $i$-th character is the start of the word $f_k$; the character position is used because $k$ and $m$ are unknown during the dynamic programming computation. $\lambda$ is the window size, $P_0$ is the prior probability of an empty English word, and $Z$ ensures that all the items sum to 1.
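The bilingual word probability of Eqs. 9-10 can be sketched as follows; the `pair_prob` callable stands for $P(w \mid e, \mathcal{M}_b)$ of Section 2.2 (with `None` denoting the empty English word), and the uniform window shape is an assumption of this sketch rather than the authors' exact function.

```python
# Sketch of Eqs. (9)-(10): probability of a candidate word `w` that starts at
# character position `start` of an unsegmented sentence with `l` characters,
# given the tokenized English sentence. `win` and `p0` correspond to the window
# size (lambda) and the empty-word prior (P_0) above; `pair_prob(w, e)` stands
# for P(w | e, M_b), with e = None denoting the empty English word.
def bilingual_word_prob(w, start, l, english, pair_prob, win, p0):
    n = len(english)
    center = int(round(start * n / l)) if l > 0 else 0  # proportional position
    lo, hi = max(0, center - win), min(n - 1, center + win)
    prob = p0 * pair_prob(w, None)  # generated by the empty English word
    if hi >= lo:
        weight = (1.0 - p0) / (hi - lo + 1)  # window entries sum to 1 - p0
        for j in range(lo, hi + 1):
            prob += weight * pair_prob(w, english[j])
    return prob
```

Plugging this quantity into the forward-backward pass of Section 2.1 yields the bilingual expectations of Eq. 8.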
Inspired by [], we employ a Pitman-Yor process (PYP) model to build the segmentation model $\mathcal{M}_m$ or $\mathcal{M}_b$. The monolingual model is
$$P(w \mid \mathcal{M}_m) = \frac{\max(n_w - d,\ 0) + (\theta + d\,T)\,G_0(w)}{\theta + N} \qquad (11)$$
where $w$ is a foreign language word, $n_w$ is the observed count of $w$, $\theta$ is named the strength parameter, $G_0$ is named the base distribution of $w$, $d$ is the discount, $N$ is the total count of all observed words, and $T$ is the number of distinct observed words.
The bilingual model is
$$P(w \mid e, \mathcal{M}_b) = \frac{\max(n_{\langle w, e \rangle} - d,\ 0) + (\theta + d\,T_e)\,G_0(w)}{\theta + N_e} \qquad (12)$$

where $n_{\langle w, e \rangle}$ is the observed count of the foreign word $w$ aligned to the English word $e$, and $N_e$ and $T_e$ are the total count and the number of distinct foreign words aligned to $e$, respectively.
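A minimal sketch of the predictive probability in Eq. 11 with EM's fractional counts is shown below; the single-table-per-word-type approximation and the argument names are assumptions of this sketch. The bilingual model of Eq. 12 applies the same form to counts of a foreign word aligned to a given English word.

```python
# Sketch of the PYP predictive probability (Eq. 11) with expected counts.
# `counts` maps words to expected counts, `total` is their sum, `num_types`
# is the number of distinct observed words, `theta` and `d` are the strength
# and discount, and `base_prob(w)` stands for the base distribution G_0.
def pyp_word_prob(w, counts, total, num_types, theta, d, base_prob):
    n_w = counts.get(w, 0.0)
    discounted = max(n_w - d, 0.0)                           # discounted count of w
    new_word_mass = (theta + d * num_types) * base_prob(w)   # mass reserved for unseen words
    return (discounted + new_word_mass) / (theta + total)
```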
The computational complexity of our method is linear in the number of iterations, the size of the corpus, and the complexity of calculating the expectations on each sentence or sentence pair. In practical applications, the size of the corpus is fixed, and we found empirically that the number of iterations required by the proposed method for convergence is usually small (less than five iterations). We now look in more detail at the complexity of the expectation calculation in monolingual and bilingual models.
The monolingual expectation is calculated according to Eq. 5; the complexity is linear in the length of sentences and the square of the predefined maximum length of words. Thus its overall complexity is
$$O\!\left(n_{it} \cdot S \cdot \bar{l} \cdot L^{2}\right) \qquad (15)$$
where $n_{it}$ is the number of iterations, $S$ is the number of sentences, $\bar{l}$ is the average number of characters per sentence, and $L$ is the predefined maximum length of words.
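As a rough, illustrative calculation (an assumption of this note, taking the OpenMT06 figures from Table 2 as Chinese-side characters and $L = 4$):

$$\bar{l} \approx \frac{19{,}692{,}605}{437{,}004} \approx 45, \qquad \bar{l}\,L^{2} \approx 45 \times 16 = 720 \ \text{operations per sentence}, \qquad S\,\bar{l}\,L^{2} \approx 3.1 \times 10^{8} \ \text{operations per iteration.}$$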
For the monolingual bigram model, the number of states in the HMM is $L$ times that of the monolingual unigram model, as the states at a specific position of $\mathcal{F}$ are related not only to the length of the current word but also to the length of the word before it. Thus its complexity is $L$ times the unigram model's complexity:
$$O\!\left(n_{it} \cdot S \cdot \bar{l} \cdot L^{3}\right) \qquad (16)$$
In this section, the proposed method is first validated on monolingual segmentation tasks, and then evaluated in the context of SMT to study whether the translation quality, measured by BLEU, can be improved.
Two monolingual corpora and two bilingual corpora are used (Table 2). CHILDES [] is the most common test corpus for UWS methods. The SIGHAN-MSR corpus [] consists of manually segmented simplified Chinese news text, released in the SIGHAN bakeoff 2005 shared tasks.
The first bilingual corpus, OpenMT06, was used in the NIST Open Machine Translation 2006 Evaluation (http://www.itl.nist.gov/iad/mig//tests/mt/2006/). We removed the United Nations corpus and the traditional Chinese data sets from the constrained training resources. The data sets of NIST Eval 2002 to 2005 were used as the development data for MERT tuning []. This data set mainly consists of news text (it also contains a small number of web blogs). PatentMT9 is from the shared task of NTCIR-9 patent machine translation []. The training set consists of 1 million parallel sentences extracted from patent documents, and the development set and test set each consist of 2,000 sentences.
Corpus | Type | # Sentences | # Characters |
---|---|---|---|
CHILDES | Mono. | 9,790 | 95,809 |
SIGHAN-MSR | Mono. | 90,903 | 4,234,824 |
OpenMT06 | Biling. | 437,004 | 19,692,605 |
PatentMT9 | Biling. | 1,004,000 | 63,130,757 |
For the monolingual tasks, the F score against the gold annotation is adopted to measure the accuracy. The results reported in related papers are listed for comparison.
For the bilingual tasks, the publicly available Moses system [] with default settings is employed to perform machine translation, and BLEU [] is used to evaluate translation quality. Character-based segmentation, the LDC segmenter, and the Stanford Chinese segmenters were used as baseline methods.
The parameters are tuned on held-out data sets. The maximum length of foreign language words $L$ is set to 4. For the PYP model, the base distribution $G_0$ adopts the formula in [], and the strength parameter $\theta$ and the discount $d$ are likewise tuned on the held-out data sets.
For bilingual segmentation, the size of the alignment window $\lambda$ and the prior probability $P_0$ of foreign language words being generated by an empty English word were also tuned on the held-out data sets.
The training was started by assuming that there were no previous segmentations of each sentence (pair), and the number of iterations was fixed. It was set to 3 for the monolingual unigram model and 2 for the bilingual unigram model, which provided slightly higher BLEU scores on the development set than other settings. The monolingual bigram model, however, was slower to converge, so we started it from the segmentations produced by the unigram model and used 10 iterations.
In monolingual segmentation, the proposed methods with both unigram and bigram models were tested. Experimental results show that they are competitive with state-of-the-art baselines in both accuracy and speed (Table 3). Note that the comparison of speed is only for reference because the times are taken from the respective papers.
| Method | Accuracy (CHILD.) | Accuracy (MSR) | Time (CHILD.) | Time (MSR) |
|---|---|---|---|---|
| NPY (bigram) (a) | 0.750 | 0.802 | 17 m | – |
| NPY (trigram) (a) | 0.757 | 0.807 | – | – |
| HDP (bigram) (b) | 0.723 | – | 10 h | – |
| Fitness (c) | – | 0.667 | – | – |
| Prop. (unigram) | 0.729 | 0.804 | 3 s | 50 s |
| Prop. (bigram) | 0.774 | 0.806 | 15 s | 2,530 s |

(a) by Mochihashi et al. (2009); (b) by Goldwater et al. (2009); (c) by Zhao and Kit (2008).
Table 4 presents the BLEU scores for Moses using different segmentation methods. Each experiment was run three times. The proposed method with the monolingual bigram model performed poorly on the Chinese monolingual segmentation task; thus, it was not tested. We intended to test [], but found it impracticable on large-scale corpora.
The experimental results show that the proposed UWS methods are comparable to the Stanford segmenters on the OpenMT06 corpus, while achieving a 0.96 BLEU increase on the PatentMT9 corpus. This is because the PatentMT9 corpus is out-of-domain for the supervised segmenters: the CTB and PKU Stanford segmenters were both trained on annotated news text, which is the major domain of OpenMT06.
| Method | BLEU (OpenMT06) | BLEU (PatentMT9) |
|---|---|---|
| Character | 29.50 ± 0.03 | 28.36 ± 0.09 |
| LDC | 31.33 ± 0.10 | 30.22 ± 0.14 |
| Stanford (CTB) | 31.68 ± 0.25 | 30.77 ± 0.13 |
| Stanford (PKU) | 31.54 ± 0.13 | 30.86 ± 0.04 |
| Prop. (mono.) | 31.47 ± 0.18 | 31.62 ± 0.06 |
| Prop. (biling.) | 31.61 ± 0.14 | 31.73 ± 0.05 |
Table 5 presents the run times of the proposed methods on the bilingual corpora. The program is single threaded and implemented in C++. The time cost of the bilingual model is about 5 times that of the monolingual model, which is consistent with the complexity analysis in Section 3.
| Method | Time (OpenMT06) | Time (PatentMT9) |
|---|---|---|
| Prop. (mono.) | 28 m | 1 h 01 m |
| Prop. (biling.) | 2 h 25 m | 5 h 02 m |
This paper is devoted to large-scale Chinese UWS for SMT. An efficient unified monolingual and bilingual UWS method is proposed and applied to large-scale bilingual corpora.
Complexity analysis shows that our method is capable of scaling to large-scale corpora. This was verified by experiments on a corpus of 1 million sentence pairs, on which traditional MCMC approaches would struggle [].
The proposed method does not require any annotated data, yet an SMT system using it achieves performance comparable to that obtained with state-of-the-art supervised word segmenters trained on precious annotated data. Moreover, the proposed method yields a 0.96 BLEU improvement relative to supervised word segmenters on an out-of-domain corpus. Thus, we believe that the proposed method would benefit SMT for low-resource languages where annotated data are scarce, and would also find application in domains that differ greatly from those on which supervised word segmenters were trained.
In future research, we plan to improve bilingual UWS by applying VB and by integrating more accurate alignment models such as HMM-based models and IBM model 4.