Online Learning in Tensor Space

Yuan Cao   Sanjeev Khudanpur
Center for Language & Speech Processing and Human Language Technology Center of Excellence
The Johns Hopkins University
Baltimore, MD, USA, 21218
{yuan.cao, khudanpur}@jhu.edu
Abstract

We propose an online learning algorithm based on tensor-space models. A tensor-space model represents data in a compact way, and via rank-1 approximation the weight tensor can be made highly structured, resulting in a significantly smaller number of free parameters to be estimated than in comparable vector-space models. This regularizes the model complexity and makes the tensor model highly effective in situations where a large feature set is defined but very limited resources are available for training. We apply the proposed algorithm to a parsing task, and show that even with very little training data the learning algorithm based on a tensor model performs well and gives significantly better results than standard learning algorithms based on traditional vector-space models.


1 Introduction

Many NLP applications use models that try to incorporate a large number of linguistic features so that as much human knowledge of language as possible can be brought to bear on the (prediction) task. This also makes training the model parameters a challenging problem, since the amount of labeled training data is usually small compared to the size of the feature set, so the feature weights cannot be estimated reliably.

Most traditional models are linear models, in the sense that both the features of the data and model parameters are represented as vectors in a vector space. Many learning algorithms applied to NLP problems, such as the Perceptron [Collins2002], MIRA [Crammer et al.2006, McDonald et al.2005, Chiang et al.2008], PRO [Hopkins and May2011], RAMPION [Gimpel and Smith2012] etc., are based on vector-space models. Such models require learning individual feature weights directly, so that the number of parameters to be estimated is identical to the size of the feature set. When millions of features are used but the amount of labeled data is limited, it can be difficult to precisely estimate each feature weight.

In this paper, we shift the model from vector-space to tensor-space. Data can be represented in a compact and structured way using tensors as containers. Tensor representations have long been applied to computer vision problems [Hazan et al.2005, Shashua and Hazan2005] and information retrieval [Cai et al.2006a]. More recently, they have also been applied to parsing [Cohen and Collins2012, Cohen and Satta2013] and semantic analysis [Van de Cruys et al.2013]. A linear tensor model represents both features and weights in tensor-space, hence the weight tensor can be factorized and approximated by a linear sum of rank-1 tensors. This low-rank approximation imposes structural constraints on the feature weights and can be regarded as a form of regularization. With this representation, we no longer need to estimate individual feature weights directly but only a small number of “bases” instead. This property makes the tensor model very effective when training a large number of feature weights in a low-resource environment. On the other hand, tensor models have many more degrees of “design freedom” than vector space models. While this makes them very flexible, it also creates much difficulty in designing an optimal tensor structure for a given training set.

We give a detailed description of the tensor-space model in Section 2. Several issues that come with the tensor model construction are addressed in Section 3. A tensor weight learning algorithm is then proposed in Section 4. Finally, we present our experimental results on a parsing task, together with analysis, in Section 5.

2 Tensor Space Representation

Most of the learning algorithms for NLP problems are based on vector space models, which represent data as vectors $\boldsymbol{\phi} \in \mathbb{R}^n$, and try to learn feature weight vectors $\boldsymbol{w} \in \mathbb{R}^n$ such that a linear model $y = \boldsymbol{w} \cdot \boldsymbol{\phi}$ is able to discriminate between, say, good and bad hypotheses. While this is a natural way of representing data, it is not the only choice. Below, we reformulate the model from vector to tensor space.

2.1 Tensor Space Model

A tensor is a multidimensional array, and is a generalization of commonly used algebraic objects such as vectors and matrices. Specifically, a vector is a 1st order tensor, a matrix is a 2nd order tensor, and data organized as a rectangular cuboid is a 3rd order tensor, etc. In general, a Dth order tensor is represented as $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_D}$, and an entry in $\mathcal{T}$ is denoted by $\mathcal{T}_{i_1, i_2, \ldots, i_D}$. The different dimensions $1, 2, \ldots, D$ of a tensor are called its modes.

Using a Dth order tensor as a container, we can assign each feature of the task a D-dimensional index in the tensor and represent the data as tensors. Of course, shifting from a vector to a tensor representation entails several additional degrees of freedom, e.g., the order D of the tensor and the sizes $\{n_d\}_{d=1}^D$ of the modes, which must be addressed when selecting a tensor model. This will be done in Section 3.

2.2 Tensor Decomposition

Just as a matrix can be decomposed as a linear combination of several rank-1 matrices via SVD, tensors also admit decompositions into linear combinations of “rank-1” tensors. (The form of tensor decomposition used here is the CANDECOMP/PARAFAC (CP) decomposition [Kolda and Bader2009]. Another popular form is the Tucker decomposition, which decomposes a tensor into a core tensor multiplied by a matrix along each mode. We focus only on the CP decomposition in this paper.) A Dth order tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_D}$ is rank-1 if it can be written as the outer product of D vectors, i.e.

$$\mathcal{A} = \mathbf{a}^1 \circ \mathbf{a}^2 \circ \cdots \circ \mathbf{a}^D,$$

where $\mathbf{a}^d \in \mathbb{R}^{n_d}$, $1 \le d \le D$, and $\circ$ denotes the outer product. A Dth order tensor $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_D}$ can be factorized into a sum of rank-1 component tensors as

$$\mathcal{T} = \sum_{r=1}^{R} \mathcal{A}_r = \sum_{r=1}^{R} \mathbf{a}_r^1 \circ \mathbf{a}_r^2 \circ \cdots \circ \mathbf{a}_r^D,$$

where R, called the rank of the tensor, is the minimum number of rank-1 tensors whose sum equals $\mathcal{T}$. Via decomposition, one may approximate a tensor by the sum of the H major rank-1 tensors, with $H \le R$.
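As a concrete (and purely illustrative) numpy sketch of the CP form above, the following builds a 3rd order tensor as a sum of R rank-1 tensors, each the outer product of three vectors; the variable names are our own:

import numpy as np

# Build a 3rd order tensor of shape (n1, n2, n3) as a sum of R rank-1 tensors,
# each of which is the outer product of three vectors (the CP form).
n1, n2, n3, R = 4, 5, 6, 2
rng = np.random.default_rng(0)
components = [(rng.standard_normal(n1),
               rng.standard_normal(n2),
               rng.standard_normal(n3)) for _ in range(R)]
T = sum(np.einsum('i,j,k->ijk', a1, a2, a3) for a1, a2, a3 in components)
print(T.shape)  # (4, 5, 6); the rank of T is at most R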

2.3 Linear Tensor Model

In tensor space, a linear model may be written (ignoring a bias term) as

$$f(\boldsymbol{W}) = \boldsymbol{W} \odot \boldsymbol{\Phi},$$

where $\boldsymbol{\Phi} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_D}$ is the feature tensor, $\boldsymbol{W}$ is the corresponding weight tensor, and $\odot$ denotes the Hadamard product. If $\boldsymbol{W}$ is further decomposed as the sum of H major rank-1 component tensors, i.e. $\boldsymbol{W} = \sum_{h=1}^{H} \boldsymbol{w}_h^1 \circ \boldsymbol{w}_h^2 \circ \cdots \circ \boldsymbol{w}_h^D$, then

$$f(\boldsymbol{w}_1^1, \ldots, \boldsymbol{w}_1^D, \ldots, \boldsymbol{w}_H^1, \ldots, \boldsymbol{w}_H^D) = \sum_{h=1}^{H} \boldsymbol{\Phi} \times_1 \boldsymbol{w}_h^1 \times_2 \boldsymbol{w}_h^2 \cdots \times_D \boldsymbol{w}_h^D, \qquad (1)$$

where $\times_l$ is the $l$-mode product operator between a Dth order tensor $\mathcal{T}$ and a vector $\mathbf{a}$ of dimension $n_l$, yielding a (D-1)th order tensor such that

$$(\mathcal{T} \times_l \mathbf{a})_{i_1, \ldots, i_{l-1}, i_{l+1}, \ldots, i_D} = \sum_{i_l=1}^{n_l} \mathcal{T}_{i_1, \ldots, i_{l-1}, i_l, i_{l+1}, \ldots, i_D} \cdot a_{i_l}.$$

The linear tensor model is illustrated in Figure 1.

Figure 1: A 3rd order linear tensor model. The feature weight tensor 𝑾 can be decomposed as the sum of a sequence of rank-1 component tensors.
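To make the $l$-mode products concrete, here is a small numpy sketch (illustrative only; we read the Hadamard-product notation above as summing the elementwise product over all entries to produce a scalar score). It evaluates Equation (1) by repeated mode products and checks that the result equals the inner product of the feature tensor with the reconstructed weight tensor:

import numpy as np

def mode_product(T, v, axis):
    # l-mode product: contract tensor T with vector v along the given axis,
    # yielding a tensor with one fewer mode.
    return np.tensordot(T, v, axes=([axis], [0]))

rng = np.random.default_rng(1)
n1, n2, n3, H = 3, 4, 5, 2
Phi = rng.standard_normal((n1, n2, n3))                        # feature tensor
Ws = [(rng.standard_normal(n1), rng.standard_normal(n2),
       rng.standard_normal(n3)) for _ in range(H)]             # H rank-1 components

# Equation (1): contract Phi with each component's vectors, one mode at a time.
# After each contraction the remaining leading mode becomes axis 0.
score = sum(mode_product(mode_product(mode_product(Phi, w1, 0), w2, 0), w3, 0)
            for w1, w2, w3 in Ws)

# The same score via the reconstructed weight tensor W, summing W * Phi elementwise.
W = sum(np.einsum('i,j,k->ijk', w1, w2, w3) for w1, w2, w3 in Ws)
assert np.allclose(score, np.sum(W * Phi))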

2.4 Why Learning in Tensor Space?

So what is the advantage of learning with a tensor model instead of a vector model? Consider the case where we have defined 1,000,000 features for our task. A vector space linear model requires estimating 1,000,000 free parameters. However if we use a 2nd order tensor model, organize the features into a 1000×1000 matrix 𝚽, and use just one rank-1 matrix to approximate the weight tensor, then the linear model becomes

$$f(\boldsymbol{w}^1, \boldsymbol{w}^2) = {\boldsymbol{w}^1}^T \boldsymbol{\Phi}\, \boldsymbol{w}^2,$$

where $\boldsymbol{w}^1, \boldsymbol{w}^2 \in \mathbb{R}^{1000}$. That is to say, we now only need to estimate 2,000 parameters!

In general, if V features are defined for a learning problem, and we (i) organize the feature set as a tensor $\boldsymbol{\Phi} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_D}$ and (ii) use H rank-1 component tensors to approximate the corresponding target weight tensor, then the total number of parameters to be learned for this tensor model is $H \sum_{d=1}^{D} n_d$, which is usually much smaller than $V = \prod_{d=1}^{D} n_d$ for a traditional vector space model. Therefore we expect the tensor model to be more effective in a low-resource training environment.
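For a concrete sense of scale, anticipating the setup of Section 5: the roughly 1.33 million reranking features used there can be organized as a 3rd order tensor with mode sizes 110 x 110 x 111, so a model with H=1 has only 110 + 110 + 111 = 331 free parameters (662 for H=2), versus about 1.33 million for the corresponding vector-space model.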

Specifically, a vector space model assumes each feature weight to be a “free” parameter, and estimating them reliably can therefore be hard when training data are insufficient or the feature set is huge. By contrast, a linear tensor model only needs to learn $H \sum_{d=1}^{D} n_d$ “bases” of the feature weights instead of the individual weights directly. The weight corresponding to the feature $\Phi_{i_1, i_2, \ldots, i_D}$ in the tensor model is expressed as

$$w_{i_1, i_2, \ldots, i_D} = \sum_{h=1}^{H} w_{h,i_1}^1 w_{h,i_2}^2 \cdots w_{h,i_D}^D, \qquad (3)$$

where $w_{h,i_j}^j$ is the $i_j$-th element of the vector $\boldsymbol{w}_h^j$.

In other words, a true feature weight is now approximated by a set of bases. This reminds us of the well-known low-rank matrix approximation of images via SVD, and we are applying similar techniques to approximate target feature weights, which is made possible only after we shift from vector to tensor space models.
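As a quick numerical check of Equation (3) (again an illustrative numpy sketch, not part of the learning algorithm itself), a single feature weight can be read off directly from the bases, without ever materializing the full weight tensor:

import numpy as np

rng = np.random.default_rng(2)
n, H = (3, 4, 5), 2
bases = [[rng.standard_normal(nd) for nd in n] for _ in range(H)]  # w_h^1, ..., w_h^D

# Weight of the feature at index (i1, i2, i3), computed from the bases via Eq. (3).
idx = (1, 2, 3)
w_idx = sum(np.prod([bases[h][d][idx[d]] for d in range(len(n))]) for h in range(H))

# The same entry of the fully reconstructed weight tensor.
W = sum(np.einsum('i,j,k->ijk', *bases[h]) for h in range(H))
assert np.isclose(w_idx, W[idx])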

This approximation can be treated as a form of model regularization, since the weight tensor is represented in a constrained form and made highly structured via the rank-1 tensor approximation. Of course, as we reduce the model complexity, e.g. by choosing a smaller and smaller H, the model’s expressive ability is weakened at the same time. We will elaborate on this point in Section 3.1.

3 Tensor Model Construction

To apply a tensor model, we first need to convert the feature vector into a tensor 𝚽. Once the structure of 𝚽 is determined, the structure of 𝑾 is fixed as well. As mentioned in Section 2.1, a tensor model has many more degrees of “design freedom” than a vector model, which makes the problem of finding a good tensor structure a nontrivial one.

3.1 Tensor Order

The order of a tensor affects the model in two ways: the expressiveness of the model and the number of parameters to be estimated. We assume H=1 in the analysis below, noting that one can always add as many rank-1 component tensors as needed to approximate a tensor with arbitrary precision.

Obviously, the 1st order tensor (vector) model is the most expressive, since it is structureless and any arbitrary set of numbers can always be represented exactly as a vector. The 2nd order rank-1 tensor (rank-1 matrix) is less expressive, because not every set of numbers can be organized into a rank-1 matrix. In general, a Dth order rank-1 tensor is more expressive than a (D+1)th order rank-1 tensor, as a lower-order tensor imposes fewer structural constraints on the set of numbers it can express. We formally state this fact as follows:

Theorem 1.

A set of real numbers that can be represented by a (D+1)th order tensor 𝒬 can also be represented by a Dth order tensor 𝒫, provided 𝒫 and 𝒬 have the same volume. But the reverse is not true.

Proof.

See appendix. ∎

On the other hand, the tensor order also affects the number of parameters to be trained. Assuming that a Dth order tensor has equal size on each mode (we will elaborate on this point in Section 3.2) and the volume (number of entries) of the tensor is fixed at V, the total number of parameters of the model is $D V^{\frac{1}{D}}$. This is a convex function of D, and the minimum is reached at either $D^* = \lfloor \ln V \rfloor$ or $D^* = \lceil \ln V \rceil$ (the optimal integer solution can be determined simply by comparing the two function values).
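To see why, treat D as continuous and minimize $g(D) = D\, V^{1/D}$: since $\ln g(D) = \ln D + \frac{\ln V}{D}$, setting $\frac{d}{dD} \ln g(D) = \frac{1}{D} - \frac{\ln V}{D^2} = 0$ gives $D = \ln V$, so the continuous minimizer is $D = \ln V$ and the best integer order $D^*$ is one of its two integer neighbors.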

Therefore, as D increases from 1 to D*, we lose more and more of the expressive power of the model but reduce the number of parameters to be trained. However it would be a bad idea to choose a D beyond D*. The optimal tensor order depends on the nature of the actual problem, and we tune this hyper-parameter on a held-out set.

3.2 Mode Size

The size $n_d$ of each tensor mode, $d = 1, \ldots, D$, determines the structure of feature weights a tensor model can precisely represent, as well as the number of parameters to estimate (we also assume H=1 in the analysis below). For example, if the tensor order is 2 and the volume V is 12, then we can choose either $n_1 = 3, n_2 = 4$ or $n_1 = 2, n_2 = 6$. For $n_1 = 3, n_2 = 4$, the numbers that can be precisely represented are divided into 3 groups of 4 numbers each, where the groups are scaled versions of one another. Similarly, for $n_1 = 2, n_2 = 6$ the numbers can be divided into 2 groups with different scales. Obviously, the two possible choices of $(n_1, n_2)$ also lead to different numbers of free parameters (7 vs. 8).

Given D and V, there are many possible combinations of $n_d, d = 1, \ldots, D$, and the optimal combination should ideally be determined by the structure of the target feature weights. However, that structure is unknown before learning, and it would be impractical to try every possible combination of mode sizes. We therefore choose the mode sizes so as to minimize the total number of parameters, i.e. we solve the problem:

$$\min_{n_1, \ldots, n_D} \sum_{d=1}^{D} n_d \qquad \text{s.t.} \quad \prod_{d=1}^{D} n_d = V$$

The optimal solution is attained when $n_1 = n_2 = \cdots = n_D = V^{\frac{1}{D}}$. Of course $V^{\frac{1}{D}}$ is not guaranteed to be an integer, so we choose $n_d = \lfloor V^{\frac{1}{D}} \rfloor$ or $\lceil V^{\frac{1}{D}} \rceil$, $d = 1, \ldots, D$, such that $\prod_{d=1}^{D} n_d \ge V$ and $\prod_{d=1}^{D} n_d - V$ is minimized. The $\prod_{d=1}^{D} n_d - V$ extra entries of the tensor correspond to no features and are used only for padding. Since each $n_d$ can take only two possible values, we can simply enumerate all $2^D$ (usually a small number of) combinations and pick the one satisfying the conditions above, as sketched below. This fully determines $n_1, \ldots, n_D$.
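A minimal Python sketch of this mode-size selection (names are our own; it simply enumerates the $2^D$ floor/ceiling combinations) might look as follows:

import itertools
import math

def choose_mode_sizes(V, D):
    # Pick n_1, ..., n_D from {floor(V^(1/D)), ceil(V^(1/D))} such that their
    # product is at least V and the padding (product minus V) is minimal.
    base = V ** (1.0 / D)
    lo, hi = math.floor(base), math.ceil(base)
    best = None
    for sizes in itertools.product((lo, hi), repeat=D):
        volume = math.prod(sizes)
        if volume >= V and (best is None or volume < math.prod(best)):
            best = sizes
    return best

print(choose_mode_sizes(12, 3))        # -> (2, 2, 3), as in the example of Figure 3
print(choose_mode_sizes(1_334_025, 2)) # -> (1155, 1155)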

Here we are only following the principle of minimizing the number of parameters. While this strategy may work well with a small amount of training data, it is not guaranteed to be the best strategy in all cases; in particular, when more data is available we might want to increase the number of parameters, making the model more complex so that the data can be modeled more precisely. Ideally, the mode sizes should adapt to the amount of training data as well as to the properties of the target weights. A theoretically guaranteed optimal approach to determining the mode sizes remains an open problem and will be explored in our future work.

3.3 Number of Rank-1 Tensors

The impact of using H>1 rank-1 tensors is obvious: a larger H increases the model complexity and makes the model more expressive, since we are able to approximate the target weight tensor with smaller error. As a trade-off, the number of parameters and the training complexity increase. To find the optimal value of H for a given problem, we also tune this hyper-parameter on a held-out set.

3.4 Vector to Tensor Mapping

Finally, we need to find a way to map the original feature vector to a tensor, i.e. to associate each feature with an index in the tensor. Assuming the tensor volume V is the same as the number of features, there are V! possible mappings, an intractable number even for modest-sized feature sets, which makes a brute-force search impractical. When choosing a mapping, however, we would like to arrange the features so that the corresponding target weight tensor has an approximately low-rank structure, and can therefore be well approximated by only a few rank-1 component tensors.

Unfortunately we have no knowledge of the target weights in advance, since they are what we need to learn after all. As a way out, we first run a simple vector-model based learning algorithm (say the Perceptron) on the training data to estimate a weight vector, which serves as a “surrogate” weight vector. We then use this surrogate vector to guide the design of the mapping. Ideally, we would like to find a permutation of the surrogate weights that maps them to a tensor of rank as low as possible. However, matrix rank minimization is in general a hard problem [Fazel2002]. Therefore, we follow the approximate algorithm given in Figure 2, whose main idea is illustrated via an example in Figure 3.

Input: tensor order D, tensor volume V, mode sizes $n_d, d = 1, \ldots, D$, surrogate weight vector $\boldsymbol{v}$.

1. Let $\boldsymbol{v}^+ = [v_1^+, \ldots, v_p^+]$ be the non-negative part of $\boldsymbol{v}$ and $\boldsymbol{v}^- = [v_1^-, \ldots, v_q^-]$ be the negative part of $\boldsymbol{v}$.
2. Let $\tilde{\boldsymbol{v}}^+ = \mathrm{sort}(\boldsymbol{v}^+)$ in descending order and $\tilde{\boldsymbol{v}}^- = \mathrm{sort}(\boldsymbol{v}^-)$ in ascending order.
3. Set $u = V / n_D$, $e = p - \mathrm{mod}(p, u)$, $f = q - \mathrm{mod}(q, u)$.
4. Construct the vector $\mathbf{X} = [\tilde{v}_1^+, \ldots, \tilde{v}_e^+, \tilde{v}_1^-, \ldots, \tilde{v}_f^-, \tilde{v}_{e+1}^+, \ldots, \tilde{v}_p^+, \tilde{v}_{f+1}^-, \ldots, \tilde{v}_q^-]$.
5. Map $X_a, a = 1, \ldots, p+q$, to the tensor entry $\mathcal{T}_{i_1, \ldots, i_D}$ such that

$$a = \sum_{d=1}^{D} (i_d - 1)\, l_{d-1} + 1, \quad \text{where } l_d = l_{d-1} n_d \text{ and } l_0 = 1.$$

Figure 2: Mapping a surrogate weight vector to a tensor

Figure 3: Illustration of the algorithm
Figure 4: Algorithm for mapping a surrogate weight vector X to a tensor. Figure 2 gives the algorithm; Figure 3 illustrates it by mapping a vector of length V=12 to a tensor with mode sizes $(n_1, n_2, n_3) = (2, 2, 3)$. The bars $X_i$ represent the surrogate weights, after separately sorting the positive and negative parts, and the labels along a path of the tree correspond to the tensor index of the weight represented by the leaf resulting from the mapping.

Basically, what the algorithm does is to divide the surrogate weights into hierarchical groups such that groups on the same level are approximately proportional to each other. Using these groups as units we are able to “fill” the tensor in a hierarchical way. The resulting tensor will have an approximate low-rank structure, provided that the sorted feature weights have roughly group-wise proportional relations.
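The following is a compact Python rendering of the mapping in Figure 2 (an illustrative sketch under our own naming; the index formula in step 5 amounts to column-major linearization, so the final step is simply a Fortran-order reshape):

import numpy as np

def map_to_tensor(v, mode_sizes):
    # Map a surrogate weight vector v to a tensor with the given mode sizes,
    # following the grouping idea of Figure 2.
    v = np.asarray(v, dtype=float)
    pos = np.sort(v[v >= 0])[::-1]            # non-negative part, descending
    neg = np.sort(v[v < 0])                   # negative part, ascending
    p, q = len(pos), len(neg)
    volume = int(np.prod(mode_sizes))
    u = volume // mode_sizes[-1]              # u = V / n_D
    e, f = p - p % u, q - q % u
    X = np.concatenate([pos[:e], neg[:f], pos[e:], neg[f:]])
    X = np.pad(X, (0, volume - len(X)))       # unused (padding) entries stay zero
    # a = sum_d (i_d - 1) l_{d-1} + 1 with l_d = l_{d-1} n_d is column-major
    # (Fortran-order) linear indexing, so a reshape completes the mapping.
    return X.reshape(mode_sizes, order='F')

T = map_to_tensor(np.random.default_rng(3).standard_normal(12), (2, 2, 3))
print(T.shape)  # (2, 2, 3)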

For comparison, we also experimented with a trivial solution that maps the features to the tensor simply in sequential order, namely $\phi_0$ is mapped to $\Phi_{0,0,\ldots,0}$, $\phi_1$ to $\Phi_{0,0,\ldots,1}$, etc. This of course ignores correlations between features, since the original feature order in the vector may be entirely arbitrary, and this strategy is not expected to be a good vector-to-tensor mapping.

4 Online Learning Algorithm

We now turn to the problem of learning the feature weight tensor. Here we propose an online learning algorithm similar to MIRA but modified to accommodate tensor models.

Let the model be $f(\boldsymbol{T}) = \boldsymbol{T} \odot \boldsymbol{\Phi}(x, y)$, where $\boldsymbol{T} = \sum_{h=1}^{H} \boldsymbol{w}_h^1 \circ \boldsymbol{w}_h^2 \circ \cdots \circ \boldsymbol{w}_h^D$ is the weight tensor and $\boldsymbol{\Phi}(x, y)$ is the feature tensor for an input-output pair $(x, y)$. Training samples $(x_i, y_i), i = 1, \ldots, m$, where $x_i$ is the input and $y_i$ is the reference or oracle hypothesis, are fed to the weight learning algorithm in sequential order. At time t, a prediction $z_t$ is made by the model $\boldsymbol{T}_t$ from a set of candidates $\mathcal{Z}(x_t)$, and the model updates the weight tensor by solving the following problem:

$$\min_{\boldsymbol{T} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_D}} \frac{1}{2} \|\boldsymbol{T} - \boldsymbol{T}_t\|^2 + C\xi \qquad (4)$$
$$\text{s.t.} \quad \ell_t \le \xi, \quad \xi \ge 0,$$

where 𝑻 is a decomposed weight tensor and

$$\ell_t = \boldsymbol{T} \odot \boldsymbol{\Phi}(x_t, z_t) - \boldsymbol{T} \odot \boldsymbol{\Phi}(x_t, y_t) + \rho(y_t, z_t)$$

is the structured hinge loss.

This problem setting follows the same “passive-aggressive” strategy as the original MIRA. To optimize the vectors $\boldsymbol{w}_h^d, h = 1, \ldots, H, d = 1, \ldots, D$, we use an iterative strategy similar to the one proposed in [Cai et al.2006b]. The idea is that, instead of optimizing all $\boldsymbol{w}_h^d$ together, we optimize $\boldsymbol{w}_1^1, \boldsymbol{w}_1^2, \ldots, \boldsymbol{w}_H^D$ in turn; while one vector is being updated, the rest are fixed. For the problem setting given above, each of the sub-problems to be solved is convex, and according to [Cai et al.2006b] the objective function value decreases after each individual weight update, so this procedure eventually converges.

We now give this procedure in more detail. Denote the weight vector of the dth mode of the hth component tensor at time t as $\boldsymbol{w}_{h,t}^d$. We update the vectors in turn in the following order: $\boldsymbol{w}_{1,t}^1, \ldots, \boldsymbol{w}_{1,t}^D, \boldsymbol{w}_{2,t}^1, \ldots, \boldsymbol{w}_{2,t}^D, \ldots, \boldsymbol{w}_{H,t}^1, \ldots, \boldsymbol{w}_{H,t}^D$. Once a vector has been updated, it is fixed for subsequent updates.

By way of notation, define

$$\boldsymbol{W}_{h,t}^d = \boldsymbol{w}_{h,t+1}^1 \circ \cdots \circ \boldsymbol{w}_{h,t+1}^{d-1} \circ \boldsymbol{w}_{h,t}^d \circ \cdots \circ \boldsymbol{w}_{h,t}^D \qquad (5)$$
(and let $\boldsymbol{W}_{h,t}^{D+1} \triangleq \boldsymbol{w}_{h,t+1}^1 \circ \cdots \circ \boldsymbol{w}_{h,t+1}^D$),
$$\hat{\boldsymbol{W}}_{h,t}^d = \boldsymbol{w}_{h,t+1}^1 \circ \cdots \circ \boldsymbol{w}_{h,t+1}^{d-1} \circ \boldsymbol{w}^d \circ \boldsymbol{w}_{h,t}^{d+1} \circ \cdots \circ \boldsymbol{w}_{h,t}^D \qquad (6)$$
(where $\boldsymbol{w}^d \in \mathbb{R}^{n_d}$),
$$\boldsymbol{T}_{h,t}^d = \sum_{h'=1}^{h-1} \boldsymbol{W}_{h',t}^{D+1} + \boldsymbol{W}_{h,t}^d + \sum_{h'=h+1}^{H} \boldsymbol{W}_{h',t}^1 \qquad (7)$$
$$\hat{\boldsymbol{T}}_{h,t}^d = \sum_{h'=1}^{h-1} \boldsymbol{W}_{h',t}^{D+1} + \hat{\boldsymbol{W}}_{h,t}^d + \sum_{h'=h+1}^{H} \boldsymbol{W}_{h',t}^1 \qquad (8)$$
$$\boldsymbol{\phi}_{h,t}^d(x, y) = \boldsymbol{\Phi}(x, y) \times_1 \boldsymbol{w}_{h,t+1}^1 \cdots \times_{d-1} \boldsymbol{w}_{h,t+1}^{d-1} \times_{d+1} \boldsymbol{w}_{h,t}^{d+1} \cdots \times_D \boldsymbol{w}_{h,t}^D \qquad (9)$$

In order to update $\boldsymbol{w}_{h,t}^d$ to obtain $\boldsymbol{w}_{h,t+1}^d$, the sub-problem to solve is:

$$\min_{\boldsymbol{w}^d \in \mathbb{R}^{n_d}} \frac{1}{2} \|\hat{\boldsymbol{T}}_{h,t}^d - \boldsymbol{T}_{h,t}^d\|^2 + C\xi$$
$$= \min_{\boldsymbol{w}^d \in \mathbb{R}^{n_d}} \frac{1}{2} \|\hat{\boldsymbol{W}}_{h,t}^d - \boldsymbol{W}_{h,t}^d\|^2 + C\xi$$
$$= \min_{\boldsymbol{w}^d \in \mathbb{R}^{n_d}} \frac{1}{2} \beta_{h,t+1}^1 \cdots \beta_{h,t+1}^{d-1} \beta_{h,t}^{d+1} \cdots \beta_{h,t}^D \, \|\boldsymbol{w}^d - \boldsymbol{w}_{h,t}^d\|^2 + C\xi$$
$$\text{s.t.} \quad \ell_{h,t}^d \le \xi, \quad \xi \ge 0,$$

where

$$\beta_{h,t}^d = \|\boldsymbol{w}_{h,t}^d\|^2$$
$$\ell_{h,t}^d = \hat{\boldsymbol{T}}_{h,t}^d \odot \boldsymbol{\Phi}(x_t, z_t) - \hat{\boldsymbol{T}}_{h,t}^d \odot \boldsymbol{\Phi}(x_t, y_t) + \rho(y_t, z_t)$$
$$= \boldsymbol{w}^d \cdot \bigl(\boldsymbol{\phi}_{h,t}^d(x_t, z_t) - \boldsymbol{\phi}_{h,t}^d(x_t, y_t)\bigr) - \Bigl(\sum_{h'=1}^{h-1} \boldsymbol{W}_{h',t}^{D+1} + \sum_{h'=h+1}^{H} \boldsymbol{W}_{h',t}^1\Bigr) \odot \bigl(\boldsymbol{\Phi}(x_t, y_t) - \boldsymbol{\Phi}(x_t, z_t)\bigr) + \rho(y_t, z_t)$$

Letting

$$\Delta\boldsymbol{\phi}_{h,t}^d \triangleq \boldsymbol{\phi}_{h,t}^d(x_t, y_t) - \boldsymbol{\phi}_{h,t}^d(x_t, z_t)$$

and

$$s_{h,t}^d \triangleq \Bigl(\sum_{h'=1}^{h-1} \boldsymbol{W}_{h',t}^{D+1} + \sum_{h'=h+1}^{H} \boldsymbol{W}_{h',t}^1\Bigr) \odot \bigl(\boldsymbol{\Phi}(x_t, y_t) - \boldsymbol{\Phi}(x_t, z_t)\bigr),$$

we may compactly write

$$\ell_{h,t}^d = \rho(y_t, z_t) - s_{h,t}^d - \boldsymbol{w}^d \cdot \Delta\boldsymbol{\phi}_{h,t}^d.$$

This convex optimization problem is just like the original MIRA and may be solved in a similar way. The update for $\boldsymbol{w}_{h,t}^d$ is derived as

$$\boldsymbol{w}_{h,t+1}^d = \boldsymbol{w}_{h,t}^d + \tau \Delta\boldsymbol{\phi}_{h,t}^d,$$
$$\tau = \min\Bigl\{C, \; \frac{\rho(y_t, z_t) - \boldsymbol{T}_{h,t}^d \odot \bigl(\boldsymbol{\Phi}(x_t, y_t) - \boldsymbol{\Phi}(x_t, z_t)\bigr)}{\|\Delta\boldsymbol{\phi}_{h,t}^d\|^2}\Bigr\}. \qquad (10)$$

The initial vectors $\boldsymbol{w}_{h,1}^d$ cannot all be made zero, since otherwise the $l$-mode products in Equation (9) would yield all-zero $\boldsymbol{\phi}_{h,t}^d(x, y)$ and the model would never get a chance to be updated. Therefore, we initialize the entries of $\boldsymbol{w}_{h,1}^d$ uniformly such that the Frobenius norm of the weight tensor $\boldsymbol{W}$ is unity.

We call the algorithm above “Tensor-MIRA” and abbreviate it as T-MIRA.
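To make the whole procedure concrete, here is a minimal, self-contained Python sketch of one T-MIRA round on a single training example (the variable names, the einsum-based contractions, and the explicit "no update unless the margin is violated" guard are our own choices; this is not the authors' reference implementation):

import numpy as np

def contract_except(Phi, vecs, d):
    # phi_{h}^d: multiply Phi by vecs[dd] along every mode dd != d,
    # leaving a vector of length n_d on mode d (Equation (9)).
    letters = 'abcdefghij'[:Phi.ndim]
    operands, subs = [Phi], [letters]
    for dd, v in enumerate(vecs):
        if dd != d:
            operands.append(v)
            subs.append(letters[dd])
    return np.einsum(','.join(subs) + '->' + letters[d], *operands)

def reconstruct(Ws):
    # Full weight tensor T = sum_h w_h^1 o ... o w_h^D.
    D = len(Ws[0])
    letters = 'abcdefghij'[:D]
    sub = ','.join(letters) + '->' + letters
    return sum(np.einsum(sub, *w) for w in Ws)

def tmira_round(Ws, Phi_y, Phi_z, rho, C):
    # One pass over all mode vectors of all rank-1 components, in turn.
    # Ws is a list of H lists of D vectors, updated in place, so vectors that
    # were already updated are automatically used by the later sub-problems.
    dPhi = Phi_y - Phi_z
    for h in range(len(Ws)):
        for d in range(len(Ws[h])):
            dphi = contract_except(Phi_y, Ws[h], d) - contract_except(Phi_z, Ws[h], d)
            loss = rho - np.sum(reconstruct(Ws) * dPhi)   # numerator of Equation (10)
            denom = float(np.dot(dphi, dphi))
            if loss > 0 and denom > 0:                    # passive: skip if margin already satisfied
                tau = min(C, loss / denom)                # aggressive: capped step size
                Ws[h][d] = Ws[h][d] + tau * dphi
    return Ws

# Toy usage: 3rd order features, H = 2 components, uniform non-zero initialization.
rng = np.random.default_rng(4)
n, H = (3, 4, 5), 2
Ws = [[np.full(nd, 0.1) for nd in n] for _ in range(H)]
Ws = tmira_round(Ws, rng.standard_normal(n), rng.standard_normal(n), rho=1.0, C=0.01)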

5 Experiments

In this section we show empirical results of the training algorithm on a parsing task. We used the Charniak parser [Charniak et al.2005] for our experiments, and used the proposed algorithm to train the reranking feature weights. For comparison, we also investigated training the reranker with the Perceptron and MIRA.

5.1 Experimental Settings

To simulate a low-resource training environment, our training sets were selected from sections 2-9 of the Penn WSJ treebank; section 24 was used as the held-out set and section 23 as the evaluation set. We applied the default settings of the parser. In all, around V=1.33 million features are defined for reranking, and the n-best size for reranking is set to 50. We selected the parse with the highest f-score from the 50-best list as the oracle.

We would like to observe from the experiments how the amount of training data as well as different settings of the tensor degrees of freedom affects the algorithm performance. Therefore we tried all combinations of the following experimental parameters:

Parameters Settings
Training data (m) Sec. 2, 2-3, 2-5, 2-9
Tensor order (D) 2, 3, 4
# rank-1 tensors (H) 1, 2, 3
Vec. to tensor mapping approximate, sequential

Here “approximate” and “sequential” mean using, respectively, the algorithm given in Figure 4 and the sequential mapping mentioned in Section 3.4. According to the strategy given in Section 3.2, once the tensor order and the number of features are fixed, the mode sizes and the total number of parameters to estimate are fixed as well, as shown in the table below:

D Size of modes Number of parameters
2 1155×1155 2310
3 110×110×111 331
4 34×34×34×34 136

5.2 Results and Analysis

The f-scores on the held-out and evaluation sets given by T-MIRA, as well as the Perceptron and MIRA baselines, are given in Table 5. From the results, we make the following observations:

  1.

    When very few labeled data are available for training (compared with the number of features), T-MIRA performs much better than the vector-based models MIRA and Perceptron. However, as the amount of training data increases, the advantage of T-MIRA fades away and the vector-based models catch up. This is because the weight tensors learned by T-MIRA are highly structured, which significantly reduces model and training complexity and makes learning very effective in a low-resource environment; as the amount of data increases, the more complex and expressive vector-based models adapt to the data better, whereas further improvement of the tensor model is impeded by its structural constraints, making it insensitive to the increase in training data.

  2.

    To further contrast the behavior of T-MIRA, MIRA and Perceptron, we plot the f-scores on both the training and held-out sets given by these algorithms after each training epoch in Figure 7. The plots are for the experimental setting with mapping=approximate, # rank-1 tensors=2, tensor order=2, training data=sections 2-3. It is clearly seen that both MIRA and Perceptron fit the training set much better than T-MIRA. Nevertheless, with a huge number of parameters fit to a limited amount of data, they tend to over-fit and give much worse results on the held-out set than T-MIRA does.

    As an aside, observe that MIRA consistently outperformed Perceptron, as expected.

  3.

    Properties of linear tensor model: The heuristic vector-to-tensor mapping strategy given by Figure 4 gives consistently better results than the sequential mapping strategy, as expected.

    To further compare the two strategies, in Figure 8 we plot the 20 largest singular values of the matrices to which the surrogate weights (given by the Perceptron after one epoch) are mapped by the two strategies (from the experiment with training data from sections 2-5). From the contrast between the largest and the 2nd-largest singular values, it can be seen that the matrix generated by the first strategy is much closer to a low-rank structure than that generated by the second. Therefore, the performance of T-MIRA is influenced significantly by the way features are mapped to the tensor: if the corresponding target weight tensor has internal structure that makes it approximately low-rank, the learning procedure becomes more effective.

    The best results are consistently given by 2nd order tensor models, and the differences between the 3rd and 4th order tensors are not significant. As discussed in Section 3.1, although 3rd and 4th order tensors have fewer parameters, the benefit of reduced training complexity does not compensate for the loss of expressiveness. A 2nd order tensor already reduces the number of parameters from the original 1.33 million to only 2310, and it does not help to further reduce the number of parameters using higher order tensors.

  4.

    As the amount of training data increases, there is a trend for the best results to come from models with more rank-1 component tensors. Adding more rank-1 tensors increases the model’s complexity and expressive power, making the model more adaptive to larger data sets.

Table 1: Training data: Section 2 only

Mapping      Rank-1 tensors   Order 2   Order 3   Order 4
Approximate  1                89.43     89.16     89.22
Approximate  2                89.16     89.21     89.24
Approximate  3                89.27     89.14     89.24
Sequential   1                89.21     88.90     88.89
Sequential   2                89.13     88.88     88.88
Sequential   3                89.15     88.87     88.99
Evaluation score: 89.83 (approximate), 89.69 (sequential)
MIRA: 88.57    Perceptron: 88.23

Table 2: Training data: Sections 2-3

Mapping      Rank-1 tensors   Order 2   Order 3   Order 4
Approximate  1                89.26     89.06     89.12
Approximate  2                89.33     89.11     89.19
Approximate  3                89.18     89.14     89.15
Sequential   1                89.20     89.01     88.82
Sequential   2                89.24     88.94     88.95
Sequential   3                89.19     88.91     88.98
Evaluation score: 90.02 (approximate), 89.82 (sequential)
MIRA: 89.00    Perceptron: 88.59

Table 3: Training data: Sections 2-5

Mapping      Rank-1 tensors   Order 2   Order 3   Order 4
Approximate  1                89.40     89.44     89.17
Approximate  2                89.50     89.37     89.18
Approximate  3                89.47     89.32     89.18
Sequential   1                89.23     89.03     88.93
Sequential   2                89.24     88.98     88.94
Sequential   3                89.16     89.01     88.85
Evaluation score: 89.96 (approximate), 89.78 (sequential)
MIRA: 89.49    Perceptron: 89.10

Table 4: Training data: Sections 2-9

Mapping      Rank-1 tensors   Order 2   Order 3   Order 4
Approximate  2                89.43     89.23     89.06
Approximate  3                89.37     89.23     89.10
Approximate  4                89.44     89.22     89.06
Sequential   2                89.21     88.92     88.94
Sequential   3                89.23     88.94     88.93
Sequential   4                89.23     88.95     88.93
Evaluation score: 89.95 (approximate), 89.84 (sequential)
MIRA: 89.95    Perceptron: 89.77

Table 5: Parsing f-scores. Tables 1 to 4 correspond to training data of increasing size. The upper part of each table shows the T-MIRA results under different settings; the lower part shows the MIRA and Perceptron baselines. The evaluation scores come from the settings indicated by the best held-out scores.
Figure 5: Training set.
Figure 6: Held-out set.
Figure 7: f-scores given by the three algorithms on the training and held-out sets (see text for the setting).
Figure 8: The top 20 singular values of the surrogate weight matrices given by the two mapping algorithms.

6 Conclusion and Future Work

In this paper, we reformulated the traditional linear vector-space models as tensor-space models, and proposed an online learning algorithm named Tensor-MIRA. A tensor-space model is a compact representation of data, and via rank-1 tensor approximation the weight tensor can be made highly structured, so the number of parameters to be trained is significantly reduced. This can be regarded as a form of model regularization. Therefore, compared with traditional vector-space models, learning in tensor space is very effective when a large feature set is defined but only a small amount of training data is available. Our experimental results corroborate this argument.

As mentioned in Section 3.2, one interesting problem that merits further investigation is how to determine the optimal mode sizes. The challenge of applying a tensor model comes from finding a proper tensor structure for a given problem, and the key to solving this problem is to find a balance between the model complexity (indicated by the order and mode sizes) and the number of parameters. Developing a theoretically guaranteed approach to finding the optimal structure for a given task would make the tensor model not only perform well in low-resource environments, but also adapt well to larger data sets.

7 Acknowledgements

This work was partially supported by IBM via DARPA/BOLT contract number HR0011-12-C-0015 and by the National Science Foundation via award number IIS-0963898.

References

  • [Cai et al.2006a] Deng Cai, Xiaofei He, and Jiawei Han. 2006. Tensor Space Model for Document Analysis. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 625–626.
  • [Cai et al.2006b] Deng Cai, Xiaofei He, and Jiawei Han. 2006. Learning with Tensor Representation. Technical Report, Department of Computer Science, University of Illinois at Urbana-Champaign.
  • [Charniak et al.2005] Eugene Charniak and Mark Johnson. 2005. Coarse-to-fine n-Best Parsing and MaxEnt Discriminative Reranking. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), 173–180.
  • [Chiang et al.2008] David Chiang, Yuval Marton, and Philip Resnik. 2008. Online Large-Margin Training of Syntactic and Structural Translation Features. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP), 224–233.
  • [Cohen and Collins2012] Shay Cohen and Michael Collins. 2012. Tensor Decomposition for Fast Parsing with Latent-Variable PCFGs. In Proceedings of Advances in Neural Information Processing Systems (NIPS).
  • [Cohen and Satta2013] Shay Cohen and Giorgio Satta. 2013. Approximate PCFG Parsing Using Tensor Decomposition. In Proceedings of NAACL-HLT, 487–496.
  • [Collins2002] Michael Collins. 2002. Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP), 10:1–8.
  • [Crammer et al.2006] Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online Passive-Aggressive Algorithms. Journal of Machine Learning Research (JMLR), 7:551–585.
  • [Fazel2002] Maryam Fazel. 2002. Matrix Rank Minimization with Applications. PhD thesis, Stanford University.
  • [Gimpel and Smith2012] Kevin Gimpel and Noah A. Smith. 2012. Structured Ramp Loss Minimization for Machine Translation. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL), 221–231.
  • [Hazan et al.2005] Tamir Hazan, Simon Polak, and Amnon Shashua. 2005. Sparse Image Coding Using a 3D Non-negative Tensor Factorization. In Proceedings of the International Conference on Computer Vision (ICCV).
  • [Hopkins and May2011] Mark Hopkins and Jonathan May. 2011. Tuning as Ranking. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP), 1352–1362.
  • [Kolda and Bader2009] Tamara Kolda and Brett Bader. 2009. Tensor Decompositions and Applications. SIAM Review, 51:455–500.
  • [McDonald et al.2005] Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online Large-Margin Training of Dependency Parsers. In Proceedings of the 43rd Annual Meeting of the ACL, 91–98.
  • [Shashua and Hazan2005] Amnon Shashua and Tamir Hazan. 2005. Non-Negative Tensor Factorization with Applications to Statistics and Computer Vision. In Proceedings of the International Conference on Machine Learning (ICML).
  • [Van de Cruys et al.2013] Tim Van de Cruys, Thierry Poibeau, and Anna Korhonen. 2013. A Tensor-based Factorization Model of Semantic Compositionality. In Proceedings of NAACL-HLT, 1142–1151.

Appendix A Proof of Theorem 1

Proof.

For D=1, it is obvious that if a set of real numbers $\{x_1, \ldots, x_n\}$ can be represented by a rank-1 matrix, it can always be represented by a vector, but the reverse is not true.

For $D > 1$, if $\{x_1, \ldots, x_n\}$ can be represented by $\mathcal{P} = \mathbf{p}^1 \circ \mathbf{p}^2 \circ \cdots \circ \mathbf{p}^D$, namely $x_i = \mathcal{P}_{i_1, \ldots, i_D} = \prod_{d=1}^{D} p_{i_d}^d$, then for any component vector in mode d,

$$[p_1^d, p_2^d, \ldots, p_{n_d^p}^d] = [s_1^d p_1^d, s_2^d p_1^d, \ldots, s_{n_d^p}^d p_1^d],$$

where $n_d^p$ is the size of mode d of $\mathcal{P}$, $s_j^d$ is a constant, and $s_j^d = \mathcal{P}_{i_1, \ldots, i_{d-1}, j, i_{d+1}, \ldots, i_D} / \mathcal{P}_{i_1, \ldots, i_{d-1}, 1, i_{d+1}, \ldots, i_D}$. Therefore

$$x_i = \mathcal{P}_{i_1, \ldots, i_D} = x_{1, \ldots, 1} \prod_{d=1}^{D} s_{i_d}^d \qquad (11)$$

and this representation is unique for a given D (up to the ordering of the $\mathbf{p}^j$ and of the $s_j^d$ within $\mathbf{p}^j$, which simply assigns $\{x_1, \ldots, x_n\}$ different indices in the tensor), due to the pairwise proportionality constraints imposed by $x_i / x_j$, $i, j = 1, \ldots, n$.

If the $x_i$ can also be represented by $\mathcal{Q}$, then $x_i = \mathcal{Q}_{i_1, \ldots, i_{D+1}} = x_{1, \ldots, 1} \prod_{d=1}^{D+1} t_{i_d}^d$, where $t_j^d$ is defined analogously to $s_j^d$. Then it must be the case that

$$\exists\, d_1, d_2 \in \{1, \ldots, D+1\},\ d \in \{1, \ldots, D\},\ d_1 \ne d_2 \quad \text{s.t.}$$
$$t_{i_{d_1}}^{d_1} t_{i_{d_2}}^{d_2} = s_{i_d}^d, \qquad (12)$$
$$t_{i_{d_a}}^{d_a} = s_{i_{d_b}}^{d_b}, \quad d_a \ne d_1, d_2, \quad d_b \ne d,$$

since otherwise $\{x_1, \ldots, x_n\}$ would be represented by a different set of factors than those given in Equation (11).

Therefore, in order for the tensor $\mathcal{Q}$ to represent the same set of real numbers that $\mathcal{P}$ represents, there needs to exist a vector $[s_1^d, \ldots, s_{n_d}^d]$ that can be represented by a rank-1 matrix, as indicated by Equation (12), which is in general not guaranteed.

On the other hand, if $\{x_1, \ldots, x_n\}$ can be represented by $\mathcal{Q}$, namely

$$x_i = \mathcal{Q}_{i_1, \ldots, i_{D+1}} = \prod_{d=1}^{D+1} q_{i_d}^d,$$

then we can simply pick $d_1 \in \{1, \ldots, D\}$, $d_2 = d_1 + 1$ and let

$$\mathbf{q} = [q_1^{d_1} q_1^{d_2},\; q_1^{d_1} q_2^{d_2},\; \ldots,\; q_{n_{d_1}^q}^{d_1} q_{n_{d_2}^q}^{d_2}]$$

and

$$\mathcal{Q}' = \mathbf{q}^1 \circ \cdots \circ \mathbf{q}^{d_1 - 1} \circ \mathbf{q} \circ \mathbf{q}^{d_2 + 1} \circ \cdots \circ \mathbf{q}^{D+1}.$$

Hence $\{x_1, \ldots, x_n\}$ can also be represented by the Dth order tensor $\mathcal{Q}'$. ∎