American Journal of Applied Mathematics
Volume 4, Issue 2, April 2016, Pages: 99-104

Modularity Component Analysis versus Principal Component Analysis

Hansi Jiang, Carl Meyer

North Carolina State University, Raleigh, NC, USA


*Corresponding author

To cite this article:

Hansi Jiang, Carl Meyer. Modularity Component Analysis versus Principal Component Analysis. American Journal of Applied Mathematics. Vol. 4, No. 2, 2016, pp. 99-104. doi: 10.11648/j.ajam.20160402.15

Received: January 23, 2016; Accepted: March 20, 2016; Published: April 11, 2016


Abstract: In this paper the exact linear relation between the leading eigenvectors of the modularity matrix and the singular vectors of an uncentered data matrix is developed. Based on this analysis the concept of a modularity component is defined, and its properties are developed. It is shown that modularity component analysis can be used to cluster data much as traditional principal component analysis is used, except that modularity component analysis does not require data centering.

Keywords: Data Clustering, Graph Partitioning, Modularity Matrix, Principal Component Analysis


1. Introduction

The purpose of this paper is to present a development of modularity components that are analogous to principal components [8]. It will be shown that modularity components have characteristics similar to those of principal components in the sense that modularity components provide for data analysis in much the same manner as do principal components. In particular, just as in the case of principal components, modularity components are shown to be mutually orthogonal, and raw data can be projected onto the directions of a number of modularity components to reveal patterns and clusters in the data. However, a drawback of principal component analysis (PCA) is that it generally requires centering or standardizing the data before determining principal components. On the other hand, utilizing modularity components does not require data to be centered in order to accurately extract important information. Among other things, this means that sparsity in the original data is preserved, whereas centering data naturally destroys inherent sparsity.

Moreover, we will complete the comparison of modularity components with principal components by showing that the component that maximizes the modularity function of the uncentered data as defined in [14] can replace the principal component that maximizes the variance in the centered data. Finally, just as each succeeding principal component has maximal variance with the constraint that it is orthogonal to all previous principal components, each succeeding modularity component has maximal modularity with the constraint that it is orthogonal to all prior modularity components.

Our modularity components are derived from the concept of modularity introduced by Newman and Girvan in [15], and further explained by Newman in [14]. The modularity partitioning method starts with an adjacency matrix or similarity matrix and aims to partition a graph by maximizing the modularity. Assuming the graph contains n nodes, the modularity is defined by

Q(s) = (1/(4m)) s^T B s,               (1)

where m is the number of edges in the graph, B is the modularity matrix defined below, and s is a vector with entries +1 or -1 that indicates the group membership of each node and is chosen to maximize Q. Since the number of edges in a given graph is constant, the multiplier 1/(4m) is often dropped for simplicity, and the modularity becomes

Q(s) = s^T B s.               (2)

The modularity matrix is defined by

B = A - d d^T / (2m),               (3)

where A is an adjacency matrix or similarity matrix, and d = (d_1, d_2, ..., d_n)^T is the vector containing the degrees of the nodes. It is proven in [14] that Q is maximized by the eigenvector corresponding to the largest eigenvalue of B. Like the spectral clustering method [18], the modularity clustering method also uses the signs of the entries in the dominant eigenvector to partition graphs.
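
To make the two-way split concrete, here is a minimal Python sketch (our illustration, not code from [14]; the function name and the toy graph are ours) that forms B as in (3) and partitions the nodes by the signs of the dominant eigenvector of B.

import numpy as np

def modularity_bipartition(A):
    """Split a graph in two using the dominant eigenvector of B = A - d d^T / (2m)."""
    d = A.sum(axis=1)                 # degree vector d = A e
    two_m = d.sum()                   # 2m = e^T d
    B = A - np.outer(d, d) / two_m    # modularity matrix, Eq. (3)
    _, eigvecs = np.linalg.eigh(B)    # eigenvalues returned in ascending order
    b1 = eigvecs[:, -1]               # eigenvector of the largest eigenvalue
    return np.where(b1 >= 0, 1, -1)   # signs of the entries give the two groups

# Two triangles joined by one edge: nodes {0, 1, 2} and {3, 4, 5}.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(modularity_bipartition(A))      # the two triangles end up in different groups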

The modularity partitioning algorithm has been widely applied and discussed. For instance, it has been applied to reveal human brain functional networks [12] and ecological networks [5], and used in image processing [11]. Blondel et al. [1] proposed a heuristic that can reveal the community structure for large networks. Rotta and Noack [17] compared several heuristics in maximizing modularity. DasGupta and Desai [4] studied the complexity of modularity clustering. The limitations of the modularity maximization technique are discussed in [6] and [9].

By the modularity algorithm [14], a graph is partitioned into two parts, and a hierarchy can be built by iteratively calculating the B matrices and their dominant eigenvectors. Repetitively partitioning a graph into two subsets may be inefficient and does not utilize information in subdominant eigenvectors. And while there is a connection between graph partitioning and data analysis, they are not strictly equivalent, because extracting information from raw data by means of graph partitioning necessarily requires the knowledge or creation of a similarity or adjacency matrix, which in turn can only group nodes. For the purpose of data analysis, it is more desirable to analyze raw data without involving a similarity matrix. Modularity analysis can be executed directly from an uncentered raw data matrix X in R^{p x n} (p the number of attributes, n the number of data points) by redefining the modularity matrix to be

B = X^T X - d d^T / (2m),   with d = X^T X e, 2m = e^T d, and e the vector of all ones,               (4)

but in practice X^T X need not be explicitly computed. In addition to using only raw data, this formulation allows the creation of modularity components that are directly analogous to principal components created from centered data. In what follows, let A = X^T X, where the rows of X may be normalized when different units are involved.
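
Because B in (4) is a dense n x n matrix even when X is sparse, a practical implementation can avoid forming it. The sketch below (our illustration under the definitions above; all names are ours) applies B to a vector using only products with X and X^T and hands the resulting operator to an iterative eigensolver, so the sparsity of X is never destroyed.

import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def modularity_operator(X):
    """Matrix-free operator for B = X^T X - d d^T / (2m), Eq. (4)."""
    p, n = X.shape
    e = np.ones(n)
    d = X.T @ (X @ e)                     # d = X^T X e, computed without forming X^T X
    two_m = e @ d                         # 2m = e^T d
    matvec = lambda s: X.T @ (X @ s) - d * (d @ s) / two_m
    return LinearOperator((n, n), matvec=matvec, dtype=float)

# Example: dominant eigenpair of B for random uncentered data (p = 20 attributes, n = 200 points).
rng = np.random.default_rng(0)
X = rng.random((20, 200))
beta1, b1 = eigsh(modularity_operator(X), k=1, which='LA')
labels = np.sign(b1[:, 0])                # sign pattern suggests a two-way split of the data points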

The paper is organized as follows. In Section 2 we give the definition of modularity components. In Section 3 properties of the modularity components are established. Section 4 contains some conclusions.

2. Definition of Modularity Components

In this section we will give the definition of the modularity components. Before doing that we will prove a couple of lemmas about the relation between the eigenvectors of a particular kind of similarity matrix that can be fed into the modularity algorithm and the singular vectors of the data matrix. The lemmas will help us to define the modularity components. Suppose the SVD of the uncentered data matrix X is X = U Σ V^T and that there are k nonzero singular values. Then

A = X^T X = V Σ^T Σ V^T               (5)

has k positive eigenvalues. From the interlacing theorem mentioned in [2] and [19], it is guaranteed that the largest k - 1 eigenvalues of B = A - d d^T / (2m) are positive. If the k nonzero eigenvalues of A are simple, then the eigenvectors of B corresponding to the largest k - 1 eigenvalues can be written as linear combinations of the eigenvectors of A. The proof of the following lemma can be found in Appendix A.

Lemma 2.1. Suppose the largest k - 1 eigenvalues of B are β_1 > β_2 > ... > β_{k-1} and the nonzero eigenvalues of A = X^T X are σ_1^2 > σ_2^2 > ... > σ_k^2. Further suppose that for 1 ≤ i ≤ k - 1 we have β_i ≠ σ_i^2 and β_i ≠ σ_{i+1}^2. Then the eigenvector b_i of B can be written (up to scaling) as

b_i = ∑_{j=1}^{k} c_{ij} v_j,               (6)

where

c_{ij} = (v_j^T d) / (σ_j^2 - β_i) = (σ_j^2 v_j^T e) / (σ_j^2 - β_i).               (7)

The point of this lemma is to realize that the vector b_i is a linear combination of the v_i. The next lemma gives the linear expression of the vectors (X^†)^T b_i in terms of the u_i, where X^† is the Moore-Penrose inverse of X. There are practical cases where our assumptions in Lemma 2.1 hold true, and examples are given in Appendix B.

Lemma 2.2. With the assumptions in Lemma 2.1, we have

(X^†)^T b_i = ∑_{j=1}^{k} (c_{ij} / σ_j) u_j,               (8)

where σ_j is the j-th nonzero singular value of X.

Proof.

(X^†)^T b_i = (X^†)^T ∑_{j=1}^{k} c_{ij} v_j = ∑_{j=1}^{k} c_{ij} (X^†)^T v_j = ∑_{j=1}^{k} (c_{ij} / σ_j) u_j,

since (X^†)^T v_j = U (Σ^†)^T V^T v_j = (1/σ_j) u_j for 1 ≤ j ≤ k.

Lemma 2.2 shows that if b_i can be written as a linear combination of the v_j, then the vector (X^†)^T b_i can be written as a linear combination of the u_j. Next we give the formal definition of the modularity components.

Definition 2.3. Suppose X in R^{p x n} is the data matrix and b_i is the unit eigenvector corresponding to the i-th largest eigenvalue of B, where

B = X^T X - d d^T / (2m),   d = X^T X e,   2m = e^T d.               (9)

Under the assumptions in Lemma 2.1, let

y_i = (X^†)^T b_i = ∑_{j=1}^{k} (c_{ij} / σ_j) u_j.               (10)

The i-th modularity component is defined to be

m_i = y_i / ||y_i||_2.               (11)

By the two lemmas, it can be seen that as long as the assumptions in Lemma 2.1 are met, the modularity components are well-defined, and the definition of m_i is based on the linear combination expressing (X^†)^T b_i in terms of the u_i. In the next section some important properties of the modularity components are established.
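
To make Definition 2.3 concrete, the following Python sketch (our code, not from the paper; it uses the normalization in (11) and illustrative names) computes the first r modularity components of a raw data matrix directly from the eigenvectors of B and the pseudoinverse of X.

import numpy as np

def modularity_components(X, r):
    """Return the first r modularity components m_i (columns of M) and the eigenvectors b_i of B."""
    p, n = X.shape
    A = X.T @ X
    e = np.ones(n)
    d = A @ e
    B = A - np.outer(d, d) / (e @ d)          # Eq. (9)
    betas, bvecs = np.linalg.eigh(B)          # eigenvalues in ascending order
    order = np.argsort(betas)[::-1][:r]       # indices of the r largest eigenvalues of B
    Xpinv_T = np.linalg.pinv(X).T             # (X^dagger)^T
    M = np.empty((p, r))
    for col, idx in enumerate(order):
        y = Xpinv_T @ bvecs[:, idx]           # y_i = (X^dagger)^T b_i, Eq. (10)
        M[:, col] = y / np.linalg.norm(y)     # m_i = y_i / ||y_i||, Eq. (11)
    return M, bvecs[:, order]

# Usage: project raw (uncentered) data onto the first two modularity components,
# analogously to PCA scores but with no centering step.
rng = np.random.default_rng(2)
X = np.hstack([rng.normal(0, 1, (10, 50)), rng.normal(3, 1, (10, 50))])
M, Bvecs = modularity_components(X, 2)
scores = M.T @ X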

3. Properties of the Modularity Components

In this section some properties of modularity components will be discussed. It will be seen that the properties of modularity components are similar to those of principal components. First we will prove that the modularity components, as long as they are well-defined, are perpendicular to each other. Then we will prove that if we project the uncentered data onto the span of a modularity component, the projection is a scalar multiple of the rank-one matrix formed from that component and the corresponding eigenvector of B. Finally, we will prove that the ‘importance’ of each modularity component is given by its corresponding eigenvalue of B. The first modularity component has the largest modularity, and the i-th modularity component has the largest modularity with the constraint that it is perpendicular to the preceding i - 1 modularity components.

Theorem 3.1. With the assumptions in Lemma 2.1, suppose X in R^{p x n} is the unnormalized data matrix, and let m_i, m_j be modularity components as in Definition 2.3. Suppose b_i, b_j are the unit eigenvectors of B corresponding to eigenvalues β_i and β_j, β_i ≠ β_j, 1 ≤ i, j ≤ k - 1, respectively. Then we have

b_i^T X^† (X^†)^T b_j = 0,               (12)

and m_i^T m_j = 0 for i ≠ j.

Proof. It is sufficient to prove that b_i^T X^† (X^†)^T b_j = 0 for i ≠ j, since X^† (X^†)^T = (X^T X)^† = A^†. From B b_j = β_j b_j we have

A b_j - d (d^T b_j) / (2m) = β_j b_j,

d^T b_j = e^T A b_j = β_j e^T b_j + (e^T d)(d^T b_j) / (2m) = β_j e^T b_j + d^T b_j,

where e is a column vector with all ones, so that d = A e and e^T d = 2m. Therefore,

β_j e^T b_j = 0.

Since β_j > 0 is always true, we have

e^T b_j = 0.

Consequently, multiplying A b_j - d (d^T b_j) / (2m) = β_j b_j on the left by A^† gives

β_j A^† b_j = A^† A b_j - (d^T b_j / (2m)) A^† A e.

Therefore b_i^T A^† b_j can be evaluated term by term. Since b_i^T b_j = 0, A^† A b_j = b_j, A^† A b_i = b_i, and b_i^T e = 0, we have

β_j b_i^T A^† b_j = b_i^T b_j - (d^T b_j / (2m)) (A^† A b_i)^T e = 0,

so

β_j > 0 implies b_i^T X^† (X^†)^T b_j = 0, and hence m_i^T m_j = 0, for i ≠ j. The same computation with i = j (and b_j^T b_j = 1) shows that b_j^T X^† (X^†)^T b_j = 1/β_j, a fact that will be used in the proof of Theorem 3.2.

From Theorem 3.1, it can be seen that the modularity components are orthogonal to each other. Next we prove that the projection of the uncentered data onto the span of m_i is a scalar multiple of the rank-one matrix m_i b_i^T.

Theorem 3.2. With the assumptions in Lemma 2.1, let P_i = m_i m_i^T be the projector onto the span of m_i. Then we have

P_i X = √β_i m_i b_i^T.               (13)

Proof. Write y_i = (X^†)^T b_i, so that m_i = y_i / ||y_i||_2. Since b_i lies in span{v_1, ..., v_k}, we have X^† X b_i = b_i and hence y_i^T X = b_i^T X^† X = b_i^T. Moreover, ||y_i||_2^2 = b_i^T X^† (X^†)^T b_i = 1/β_i by the computation at the end of the proof of Theorem 3.1. Therefore

P_i X = m_i m_i^T X = (1 / ||y_i||_2^2) y_i y_i^T X = β_i y_i b_i^T = √β_i m_i b_i^T.

This property is similar to that of principal components in the sense that if we project the data onto the span of the components, we get a scalar multiple of a fixed vector for each data point, and the signs of the entries of the eigenvectors b_i can give the clusters in the data. Finally, we can prove that if we look at X in the space perpendicular to m_1, ..., m_{i-1}, then the projection onto the span of m_i will give us the largest modularity, and the projection is just √β_i m_i b_i^T.
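
Both properties are easy to verify numerically. Reusing the modularity_components sketch from Section 2 (our illustrative code, with the same synthetic X), the check below confirms that the components are orthonormal (Theorem 3.1) and that projecting X onto span{m_1} produces a rank-one matrix proportional to m_1 b_1^T (Theorem 3.2).

import numpy as np

M, Bvecs = modularity_components(X, 2)

# Theorem 3.1: the modularity components are mutually orthogonal (here orthonormal).
print(np.allclose(M.T @ M, np.eye(2), atol=1e-8))

# Theorem 3.2: P_1 X = sqrt(beta_1) m_1 b_1^T for the projector P_1 = m_1 m_1^T.
m1, b1 = M[:, [0]], Bvecs[:, [0]]
P1X = m1 @ (m1.T @ X)
scale = (m1.T @ X @ b1).item()        # equals sqrt(beta_1) up to the sign conventions of b_1
print(np.allclose(P1X, scale * (m1 @ b1.T), atol=1e-8))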

Theorem 3.3. With the assumptions in Lemma 2.1,

max_{s^T s = 1} s^T B s = b_1^T B b_1 = β_1.               (14)

Moreover, let X_1 = X and, for i ≥ 2, let

X_i = (I - ∑_{j=1}^{i-1} m_j m_j^T) X,               (15)

and let d_i and B_i be defined from X_i in the same way that d and B are defined from X. Under these conditions, β_i is the largest eigenvalue of B_i, and b_i is the corresponding eigenvector of B_i.

Proof. For i = 2, since it is proved in [14] that b_1 is the vector s that maximizes Q in Equation (2), we have

max_{s^T s = 1} s^T B s = b_1^T B b_1 = β_1.

By Theorem 3.1,

m_1^T m_j = 0 for j ≥ 2.

Therefore the remaining modularity components lie in the orthogonal complement of span{m_1}. Then X_2 is defined by

X_2 = (I - m_1 m_1^T) X.

Since I - m_1 m_1^T is idempotent, we have

X_2^T X_2 = X^T (I - m_1 m_1^T) X = X^T X - X^T m_1 m_1^T X.

By Theorem 3.2, we know that m_1 m_1^T X = P_1 X = √β_1 m_1 b_1^T, so X^T m_1 m_1^T X = β_1 b_1 b_1^T and then

X_2^T X_2 = X^T X - β_1 b_1 b_1^T.

Plug X_2 into

B_2 = X_2^T X_2 - d_2 d_2^T / (e^T d_2),   where d_2 = X_2^T X_2 e,

and notice that b_1^T e = 0 (because b_1 and e are eigenvectors corresponding to different eigenvalues of B) to produce

d_2 = d,   e^T d_2 = 2m,   and   B_2 = B - β_1 b_1 b_1^T.

So by Brauer's theorem [13] (Exercise 7.1.17), the eigenpairs of B_2 are the ones of B with (β_1; b_1) replaced by an eigenpair with zero eigenvalue. So β_2 is the largest eigenvalue of B_2 and b_2 is the eigenvector of B_2 corresponding to β_2.

For the cases when i > 2, let

P_{i-1} = ∑_{j=1}^{i-1} m_j m_j^T.

Notice that b_{i-1} is the vector s that maximizes s^T B_{i-1} s. Then by similar steps we can prove that m_{i-1}^T X_{i-1} = √β_{i-1} b_{i-1}^T. Then X_i can be defined by

X_i = (I - m_{i-1} m_{i-1}^T) X_{i-1} = (I - P_{i-1}) X.

It is easy to see that I - P_{i-1} is idempotent. Then we have

X_i^T X_i = X^T (I - P_{i-1}) X = X^T X - ∑_{j=1}^{i-1} β_j b_j b_j^T.

Plug X_i into

B_i = X_i^T X_i - d_i d_i^T / (e^T d_i),   where d_i = X_i^T X_i e,

and notice that b_j^T e = 0 for 1 ≤ j ≤ i - 1 (because the b_j and e are eigenvectors corresponding to different eigenvalues of B) to produce

d_i = d,   e^T d_i = 2m,   and   B_i = B - ∑_{j=1}^{i-1} β_j b_j b_j^T = B_{i-1} - β_{i-1} b_{i-1} b_{i-1}^T.

So by Brauer's theorem again, the eigenpairs of B_i are the ones of B_{i-1} with (β_{i-1}; b_{i-1}) replaced by an eigenpair with zero eigenvalue. So β_i is the largest eigenvalue of B_i and b_i is the eigenvector of B_i corresponding to β_i.

Theorem 3.3 says that when we build the new data matrix X_i from X, the matrices X_i^T X_i and B_i change. Also B_i is different from B, but the eigenpairs of B are retained by B_i except for the first i - 1 pairs. The conclusion is that the first modularity component has the largest modularity of the data X. Each succeeding modularity component has the largest modularity with the constraint that it is orthogonal to all previous modularity components.
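
Theorem 3.3 also suggests a simple computational shortcut: instead of rebuilding B_i from the deflated data X_i, one can subtract β_j b_j b_j^T directly. The sketch below (our reading of the theorem, with illustrative names) extracts the leading r modularity eigenpairs of B by repeated deflation.

import numpy as np

def leading_modularity_eigenpairs(X, r):
    """Extract (beta_1, b_1), ..., (beta_r, b_r) by deflating B as in Theorem 3.3."""
    n = X.shape[1]
    A = X.T @ X
    e = np.ones(n)
    d = A @ e
    B_i = A - np.outer(d, d) / (e @ d)
    betas, vectors = [], []
    for _ in range(r):
        w, V = np.linalg.eigh(B_i)
        beta, b = w[-1], V[:, -1]             # dominant eigenpair of the current B_i
        betas.append(beta)
        vectors.append(b)
        B_i = B_i - beta * np.outer(b, b)     # deflate; the remaining eigenpairs are retained
    return np.array(betas), np.column_stack(vectors)

Up to signs, the vectors returned this way agree with the leading eigenvectors of the original B, which is the retained-eigenpair property established in the proof of Theorem 3.3.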

4. Conclusion

In this paper, the concept of modularity components is defined, and some important properties of modularity components are proven. The concept of modularity components can be used to explain why using more than one eigenvector of the modularity matrix for data clustering is reasonable. The combination of modularity clustering and modularity components gives a modularity component analysis that has some nice properties similar to those of the well-known principal component analysis.

Appendix A. Proof of Lemma 2.1

The lemma is based on a theorem from [2] about the interlacing property of the eigenvalues of a diagonal matrix and those of its rank-one modification, and on how to calculate the eigenvectors of a diagonal-plus-rank-one (DPR1) matrix [13]. The theorem can also be found in [19].

Theorem A.1. Let C = D + ρ v v^T, where D is diagonal and ρ ≠ 0. Let λ_1 ≥ λ_2 ≥ ... ≥ λ_n be the eigenvalues of D, and let μ_1 ≥ μ_2 ≥ ... ≥ μ_n be the eigenvalues of C. Then λ_i ≥ μ_i ≥ λ_{i+1} if ρ < 0 (and μ_i ≥ λ_i ≥ μ_{i+1} if ρ > 0). If the λ_i are distinct and all the elements of v are nonzero, then the eigenvalues of C strictly separate those of D.

Corollary A.2. With the notation in Theorem A.1, the eigenvector of C corresponding to the eigenvalue μ_i is given by (D - μ_i I)^{-1} v.

Theorem A.1 tells us that the eigenvalues of a DPR1 matrix are interlaced with the eigenvalues of the original diagonal matrix. Next we will write the eigenvectors corresponding to the positive eigenvalues of a modularity matrix as linear combinations of the eigenvectors of the corresponding adjacency matrix.

With the notations in Section 1, since A = X^T X, if the SVD of X is X = U Σ V^T, then

A = X^T X = V Σ^T Σ V^T = V D V^T,

where D = Σ^T Σ is an n x n diagonal matrix. Suppose the rows and columns of A are ordered such that the diagonal entries of D appear in decreasing order, σ_1^2 > σ_2^2 > ... > σ_k^2 > σ_{k+1}^2 = ... = σ_n^2 = 0, where k = rank(X). Let d = A e and 2m = e^T d. Similarly, since B is symmetric, it is orthogonally similar to a diagonal matrix. Suppose the eigenvalues of B are β_1 ≥ β_2 ≥ ... ≥ β_n, with the largest k - 1 eigenvalues β_1, β_2, ..., β_{k-1}.

Proof. Since B = A - d d^T / (2m), we have

V^T B V = V^T A V - (V^T d)(V^T d)^T / (2m) = D - z z^T / (2m),

where D = Σ^T Σ and z = V^T d. Since V^T B V is also symmetric, it is orthogonally similar to a diagonal matrix. So we have

V^T B V = G Λ G^T,

where G is orthogonal and Λ is diagonal. Since V^T B V is a DPR1 matrix, ρ = -1/(2m) < 0 and the nonzero diagonal entries of D are distinct, the interlacing theorem applies to the eigenvalues of A and B. More specifically, we have

σ_1^2 > β_1 > σ_2^2 > β_2 > ... > σ_{k-1}^2 > β_{k-1} > σ_k^2 > 0.

The strict inequalities hold because of our assumptions. Let W = D - z z^T / (2m). Since V is orthogonal, we have B = V W V^T. Suppose (λ, u) is an eigenpair of W, then

B (V u) = V W u = λ (V u)

implies that (λ, u) is an eigenpair of W if and only if (λ, V u) is an eigenpair of B. By Corollary A.2, the eigenvector of W corresponding to β_i is given by

w_i = (D - β_i I)^{-1} z,

and hence the eigenvector of B corresponding to β_i is given by

b_i = V (D - β_i I)^{-1} z = ∑_{j=1}^{n} [ (v_j^T d) / (σ_j^2 - β_i) ] v_j.

Since d = A e, where e is a column vector with all ones, we have

v_j^T d = v_j^T A e = σ_j^2 v_j^T e.

Since rank(A) = k, we have v_j^T d = σ_j^2 v_j^T e = 0 for j > k. Therefore, the eigenvector of B corresponding to β_i is given by

b_i = ∑_{j=1}^{k} c_{ij} v_j,

where

c_{ij} = (v_j^T d) / (σ_j^2 - β_i) = (σ_j^2 v_j^T e) / (σ_j^2 - β_i).

Appendix B. Examples Satisfying the Assumptions in Lemma 2.1

We used two subsets of the popular MNIST data set from the literature, and the data set is described below. The Pen Digit data sets are subsets of the widely used MNIST database [10], [20], [7], [3], [16]. The original data contains a training set of 60,000 handwritten digits from 44 writers. The first subset used in the experiments contains some of the digits 1, 5 and 7 [1]. The second subset used contains some of the digits 1, 7 and 9. Each piece of data is a row vector converted from a grey-scale image. Each image is 28 pixels in height and 28 pixels in width, so there are 784 pixels in total. Each row vector contains the label of the digit and the lightness of each pixel. Lightness of a pixel is represented by a number from 0 to 255 inclusive, and smaller numbers represent lighter pixels.

The A = X^T X matrix of the 1-5-7 subset has 644 eigenvalues σ_i^2 that are positive, and the largest 643 eigenvalues β_i of the B matrix are different from both σ_i^2 and σ_{i+1}^2. The A matrix of the 1-7-9 subset has 623 eigenvalues σ_i^2 that are positive, and the largest 622 eigenvalues β_i of the B matrix are different from σ_i^2 and σ_{i+1}^2. Thus we conclude that these examples satisfy the assumptions in Lemma 2.1.
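
The check reported above is straightforward to reproduce. The sketch below (our code; loading the digit images into the p x n array X is left to the reader) counts the positive eigenvalues of A = X^T X and verifies that the leading eigenvalues of B avoid them, which is what Lemma 2.1 requires.

import numpy as np

def check_lemma_assumptions(X, tol=1e-8):
    """Return k and whether beta_i differs from sigma_i^2 and sigma_{i+1}^2 for i = 1, ..., k-1."""
    n = X.shape[1]
    A = X.T @ X
    e = np.ones(n)
    d = A @ e
    B = A - np.outer(d, d) / (e @ d)
    sig2 = np.sort(np.linalg.eigvalsh(A))[::-1]   # sigma_1^2 >= sigma_2^2 >= ...
    beta = np.sort(np.linalg.eigvalsh(B))[::-1]   # beta_1 >= beta_2 >= ...
    k = int((sig2 > tol).sum())                   # number of positive eigenvalues of A
    ok = all(abs(beta[i] - sig2[i]) > tol and abs(beta[i] - sig2[i + 1]) > tol
             for i in range(k - 1))
    return k, ok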


References

  1. V. D. BLONDEL, J.-L. GUILLAUME, R. LAMBIOTTE, AND E. LEFEBVRE, Fast unfolding of communities in large networks, Journal of statistical mechanics: theory and experiment, 2008 (2008), p. P10008.
  2. J. R. BUNCH, C. P. NIELSEN, AND D. C. SORENSEN, Rank-one modification of the symmetric eigenproblem, Numerische Mathematik, 31 (1978), pp. 31–48.
  3. R. CHITTA, R. JIN, AND A. K. JAIN, Efficient kernel clustering using random Fourier features, in Data Mining (ICDM), 2012 IEEE 12th International Conference on, IEEE, 2012, pp. 161–170.
  4. B. DASGUPTA AND D. DESAI, On the complexity of Newman's community finding approach for biological and social networks, Journal of Computer and System Sciences, 79 (2013), pp. 50–67.
  5. M. A. FORTUNA, D. B. STOUFFER, J. M. OLESEN, P. JORDANO, D. MOUILLOT, B. R. KRASNOV, R. POULIN, AND J. BASCOMPTE, Nestedness versus modularity in ecological networks: two sides of the same coin?, Journal of Animal Ecology, 79 (2010), pp. 811–817.
  6. B. H. GOOD, Y.-A. DE MONTJOYE, AND A. CLAUSET, Performance of modularity maximization in practical contexts, Physical Review E, 81 (2010), p. 046106.
  7. T. HERTZ, A. BAR-HILLEL, AND D. WEINSHALL, Boosting margin based distance functions for clustering, in Proceedings of the twenty-first international conference on Machine learning, ACM, 2004, p. 50.
  8. I. T. JOLLIFFE, Principal component analysis, Wiley Online Library, 2002.
  9. A. LANCICHINETTI AND S. FORTUNATO, Limits of modularity maximization in community detection, Physical Review E, 84 (2011), p. 066122.
  10. Y. LECUN, L. BOTTOU, Y. BENGIO, AND P. HAFFNER, Gradient-based learning applied to document recognition, Proceedings of the IEEE, 86 (1998), pp. 2278–2324.
  11. R. A. MERCOVICH, A. HARKIN, AND D. MESSINGER, Automatic clustering of multispectral imagery by maximization of the graph modularity, in SPIE Defense, Security, and Sensing, International Society for Optics and Photonics, 2011, pp. 80480Z–80480Z.
  12. D. MEUNIER, R. LAMBIOTTE, A. FORNITO, K. D. ERSCHE, AND E. T. BULLMORE, Hierarchical modularity in human brain functional networks, Hierarchy and dynamics in neural networks, 1 (2010), p. 2.
  13. C. D. MEYER, Matrix analysis and applied linear algebra, Siam, 2000.
  14. M. E. NEWMAN, Modularity and community structure in networks, Proceedings of the National Academy of Sciences, 103 (2006), pp. 8577–8582.
  15. M. E. NEWMAN AND M. GIRVAN, Finding and evaluating community structure in networks, Physical review E, 69 (2004), p. 026113.
  16. S. L. RACE, C. MEYER, AND K. VALAKUZHY, Determining the number of clusters via iterative consensus clustering, in Proceedings of the SIAM Conference on Data Mining (SDM), SIAM, 2013, pp. 94–102.
  17. R. ROTTA AND A. NOACK, Multilevel local search algorithms for modularity clustering, Journal of Experimental Algorithmics (JEA), 16 (2011), pp. 2–3.
  18. U. VON LUXBURG, A tutorial on spectral clustering, Statistics and computing, 17 (2007), pp. 395–416.
  19. J. H. WILKINSON, The algebraic eigenvalue problem, vol. 87, Clarendon Press Oxford, 1965.
  20. R. ZHANG AND A. I. RUDNICKY, A large scale clustering scheme for kernel k-means, in Pattern Recognition, 2002. Proceedings. 16th International Conference on, vol. 4, IEEE, 2002, pp. 289–292.

Footnotes

[1] The data can be downloaded at http://www.kaggle.com/c/digitrecognizer/data
