Spectral methods for analyzing and visualizing networks: an introduction
Andrew J. Seary and William D. Richards
School of Communication
Simon Fraser University
Burnaby BC Canada V5A 1S6
seary@sfu.ca richards@sfu.ca
Abstract
Network analysis begins with data that describes the set of relationships among the members of a system. The goal of analysis is to obtain from the low-level relational data a higher-level description of the structure of the system which identifies various kinds of patterns in the set of relationships. These patterns will be based on the way individuals are related to other individuals in the network. Some approaches to network analysis look for clusters of individuals who are tightly connected to one another; some look for sets of individuals who have similar patterns of relations to the rest of the network. Other methods don't "look for" anything in particular; instead, they construct a continuous multidimensional representation of the network in which the coordinates of the individuals can be further analyzed to obtain a variety of kinds of information about them and their relation to the rest of the network.
One approach to this is to choose a set of axes in the multidimensional space occupied by the network and rotate them so that the first axis points in the direction of the greatest variability in the data; the second axis, orthogonal to the first, points in the direction of greatest remaining variability, and so on. This set of axes is a coordinate system that can be used to describe the relative positions of the set of points in the data. Most of the variability in the locations of points will be accounted for by the first few dimensions of this coordinate system. The coordinates of the points along each axis will be an eigenvector, and the length of the projection will be an eigenvalue. The set of all eigenvalues is the spectrum of the network.
Spectral methods (eigendecomposition) have been a part of graph theory for over a century. Network researchers have used spectral methods either implicitly or explicitly since the late 1960's, when computers became generally accessible in most universities. The eigenvalues of a network are intimately connected to important topological features such as maximum distance across the network (diameter), presence of cohesive clusters, long paths and bottlenecks, and how random the network is. The associated eigenvectors can be used as a natural coordinate system for graph visualization; they also provide methods for discovering clusters and other local features. When combined with other, easily obtained network statistics (e.g., node degree), they can be used to describe a variety of network properties, such as degree of robustness (i.e., tolerance to removal of selected nodes or links), and other structural properties, and the relationship of these properties to node or link attributes in large, complex, multivariate networks.
We introduce three types of spectral analysis for graphs and describe some of their mathematical properties. We discuss the strengths and weaknesses of each type and show how they can be used to understand network structure. These discussions are accompanied by interactive graphical displays of small (n=50) and moderately large (n=5000) networks. Throughout, we give special attention to sparse matrix methods which allow rapid, efficient storage and analysis of large networks. We briefly describe algorithms and analytic strategies that allow spectral analysis and identification of clusters in very large networks (n>1,000,000).
DYNAMIC SOCIAL NETWORK MODELING AND ANALYSIS
209
Introduction
A standard method in statistics for handling multivariate data is to find the directions of maximum variability, usually of variance-covariance or correlation matrices. These directions are called Principal Coordinates or eigenvectors, while the relative importance of each direction is represented by numbers called eigenvalues (Jolliffe, 1986). Finding this coordinate system may be accomplished by a series of rotations (although this is not the most efficient method) that end up pointing along the direction of maximum variability, with the second largest maximum variability at right angles, and so on. As a result, the data matrix is reduced to a diagonal matrix, with diagonal entries corresponding to the importance (eigenvalue) of each direction (eigenvector). The collection of all eigenvalues is called the spectrum. One goal is to reduce the problem so that only the most important dimensions (those with the largest eigenvalues) contain most of the variability. Implicit in these methods (variance-covariance or correlation) is that some kind of "expected" or "background" signal has been subtracted: in the case of variances, these would be the means of each variable in the original data matrix. To find these eigenvectors and eigenvalues we need to solve the eigenvalue equation:
Ee = εe
(we will derive this equation below) which states that along the direction represented by vector e, multiplication by data matrix E does not change the direction, but only the length (where ε may be any number, including 0).1 The related pair (ε, e) is called an eigenpair of matrix E.
A network or graph G(V,E) is a set of nodes V (points, vertices) connected by a set of links E (lines, edges). We will consider networks that are binary (edges have logical value 1 if an edge exists, 0 if not), symmetric (an edge from node i to j implies an edge from node j to i), connected (there is a set of edges connecting any two nodes, consequently only one component), and without self-loops (no edges between i and i). We may represent such a network as the adjacency matrix A = A(G) with:
1 in row i, column j if i is connected to j,
0 otherwise.
We will not directly discuss weighted networks, where the entries for an edge may be a number other than 1, although most of the results that follow generalize to such networks. For many "real world" networks, A consists mostly of 0's: it is sparse. We will discuss efficient ways of storing and manipulating A using sparse methods.
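As a sketch of this kind of sparse storage, the snippet below keeps only the edge list of an invented five-node network (the edges and all names are ours, for illustration), using SciPy's CSR format as a stand-in for a link-list representation:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical 5-node network, stored as a list of i,j pairs: only the
# nonzero entries of A are kept, not the full m x m array.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 3)]
m = 5
# Symmetric network: store each edge in both directions
rows = [i for i, j in edges] + [j for i, j in edges]
cols = [j for i, j in edges] + [i for i, j in edges]
A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(m, m))

# Row sums of A give the node degrees (the diagonal of D)
degrees = np.asarray(A.sum(axis=1)).ravel()
```

The storage cost is 2|E| entries rather than m^2, which is what makes the methods below practical for large networks.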
Associated with A is the degree distribution D, a diagonal matrix with the row-sums of A along the diagonal, and 0's elsewhere. D describes how many connections each node has. We call the number of nodes, m, the order of G; it is equal to the number of rows or columns of A. We represent the number of edges by |E|. We will also introduce two other matrices related to A:
· the Laplacian: L = D - A
· the Normal: N = D^-1 A
and will discuss the properties of the spectrum and associated eigenvectors of A, L, and N.
1 We have introduced some notation which will be followed throughout:
· matrices are represented by bold capitals: D
· (column) vectors are represented by bold lower case: e
· inner products of vectors are represented as e^T e = n (a scalar), where e^T is the transpose of e. Outer products of vectors are represented as e e^T = M (a matrix)
· eigenvalues are represented by Greek letters, usually with some relationship to the latin letters representing a matrix and an eigenvector. E.g., (α1, a1) is an eigenpair of adjacency matrix A.
Distances and diameter:
One important property of a network is the set of distances between any pair of nodes i and j; that is, the least number of links between any pair i and j. One way of calculating this is to take powers of the matrix A as follows:
1st power: A = A^1 by definition gives a matrix of all pairs of nodes linked to each other.
2nd power: A^2 = AA has a nonzero in row i, column j if j is two steps away from i. Since i is 2 steps away from itself, the diagonal i,i entry counts the number of these 2-steps.
3rd power: A^3 = AAA has a nonzero entry in row i, column j if j is 3 steps away from i.
Eventually, some power of A, say A^N, will consist of entirely nonzero entries, meaning every node has been reached from every other node. We call N the diameter of the graph: the longest of the shortest paths between any pair of nodes.
This is a very inefficient way of calculating the diameter of a graph for two reasons:
1) calculating each power of A requires m^3 calculations
2) as more nodes are reached, the powers of A become less sparse until eventually no 0's remain: the amount of storage required approaches m^2.
If we continue taking powers of A, an interesting thing happens: all the columns become multiples of each other. Taking higher powers of A corresponds to taking longer "walks" along the edges, and we can interpret this as a "loss of memory" about where we started from (Lovasz, 1995). We will see why this happens soon, as well as other examples of this phenomenon.
We can approach this problem another way by the properties of the spectral decomposition of A (Parlett, 1980). Let αi be the eigenvalues of A and ai the corresponding eigenvectors, with α0 ≥ α1 ≥ α2 ≥ ... ≥ αm-1 and ||ai|| = 1 (the eigenvectors are normalized to length 1). Then the spectral decomposition of A is:
(1) A = Σi αi ai ai^T
where each ai ai^T is an m×m matrix defining a 1-dimensional subspace, with (ai ai^T)^N = ai ai^T and ai^T aj = 0 if i ≠ j. Therefore A^N = Σi (αi)^N ai ai^T for any power N, and this allows an easy way of calculating powers of A, assuming we have already calculated all the eigenpairs (αi, ai).
Another important property of the spectral decomposition is the approximation property. If we take the first k of the eigenpairs (αi, ai), then Ak = Σ_{i<k} αi ai ai^T is the best least-squares approximation to A, meaning that we have captured most of the variability of A in the important eigenpairs.
For example, we can estimate an upper bound for the diameter using the second-largest eigenvalue α1 (Chung, 1989):
Diam(G) ≤ ⌈ ln(m-1) / ln(k/α1) ⌉
Unfortunately, this bound applies only to k-regular networks (all degrees = k = α0). We will get better bounds for general networks using different spectra. Nevertheless, this bound does show one relationship between the spectrum and an important property like diameter. In particular, when k/α1 is large (there is a large gap between the first two eigenvalues), the upper bound on diameter is small, so all distances are short.
The Power Method and Sparse methods:
Using (1) and eigenpair (α0, a0) we can see why taking large enough powers of A results in columns that are multiples of one another; in fact, multiples of eigenvector a0. This is the basis of the Power method (Hotelling, 1933) for finding eigenpairs. We have mentioned that taking powers of matrix A is not efficient, so we introduce a representation and methods that are far more efficient.
A very simple way of storing and manipulating a sparse matrix A is to use a link list representation, which stores only the nonzero entries of A as a list of pairs i,j for each link in A. We could then calculate the diameter of A by starting at i=1 and following each link until we have reached every node, repeating for i=2 and so on, and saving the maximum number of steps. This requires about m|E| operations (Aho, et al., 1987) and a very moderate amount of storage equal to 2|E|. We can now use (1) to devise
a very efficient version of the Power Method for finding the largest eigenpair:
Starting with some random vector p normalized to length 1:
Repeat: p' ← Ap, q ← p, p ← p', until p is no longer changing in direction.
Then the largest eigenpair of A is (p/q, p). There are some bookkeeping details: Ap uses the link list representation, and the entries of p' must be adjusted in size after each multiplication (for details see Richards & Seary, 2000), but the method will always work for any matrix without repeated eigenvalues, which is generally the case for social networks.
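A minimal sketch of this iteration, with the sparse multiply written directly over the edge list (the triangle graph and the helper name power_method are our own, for illustration):

```python
import numpy as np

def power_method(edges, m, iters=500):
    """Largest eigenpair of a symmetric adjacency matrix given as an edge list."""
    rng = np.random.default_rng(0)
    p = rng.standard_normal(m)
    p /= np.linalg.norm(p)
    alpha = 0.0
    for _ in range(iters):
        q = np.zeros(m)
        for i, j in edges:         # sparse multiply q = Ap: 2|E| operations
            q[i] += p[j]
            q[j] += p[i]
        # For a connected graph the dominant adjacency eigenvalue is positive,
        # so the size adjustment ||Ap|| converges to it.
        alpha = np.linalg.norm(q)
        p = q / alpha              # renormalize so the entries stay bounded
    return alpha, p

# Triangle: a 2-regular graph, so the largest eigenvalue is the degree k = 2
edges = [(0, 1), (1, 2), (2, 0)]
alpha, a0 = power_method(edges, 3)
```

Each iteration costs only 2|E| additions, which is what makes the method attractive for large sparse networks.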
If we want more eigenpairs, we can iterate with
p' ← Ap - α0 a0 (a0^T p)
to get the second, and with
p' ← Ap - α0 a0 (a0^T p) - α1 a1 (a1^T p)
to get the third, and so on, without destroying sparsity. However, we must store the (αi, ai) eigenpairs somewhere; the procedure is subject to loss of precision on a computer; and the iterations may converge slowly if the ratio of successive eigenvalues (e.g., α1/α0) is close to 1. There are better methods, such as Lanczos iteration (Parlett et al., 1982) which converge very rapidly and do not have problems with loss of precision.
Some network invariants:
Some properties of A remain unchanged (invariant) under the series of orthogonal rotations that diagonalize A (eigendecomposition). We will relate these to some network invariants of A.
The eigenvalues of any symmetric matrix M are the roots of the characteristic polynomial:
x^m + c1 x^(m-1) + c2 x^(m-2) + c3 x^(m-3) + ... + cm
Up to sign, the coefficients are symmetric functions of the eigenvalues:
c1 = -(α0 + α1 + ... + αm-1) (sum over all eigenvalues);
c2 = α0 α1 + α0 α2 + ... + αm-2 αm-1 (sum over all pairs);
c3 = -(α0 α1 α2 + α0 α1 α3 + ... + αm-3 αm-2 αm-1) (sum over all triples).
The trace of a matrix is the sum of the entries on the diagonal, and this is invariant under orthogonal rotations. Since A has trace 0 (no self-loops), c1 = 0. The sum of products over pairs is equal to minus the number of edges, so c2 = -|E|. Most important is c3: -c3 is twice the number of triangles in G. Higher coefficients are related to cycles of length 4, 5, ..., although they also contain contributions from shorter cycles (Biggs, 1993). It appears that the eigenvalues of A encode information about the cycles of a network as well as its diameter. We will see related results for the other two spectra.
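These identities are easy to check numerically. Equivalently to the coefficients, the power sums of the eigenvalues give trace(A) = 0, trace(A^2) = 2|E|, and trace(A^3) = 6 × (number of triangles), since each triangle contributes six closed 3-walks. A sketch on an invented graph:

```python
import numpy as np

# Invented 4-node graph: two triangles (0-1-2 and 1-2-3) sharing the edge 1-2
A = np.zeros((4, 4))
for i, j in [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]:
    A[i, j] = A[j, i] = 1

alphas = np.linalg.eigvalsh(A)   # adjacency spectrum
num_edges = 5
num_triangles = 2
# trace(A)   = sum of eigenvalues         -> 0 (no self-loops)
# trace(A^2) = sum of squared eigenvalues -> 2|E|
# trace(A^3) = sum of cubed eigenvalues   -> 6 x number of triangles
```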
A bipartite network is one that can be partitioned so that the nodes in one part have connections only to nodes in the other part, and vice-versa. Such a network cannot have odd cycles (of any length) and hence no triangles. This means all the odd coefficients c2k+1 must be 0. It can also be shown (Biggs, 1993) that, in bipartite networks, the eigenvalues occur in pairs with opposite signs, so that α0 = -αm-1, and so on. Bipartite networks can be used to represent two-mode networks (Wasserman & Faust, 1994), for example the network relating people and the events they attend.
These results scratch the surface of the information contained in the spectrum of A for k-regular graphs. For general graphs, we need to turn to other spectra.2
The Laplacian spectrum:
The Laplacian of a network was originally discovered by Kirchhoff (1847). There are a number of definitions and derivations, perhaps the most revealing due to Hall (1970), who was interested in situating the nodes of any network so that total edge lengths are minimized.
He considers the problem of finding the minimum of the weighted sum
(2) z = 1/2 Σij (xi - xj)^2 aij
where aij are the elements of the adjacency matrix A. The sum is over all pairs of squared distances between nodes which are connected, and so the solution should result in nodes with large numbers of interconnections being clustered together.
Equation (2) can be rewritten as:
z = 1/2 Σij (xi^2 - 2 xi xj + xj^2) aij
= 1/2 Σij xi^2 aij - Σij xi xj aij + 1/2 Σij xj^2 aij
= Σi xi^2 di - Σij xi xj aij = x^T D x - x^T A x = x^T L x
where L = D - A is the Laplacian and di is the degree of node i. In addition to this, Hall supplies the condition that x^T x = 1, i.e., the distances are normalized. Using Lagrange multipliers (a standard method for solving problems with constraints), we have:
Z = x^T L x - λ x^T x
and to minimize this expression, we take derivatives with respect to x to give:
(3) Lx - λx = 0, or Lx = λx
which is the eigenvalue equation. It is not hard to show that λ0 = 0 with l0 = 1, the constant (or trivial) eigenvector, and that 0 = λ0 ≤ λ1 ≤ ... ≤ λm-1. For L, the most "important" eigenvectors belong to the smallest eigenvalues (Pothen, et al., 1990).
It turns out that the discrete network Laplacian shares many important properties with the well-known continuous Laplacian operator ∇^2 of mathematical physics. This has led to an explosion of research and results, mostly concerned with λ1 (Mohar, 1991).
The definition of L shows that there is no loss of sparsity (except for the diagonal) and that the sparse methods mentioned earlier can be applied to find all or some of the eigenpairs. The requirement that we must find the smallest eigenpairs is easily overcome by subtracting a suitably large constant from the diagonal of L (which subtracts that constant from the eigenvalues without changing the eigenvectors). This guarantees that the first eigenpairs returned by the Power Method or Lanczos iteration are associated with the smallest eigenvalues of L.
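The shift trick can be sketched as follows (toy path graph invented for illustration; the choice of shift constant c is an assumption, and anything larger than the largest eigenvalue of L is safe):

```python
import numpy as np

# Path graph on 4 nodes, and its Laplacian L = D - A
edges = [(0, 1), (1, 2), (2, 3)]
m = 4
A = np.zeros((m, m))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

c = 2 * m                       # assumed shift, larger than any eigenvalue of L
shifted = L - c * np.eye(m)     # subtract c from the diagonal only
lam_shifted = np.linalg.eigvalsh(shifted)
lam = np.linalg.eigvalsh(L)
# Every eigenvalue moves by exactly -c (eigenvectors unchanged), so the
# dominant eigenvalue of L - cI, which the Power Method finds first,
# corresponds to the *smallest* eigenvalue of L (0 for a connected graph).
```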
2 Similar results may be obtained from the moments of the eigenvalue distribution (Farkas, et al., 2001; Goh, et al., 2001).
Some of the coefficients of the characteristic polynomial of L have an easy interpretation:
c1 = -Trace(L) = -2|E| (i.e., twice the number of edges, up to sign)
cm = 0 (since 0 is an eigenvalue)
(1/m) |cm-1| = (1/m) λ1 λ2 ... λm-1 = the number of spanning trees of G (this is the Matrix-tree theorem of Kirchhoff, 1847).
In general, the eigenvalues of L encode information about the tree-structure of G (Cvetkovic, et al., 1995). The spectrum of L contains a 0 for every connected component. There is no such direct way to find the number of components of a network from the spectrum of A. There is also a bound on diameter related to λm-1 and λ1 for general graphs from Chung, et al. (1994):
Diam(G) ≤ ⌈ cosh^-1(m-1) / cosh^-1( (λm-1 + λ1) / (λm-1 - λ1) ) ⌉
Intuitively, if λ1 is close to 0, the graph is almost disconnected, while if λ1 >> λ0 = 0 (an eigenvalue gap) the diameter is small.
The Normal spectrum:
We can repeat the same argument as Hall to derive the Normal spectrum, with the normalization constraint that x^T D x = 1 (Seary & Richards, 1995) to give:
Lx = μDx, or, assuming that D can be inverted,
D^-1 Lx = D^-1 (D - A)x = (I - D^-1 A)x = μx
where I is an identity matrix of the proper size. In fact, we usually take the defining equation to be
(4) D^-1 Ax = Nx = νx, with D^-1 A = N and ν = 1 - μ,
since adding an identity matrix shifts the eigenvalues by 1 without changing the eigenvectors. Note that for connected networks D not only has an inverse, it also has an inverse square root D^-1/2.
The Normal matrix N has a number of interesting properties:
1) It is a generalized Laplacian (with a different definition of orthonormality)
2) It therefore has a trivial eigenvector n0 with eigenvalue ν0 = 1
3) The spectrum of N is bounded by 1 = ν0 ≥ ν1 ≥ ... ≥ νm-1 ≥ -1
4) The rows of N sum to 1 (it is a stochastic matrix)
5) The spectrum of N contains a 1 for every connected component
6) The eigenvalue -1 only occurs if G is bipartite, in which case all eigenvalues occur in ± pairs
7) N has been rediscovered a number of times: generalized or combinatorial Laplacian (Dodziuk & Kendall, 1985; Chung, 1995); Q-spectrum (Cvetkovic, et al., 1995).
The descriptive name Normal is suggested by points 2) through 5), although it is not standard terminology.
It is easy to see that there is no loss of sparsity in the definition of N. Each 1 in row i is simply replaced by 1/di, and the 0's are unchanged, but N is no longer symmetric. However, the matrix D^-1/2 A D^-1/2 is similar to N (has the same eigenvalues) and we can apply the sparse methods described above to solve for the eigenpairs (νi, ei) and then calculate D^-1/2 ei = ni to get the corresponding eigenvectors without losing precision or sparsity.
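A sketch of this similarity route to the Normal eigenpairs (the toy graph is invented; dense matrices stand in for the sparse solver):

```python
import numpy as np

# Invented connected 4-node graph
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
m = 4
A = np.zeros((m, m))
for i, j in edges:
    A[i, j] = A[j, i] = 1
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))

# Symmetric matrix similar to N: same eigenvalues, so eigh applies
S = D_inv_sqrt @ A @ D_inv_sqrt
nus, E = np.linalg.eigh(S)              # ascending, so nus[-1] = nu_0
Nmat = np.diag(1.0 / d) @ A             # N = D^-1 A itself (not symmetric)

# Back-transform an eigenvector of S into an eigenvector of N
n0 = D_inv_sqrt @ E[:, -1]
```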
For N, the coefficients of the characteristic equation are harder to interpret except in special cases, but the eigenvalues encode information about both the cycle and tree structure of G
(Cvetkovic, et al., 1995). Some examples:
c1 = Trace(N) = 0
c3 = c2k+1 = 0 (no triangles or other odd cycles) if G is bipartite
(Πi deg(i)) · Π_{i≥1} (1 - νi) / (Σv deg(v)) = number of spanning trees of G
In the last example we see how details of the degree distribution are also encoded in the spectrum. Fan Chung uses this to derive two remarkable bounds (see Chung, 1995 for details):
Diam(G) ≤ max over subsets X, Y ⌈ log sqrt( vol(V-X) vol(V-Y) / (vol X · vol Y) ) / log(1/ν1) ⌉

min_{i≠j} dist(Xi, Xj) ≤ max_{i≠j} ⌈ log sqrt( vol(V-Xi) vol(V-Xj) / (vol Xi · vol Xj) ) / log(1/νk) ⌉  (for k+1 subsets X0, ..., Xk)
where vol X is the sum of the degrees of the nodes in a subset X ⊂ V (the total number of edge-endpoints in X). Chung's first bound applies to any graph (regular or not) and is much tighter than the previous bound (for the Laplacian). Intuitively, if ν1 is close to 1, the network has a long path or is almost disconnected, and if ν1 << 1, the diameter is small.
Chung's second bound describes the distance between subsets for any number k of subsets, based on the kth eigenvalue. The result suggests that we can use the eigenvalues to estimate how many subsets we should look for in a network without forcing distances that are too short (and hence too many subsets).
Interpreting the Spectra:
Many important properties of the spectrum of A(G) where G is k-regular are true for L(G) and N(G), even when G is not regular. Another way of looking at this is that these properties of A are true because the spectrum of A is simply related to those of L and N for regular graphs: αi = k - λi = k·νi for k-regular graphs, with the corresponding eigenvectors being identical. In other words, both L and N are more natural functions of graphs. This point of view is shared by the authors of recent papers on the Laplacian (Grone, et al., 1990, 1994). Mohar (1991) presents a collection of important results relating to the spectrum and eigenvectors of L. Chung (1995) has written several papers and a book about N.
We return to the goal expressed in the opening paragraph. We would like to find the most important global features of a network, after accounting for what could be considered "expected" for a random network with the same number of nodes and edges. The biggest problem with interpreting the spectrum of A is the lack of an "expected" eigenvector (again, except for k-regular graphs). There is a lot of literature on the so-called "main eigenvectors" of A: those which have a projection on the "all-ones" vector (e.g., Harary & Schwenk, 1979), but the results remain hard to interpret (Cvetkovic and Rowlinson, 1990). Both L and N have an "expected" all-ones eigenvector for which the interpretation is clear (though different in each case).
To interpret L, we turn to physical analogy and the relation to ∇^2 as discussed by Friedman (1993). He considers a graph G as a discrete manifold (surface) subject to "free" boundary conditions.3 For illustration, consider ∇^2 as the spatial part of the wave equation (Fisher, 1966;
3 No external constraints need to be satisfied.
Chavel, 1984). Think of a fishing net subject to no forces. It just lies there at zero energy with nothing happening. As we subject it to regular oscillations, the net vibrates with the most highly-connected regions moving together. Friedman shows how the Courant-Hilbert Nodal theorem (Courant & Hilbert, 1965) can be applied to a discrete network, which generalizes Fiedler's result (described below): the kth eigenvector divides the network into no more than k+1 disconnected components.4
To interpret N, we have a number of choices:
1) N is the Laplacian for a network of nodes, each weighted by its degree
2) N is the transition matrix of a Markov Chain for a simple random walk on the nodes
3) N is similar to the χ2 matrix, thus treating A as a contingency table
The first leads to a physical analogy similar to that for L, so we consider 2) and 3):
The Normal spectrum and Random walks:
Specifically, we consider nearest-neighbour random walks on a network (Lawler & Sokal, 1988). Define the probability-transition matrix for such a walk as
N = D^-1 A
Then the probability of moving from vertex i to any vertex adjacent to i is uniform. N is a row-stochastic matrix, and the random walk is a Markov chain. In this case 1 (the trivial all-ones eigenvector) is related to the stationary state of the Markov Chain: the probability is 1 (= ν0) that such a probability distribution is eventually reached.5 The vector p0 = N^T p0 is the stationary state, and it is proportional to the degree distribution. The second eigenpair (ν1, n1) has become important in the analysis of rapidly mixing Markov chains, those that reach the stationary state quickly (Sinclair, 1995). From the previous discussion it should not be surprising that these are associated with ν1 << 1 (a large eigenvalue gap), which means that the walk quickly "forgets" where it started.6
Moreover, when ν1 is close to 1, there must be parts of the network that are hard to reach in a random walk, implying long paths or a nearly disconnected network.
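The stationary-state claim is easy to check by iterating the walk. The toy graph below (a triangle with a pendant node, invented for illustration) is deliberately non-bipartite, so the chain is aperiodic and actually converges:

```python
import numpy as np

# Triangle 0-1-2 with pendant node 3 attached to node 2
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
m = 4
A = np.zeros((m, m))
for i, j in edges:
    A[i, j] = A[j, i] = 1
d = A.sum(axis=1)
N = A / d[:, None]                 # row-stochastic transition matrix D^-1 A

p = np.full(m, 1.0 / m)            # start from the uniform distribution
for _ in range(1000):
    p = N.T @ p                    # one step of the walk, acting on distributions

stationary = d / d.sum()           # degree distribution, normalized by 2|E|
```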
Normal spectrum and χ2:
The χ2 matrix is defined in terms of the row and column marginals (sums). A typical element is (Observed_ij - Expected_ij)^2 / Expected_ij, which is not sparse. For a sparse network A, consider X, which has a typical element (Observed_ij - Expected_ij) / sqrt(Expected_ij), where
Expected_ij = deg(i) deg(j) / Σv deg(v)
4 This interpretation of the eigenvectors may be even more useful when considering ∇^2 as the spatial part of the Diffusion equation (for example, when considering diffusion of innovation or disease).
5 A problem can arise with bipartite graphs: p0 does not exist since the chain oscillates between the two sets of vertices (period = 2). Probabilists deal with this by a simple trick: divide N by 2 and add a self-loop of probability 1/2 to every vertex: N' = I/2 + N/2. The eigenvalues of N' are then ν'i = (1 + νi)/2, so 1 = ν'0 > ν'1 ≥ ... ≥ 0.
We can write X as D^-1/2 A D^-1/2 minus the expected term, so that nonzero elements of A become Aij / sqrt(Expected_ij) while the 0 terms are unaffected, maintaining sparsity. The expected term corresponds to the trivial eigenvector, which can be dealt with separately. In matrix notation, X = D^-1/2 A D^-1/2, which has eigenpairs (νi, D^1/2 ni). Thus we have
(5) χ2 = 2|E| Σ_{i≥1} νi^2 (omitting the ν0 = 1 expected term for n0 = 1)
This equation shows how much each dimension contributes to χ2, which is a measure of dependence between rows and columns. In this interpretation, if ν1 is small (ν1 << ν0 = 1), then χ2 is also small: there is no relation between rows and columns of A, and so there is no "signal" above the expected "background". If ν1 is close to 1, then χ2 will be large and there is a relation between rows and columns of A, with the first eigenvector pointing in the direction of the maximum variability in χ2. If ν2, ν3, ..., νk are also large, we need k+1 eigenvectors to describe the patterns in the χ2 matrix. With (5) we can tell how many eigenvectors we need to explain most of the χ2 of the network.7
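The decomposition in (5) can be checked numerically by computing χ2 both ways on a toy graph (two triangles joined by a bridge, invented for illustration):

```python
import numpy as np

# Two triangles, 0-1-2 and 3-4-5, joined by the bridge edge 2-3
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
m = 6
A = np.zeros((m, m))
for i, j in edges:
    A[i, j] = A[j, i] = 1
d = A.sum(axis=1)
vol = d.sum()                                   # = 2|E|

# Direct chi-squared from observed vs expected cell counts
expected = np.outer(d, d) / vol
chi2_direct = ((A - expected) ** 2 / expected).sum()

# Spectral version: 2|E| times the sum of squared non-trivial eigenvalues
S = np.diag(d ** -0.5) @ A @ np.diag(d ** -0.5)
nus = np.sort(np.linalg.eigvalsh(S))[::-1]      # nus[0] = 1 is the trivial term
chi2_spectral = vol * (nus[1:] ** 2).sum()
```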
Compositions
The Kronecker product of two binary matrices A1 and A2 makes a copy of A1 for each 1 in A2. It is well-known that for two matrices A1 and A2 of order m1 and m2, the eigenpairs of the Kronecker product A1 ⊗ A2 behave well (West, 1996):
If A1 has eigenpairs (αi, ai) and A2 has eigenpairs (βj, bj), then
A1 ⊗ A2 has eigenpairs ({αi × βj}, {ai ⊗ bj}).
It is also well-known that A1 and A2 behave well under the Cartesian sum A1 ⊗ I2 + I1 ⊗ A2, which has eigenpairs ({αi + βj}, {ai ⊗ bj}).
It appears that the behaviour under Kronecker product explains why both the Adjacency and Normal eigenvectors are good at detecting both on- and off-diagonal blocks (clusters of edges).
Visualization
The Laplacian L can provide good visual representations of graphs which are Cartesian products (such as grids and hypercubes), while N can provide good visual representations of graphs which are Kronecker products (such as graphs consisting of blocks). The reasons for this are suggested above and have mostly to do with the behaviour of eigenpairs which are sums and products with 0 and 1, respectively. For graphs that are not k-regular, eigenpairs of A do not provide such good representations since, in general, there is no constant (expected, trivial) eigenvector to combine with.
Another way of describing these results is to consider the relationship between the eigenvector components for a node and those it is connected to (Seary & Richards, 1999). It is evident from the definition of eigendecomposition that (where "u~v" means "u is connected to v")
(6) ai(u) = Σ_{u~v} ai(v) / αi, for eigenpair i of A
(7) li(u) = Σ_{u~v} li(v) / (deg(u) - λi), for eigenpair i of L
(8) ni(u) = Σ_{u~v} ni(v) / (νi × deg(u)), for eigenpair i of N
Note that A has no control for node degree. Consider the effect for "important" eigenpairs (α0 ≈ k >> 1, λ ≈ 0 and ν ≈ 1): when deg(u) is small, a(u) will be pulled toward the origin, while l(u) and n(u) will sit further away from the origin than their neighbours. This effect makes it difficult to interpret visual representations based on A, except for k-regular graphs where all three spectra are essentially the same (Fig. 1).
The equation for ni shows that for νi near 1, each node is approximately at the centroid of those it is connected to. The exact difference from the centroid for node u of eigenvector ni is:
ni(u) - Σ_{u~v} ni(v) / deg(u) = (1 - νi) ni(u)
For important eigenvalues νi near 1, this produces very good visualization properties. In addition, the eigenvector representation may be combined with derived properties such as betweenness (Freeman, 1979) to produce very helpful displays of large networks (Brandes et al., 2001).
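The centroid property follows in one line from (8), and can be checked numerically (toy graph invented for illustration):

```python
import numpy as np

# Square 0-1-2-3 with the extra chord 0-2
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
m = 4
A = np.zeros((m, m))
for i, j in edges:
    A[i, j] = A[j, i] = 1
d = A.sum(axis=1)

# Eigenpairs of N via the symmetric similar matrix D^-1/2 A D^-1/2
S = np.diag(d ** -0.5) @ A @ np.diag(d ** -0.5)
nus, E = np.linalg.eigh(S)
ni = np.diag(d ** -0.5) @ E[:, -2]   # second eigenpair (nu_1, n_1)
nu = nus[-2]

# Centroid of each node's neighbours, and its exact gap (1 - nu) * ni(u)
neighbour_means = (A @ ni) / d
```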
Interpreting the eigenvectors
1. Partitions
Powers (1988) and others have shown how eigenvectors of A can be used to find partitions of highly connected subsets (clusters) of nodes, but these methods are not as general or as clear as those derived from L or N.
The first non-trivial eigenvector l1 of L is the subject of an extensive literature (Lubotzky, 1994; Alon & Millman, 1985). Fiedler (1975) first suggested that the eigenvector l1 associated with the second-smallest eigenvalue λ1 could be used to solve the min-cut problem: separate the network into two approximately equal sets of nodes with the fewest number of connections between them, based on the signs of the components of l1.8 In fact, more recent derivations of L use the min-cut property as a starting point (Walshaw, et al., 1995) and the results are used to partition compute-intensive problems into subprocesses with minimal interprocess communication (Pothen, et al., 1990). This technique is called Recursive Spectral Bisection (Simon, 1991). Other researchers have used l2, l3
8 Hagen also uses deviations from the median.
and higher eigenvectors to produce multiway partitions of networks (Hendrickson & Leland, 1995).
The graph bisection problem (Mohar & Poljak, 1991) is to find two nearly equal-sized subsets V1, V2 ⊂ V such that cut(V1, V2) = Σij aij is minimized, where i ∈ V1, j ∈ V2 (i.e., nodes in V1 and V2 have few connections to each other).
This problem is known to be NP-hard (Garey & Johnson, 1979), but a good approximation is given by the signs of l1 (Walshaw, et al., 1995). This gives two sets of nodes of roughly the same size, but has no control for the number of edges in each part, and so any clustering of nodes is a side effect of the partition.
However, we can add an additional constraint that the number of edges in each part also be roughly equal by weighting the node sets by their total degrees. This is exactly what a partition based on n1 from N gives us, since n1 points in the direction of maximum variability in χ2 (Greenacre, 1984).9 Similarly, further partitions based on n2, n3, ... will also produce sets of nodes with a large number of edges in common (as long as ν2, ν3, ... make significant contributions to χ2). Partitions based on positive eigenvalues will produce blocks on the diagonal of A of edges associated with each set of nodes, while those based on negative eigenvalues produce nearly bipartite off-diagonal blocks (which occur in pairs if the network is symmetric) (Seary & Richards, 1995).
2. Clustering
Ideally, the important eigenvectors should be at least bimodal to induce clustering based on sign
partitions, and often they are multimodal (Hagen, 1992), suggesting that standard clustering methods
can be used on the coordinates of these vectors. Equations (7) and (8) show that L and N place
nodes approximately at the centroids of their neighbours. For N, the distances are actually measured
in χ2 space, meaning that nodes with very similar patterns of connections will be close together
(Benzecri, 1992). This clustering happens with either positive or negative eigenvalues (on- or off-
diagonal). The latter are important in nearly bipartite networks with few triangles (Fig. 3).
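A small sketch of the negative-eigenvalue case (the 4-cycle here is a hypothetical example, not taken from the chapter's data): for a bipartite graph, N has eigenvalue -1, and the signs of its eigenvector recover the two sides.

```python
import numpy as np

# A 4-cycle 0-1-2-3-0: bipartite with sides {0,2} and {1,3}.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

N = A / A.sum(axis=1, keepdims=True)   # row-normalised adjacency

vals, vecs = np.linalg.eig(N)
order = np.argsort(vals.real)
vals, vecs = vals.real[order], vecs.real[:, order]

# The most negative eigenvalue is -1; the signs of its eigenvector
# split the nodes into the two bipartite classes.
neg = vecs[:, 0]
sides = neg > 0
```

Clustering the coordinates of such eigenvectors (by k-means, say) groups nodes with similar connection patterns even when the relevant eigenvalue is negative.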
3. Problems
Farkas, et al. (2001) and Goh, et al. (2001) report that the important eigenvectors of A are very
localized on nodes of high degree, and suggest that this effect may be used to distinguish certain
types of networks. This effect does not occur for L or N (Fig. 2), since each includes some control for
degree, and so far no similar results for distinguishing network types have been reported for these
spectra. The biggest problem for L and N is their sensitivity to "long paths", especially to pendant
trees attached to the main body of the network (Seary & Richards, 2000). For N, these may be
interpreted as nodes that are hard to reach (distant) in a random walk. For long paths internal to a
network, this effect is actually an advantage, since these cycles are detected as "locally bipartite" and
emphasized in important eigenvectors. Nodes on such paths can have a large effect on global
properties such as diameter (Figs. 3-4).
Two-mode networks
Two-mode networks mix two different kinds of nodes and connections. A simple example is an
affiliation network, such as people and the events they attend. We could be interested in finding sets of
people with events in common (or, equivalently, sets of events attended by the same people): this is an
example of co-clustering. Affiliation networks can be represented by bipartite graphs, for which A and N
9 See Dhillon (2001) for a formal derivation and proof.
are most suited, since they have symmetric spectra for these graphs (the eigenvalues occur in pairs with opposite
sign). Because of this we don't need the entire bipartite matrix: we can work with the rectangular
representation, and infer the missing parts of the eigendecomposition. If we assume m1 people and m2
events, the resulting eigenvectors consist of m1 components for people followed by m2 components for
events. The resulting blocks will be strictly off-diagonal, and once again the eigenvectors of N provide a
superior solution by maximizing χ2. In fact, this solution is identical to that provided by Correspondence
Analysis, a statistical technique for finding patterns in 2-mode data (Benzecri, 1992).
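The equivalence with Correspondence Analysis can be sketched via the singular value decomposition of the degree-normalised rectangular matrix. The affiliation matrix below is hypothetical, and the scaling follows the standard CA convention rather than any code of the authors':

```python
import numpy as np

# Hypothetical affiliation matrix F: 4 people x 3 events.
F = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)

Dr = F.sum(axis=1)   # row (people) degrees
Dc = F.sum(axis=0)   # column (event) degrees

# Normalise as Dr^(-1/2) F Dc^(-1/2): its singular vectors correspond
# to eigenvectors of N for the full bipartite graph, without ever
# building the (m1+m2) x (m1+m2) matrix.
S = F / np.sqrt(np.outer(Dr, Dc))
U, sing, Vt = np.linalg.svd(S)

# The leading singular value is the "trivial" 1; the next ones carry
# the chi-squared (correspondence analysis) coordinates used for
# co-clustering people and events in the same space.
people_coords = U[:, 1] / np.sqrt(Dr)
event_coords = Vt[1, :] / np.sqrt(Dc)
```

People with identical event profiles (the first two rows) receive identical coordinates, which is exactly the co-clustering behaviour described above.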
Partial Iteration
For large networks, it is not necessary or desirable to calculate the entire eigendecomposition.
For very large networks, it may not be possible in terms of time and space to calculate even a few
eigenpairs. Nevertheless, it is possible to get at a large amount of the global and local network
structure by partially iterating using the Power Method. A few iterations of N = D^-1 A, with each
iteration placing nodes at the means of their neighbours, will produce a mixture of the most important
eigenvectors. Consider the spectral representation

N = Σi vi niDniT

We know that (niDniT)^K = niDniT for all K, so the contributions of eigenvectors with small vi quickly
drop out as (vi)^K approaches 0. This means that N^K is dominated by the dimensions with vi near
1. We start with a random vector, and quickly (6-10 iterations) produce such a mixture. Moody
(2001) describes a procedure in which this process is repeated a number of times, each producing a
slightly different mixture of the important eigenvectors (Fig. 5). The results are then passed to a
standard cluster analysis routine (such as k-means or Ward's method) to find any clusters of nodes.
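A sketch of multiple partial iteration under these assumptions (the test graph, iteration count, and normalisation details are illustrative, not the authors' exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical test graph: two 4-cliques joined by one edge.
A = np.zeros((8, 8))
for i, j in [(0,1),(0,2),(0,3),(1,2),(1,3),(2,3),
             (4,5),(4,6),(4,7),(5,6),(5,7),(6,7),(3,4)]:
    A[i, j] = A[j, i] = 1
deg = A.sum(axis=1)

def partial_iterate(A, deg, iters=8, rng=rng):
    """A few power-method steps with N = D^-1 A: each step places every
    node at the mean of its neighbours, leaving a mixture of the most
    important eigenvectors."""
    x = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        x = (A @ x) / deg           # one step of N x
        x = x - x.mean()            # suppress the trivial constant vector
        x = x / np.linalg.norm(x)   # keep the scale under control
    return x

# Moody-style multiple partial iteration: several random starts give
# slightly different mixtures, which a clustering routine (k-means,
# Ward's method) would then combine to find clusters.
coords = np.column_stack([partial_iterate(A, deg) for _ in range(3)])
```

Each column of `coords` is one partial-iteration result; the rows are the node coordinates handed to the clustering step.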
Further analysis
The method of partial iteration of N has been used for years in the program NEGOPY (Richards
and Rice, 1981; Richards, 1995), as the first step in a more complex analysis. A key concept in
NEGOPY is that of liaisons. These are nodes which do not have most of their connections with
members of a cohesive cluster of nodes, but rather act as connections between clusters (Figs. 3-4).
Often it is the liaisons that provide the connections that hold the whole network together. Finding the
liaisons requires detailed knowledge about the members of (potential) clusters and their connections,
and is not an immediate result of a partition based on eigenvectors or clustering methods. Neverthe-
less, eigendecomposition methods, full or partial, are an excellent strategy to begin such analysis.
Future prospects
More work needs to be done on the categorization of networks based on important eigenpairs of
L and N. Recent reports (Koren, et al., 2002; Walshaw, 2000) suggest we might not need to resort
to partial methods after all: we can find important eigenpairs exactly for enormous networks (m > 10^5)
using "small" amounts of time and memory by first reducing the network in some way by sampling,
solving the reduced eigenproblem, then interpolating back up with a very good "first guess" for the
Power Method. Preliminary tests show that this should work equally well for Lanczos iteration.
References
Aho, A., Hopcroft, J., & Ullman, J. (1997). Data Structures and Algorithms. Addison-Wesley.
Alon, N. and Milman, V. (1985). λ1, Isoperimetric Inequalities for Graphs, and Superconcentrators. J. Comb. Theory B 38: 73-88.
Barnard, S. and Simon, H. (1994). Fast Implementation of Recursive Spectral Bisection for Partitioning Unstructured Problems. Concurrency: Practice and Experience 6(2): 101-117.
Benzecri, J.-P. (1992). Correspondence Analysis Handbook. Marcel Dekker Inc.
Bien, F. (1989). Constructions of Telephone Networks by Group Representations. Notices of the Am. Math. Soc. 36: 5-22.
Biggs, N. (1993). Algebraic Graph Theory. Cambridge University Press.
Brandes, U. and Cornelsen, S. (2001). Visual Ranking of Link Structures. Lecture Notes in Computer Science 2125, Springer-Verlag.
Chavel, I. (1984). Eigenvalues in Riemannian Geometry. Academic Press.
Chow, T.Y. (1997). The Q-spectrum and spanning trees of tensor products of bipartite graphs. Proc. Am. Math. Soc. 125(11): 3155-3161.
Chung, F.R.K. (1988). Diameters and Eigenvalues. J. Am. Math. Soc. 2(2): 187-196.
Chung, F.R.K., Faber, V. and Manteuffel, T.A. (1994). An upper bound on the diameter of a graph from eigenvalues associated with its Laplacian. SIAM J. Disc. Math. 7(3): 443-457.
Chung, F.R.K. (1995). Spectral Graph Theory. CBMS Lecture Notes, AMS Publication.
Courant, R. and Hilbert, D. (1966). Methods of Mathematical Physics. Interscience Publishers.
Cvetkovic, D., Doob, M. and Sachs, H. (1995). Spectra of Graphs. Academic Press.
Cvetkovic, D. and Rowlinson, P. (1990). The largest eigenvalue of a graph: a survey. Linear and Multilinear Algebra 28: 3-33.
Cvetkovic, D., Doob, M., Gutman, I. and Torgasev, A. (1988). Recent Results in the Theory of Graph Spectra. North Holland.
Dhillon, I.S. (2001). Co-clustering documents and words using bipartite spectral graph partitioning. UT CS Technical Report TR 2001-05.
Diaconis, P. and Stroock, D. (1991). Geometric Bounds for Eigenvalues of Markov Chains. Ann. Appl. Prob. 1: 36-61.
Dodziuk, J. and Kendall, W. S. (1985). Combinatorial Laplacians and Isoperimetric Inequality. In K. D. Ellworthy (ed.), From Local Times to Global Geometry. Pitman Research Notes in Mathematics Series 150: 68-74.
Farkas, I., Derenyi, I., Barabasi, A.-L. and Vicsek, T. (2001). Spectra of "real-world" graphs: beyond the semicircle law. cond-mat/0102335.
Fiedler, M. (1973). Algebraic Connectivity of Graphs. Czech. Math. J. 23: 298-305.
Fisher, M. (1966). On hearing the shape of a drum. J. Combin. Theory 1: 105-125.
Freeman, L. (1979). Centrality in social networks: Conceptual clarification. Social Networks 1: 215-239.
Friedman, J. (1993). Some Geometric Aspects of Graphs and their Eigenfunctions. Duke Mathematical Journal 69: 487-525.
Garey, M. and Johnson, D. (1979). Computers and Intractability. W. H. Freeman.
Goh, K.-I., Kahng, B. and Kim, D. (2001). Spectra and eigenvectors of scale-free networks. cond-mat/0103337.
Greenacre, M. (1984). Theory and Applications of Correspondence Analysis. Academic Press.
Grone, R., Merris, R. and Sunder, V. (1990). The Laplacian Spectrum of a Graph. SIAM J. Matrix Anal. App. 11(2): 218-238.
Grone, R. and Merris, R. (1994). The Laplacian Spectrum of a Graph II. SIAM J. Discrete Math. 7(2): 221-229.
Hagen, L. (1992). New Spectral Methods for Ratio Cut Partitioning and Clustering. IEEE Trans. CAD 11(9): 1074-1085.
Hall, K. (1970). An r-dimensional Quadratic Placement Algorithm. Management Sci. 17: 219-229.
Harary, F. and Schwenk, A. (1979). The spectral approach to determining the number of walks in a graph. Pacific J. Math. 80: 443-449.
Hendrickson, B. and Leland, R. (1995). An improved spectral graph partitioning algorithm for mapping parallel computations. SIAM J. Sci. Comput. 16(2): 452-469.
Hotelling, H. (1933). Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 24: 417-441, 498-520.
Joliffe, I.T. (1986). Principal Components Analysis. Springer-Verlag, New York.
Kirchhoff, G. (1847). Über die Auflösung der Gleichungen... Ann. Phys. Chem. 72: 497-508.
Koren, Y., Carmel, L. and Harel, D. (2002). ACE: a fast multiscale eigenvector computation for huge graphs. IEEE Symposium on Information Visualization.
Lawler, G. F. and Sokal, A. D. (1988). Bounds on the L2 Spectrum for Markov Chains and Markov Processes: a Generalization of Cheeger's Inequality. Trans. Amer. Math. Soc. 309: 557-580.
Lovasz, L. (1995). Random Walks, Eigenvalues and Resistance. Handbook of Combinatorics, Elsevier, 1740-1745.
Lubotzky, A. (1994). Discrete Groups, Expanding Graphs, and Invariant Measures. Birkhauser.
Mohar, B. (1991). The Laplacian Spectrum of Graphs. In Alavi, Chartrand, Ollermann and Schwenk (eds.), Graph Theory, Combinatorics and Applications. Wiley, 871-898.
Mohar, B. and Poljak, S. (1991). Eigenvalues and the Max-cut Problem. Czech. Math. J. 40: 343-352.
Moody, J. (2001). Peer influence groups: identifying dense clusters in large networks. Social Networks 23: 261-283.
Parlett, B. (1980). The Symmetric Eigenvalue Problem. Prentice-Hall.
Parlett, B., Simon, H. and Stringer, L. (1982). On Estimating the Largest Eigenvalue with the Lanczos Algorithm. Mathematics of Computation 38(157): 153-165.
Pothen, A., Simon, H. and Liou, K.-P. (1990). Partitioning Sparse Matrices with Eigenvectors of Graphs. SIAM J. Matrix Anal. App. 11(3): 430-452.
Powers, D. (1988). Graph partitioning by eigenvectors. Linear Algebra Appl. 101: 121-133.
Richards, W.D. and Seary, A.J. (2000). Eigen Analysis of Networks. J. Social Structure. http://www.heinz.cmu.edu/project/INSNA/joss/index1.html
Richards, W.D. and Rice, R. (1981). The NEGOPY Network Analysis Program. Social Networks 3(3): 215-224.
Richards, W.D. (1995). NEGOPY 4.30 Manual and User's Guide. School of Communication, SFU. http://www.sfu.ca/richards/PdfZipFiles/negman98.pdf
Seary, A.J. and Richards, W.D. (2000). Negative eigenvectors, long paths and p*. Paper presented at INSNA XX, Vancouver, BC. http://www.sfu.ca/richards/Pages/longpaths.pdf
Seary, A.J. and Richards, W.D. (1998). Some Spectral Facts. Presented at INSNA XIX, Charleston. http://www.sfu.ca/richards/Pages/specfact.pdf
Seary, A.J. and Richards, W.D. (1995). Partitioning Networks by Eigenvectors. Presented to the European Network Conference, London. Published in Everett, M.G. and Rennolls, K. (eds.) (1996). Proceedings of the International Conference on Social Networks, Volume 1: Methodology. 47-58.
Simon, H. (1991). Partitioning of Unstructured Problems for Parallel Processing. Computing Systems in Eng. 2(2/3): 135-148.
Sinclair, A. (1993). Algorithms for Random Generation and Counting. Birkhauser.
Walshaw, C. (2000). A multilevel algorithm for force-directed graph drawing. Proc. Graph Drawing 2000, Lecture Notes in Computer Science 1984: 171-182.
Walshaw, C. and Berzins, M. (1995). Dynamic Load-balancing for PDE Solvers on Adaptive Unstructured Meshes. Concurrency: Practice and Experience 7(1): 17-28.
Wasserman, S. and Faust, K. (1994). Social Network Analysis. Cambridge University Press.
West, D. B. (1996). Introduction to Graph Theory. Prentice Hall.
Figure 1. Six views of a 6 x 8 x 10 grid. Left: as the Cartesian sum of three paths, producing a three-dimensional grid. Right: as the Kronecker product, producing a bipartite graph.
a) Using the eigenvectors of A. With no "trivial" vector to combine with, the displays are distorted.
b) Eigenvectors of L behave well under Cartesian sum: coordinates of a 1-D path are replicated as straight lines. Eigenvectors of L do not behave well under Kronecker product.
c) Using eigenvectors of N. Lines are slightly curved, so the cube is slightly distorted. The largest negative eigenvector of N captures bipartiteness perfectly.
Figure 2. Two views of a small social network with 145 nodes. (Data Source: L. Koehly)
a) The first 3 eigenvectors of N produce clusters of nodes. These are labelled with the resulting partition into 4 blocks, along with the liaisons. The central liaison is of high degree and holds the network together. The figure is slightly rotated to show the clusters.
b) Adjacency matrix permuted by partition numbers. The blocks have no interconnections, and the network is held together by the liaisons (right and bottom).

Figure 3. Two views of a small social network of drug users with 114 nodes. (Data Source: Scott Clair)
a) The first 3 eigenvectors of N. The 3rd eigenvalue is negative, since there are very few triangles in the rightmost cluster. The labelling is by ethnic group, which shows a close relation to structure. There are only two connections between the two main ethnic groups.
b) Adjacency matrix permuted by partition numbers. In this case the liaisons are of low degree.
Figure 4. Two views of a needle exchange network. (Data Source: T. Valente and R. Foreman). This network is moderately large (N=2736) and roughly scale-free (k ≈ 1.7). The eigenvectors of A are dominated by nodes of high degree (>100).
a) This figure shows the close relation between the first 3 eigenvectors of N and four needle exchange sites, which are used to label the nodes. The figure is rotated to make this clear.
b) This figure is like an adjacency matrix, except links are located by the coordinates of the first eigenvector of N. The network is dominated by exchanges within sites E and W.

Figure 5. A moderately large (N=20,000) artificial network (Data Source: J. Moody) constructed for testing purposes. The network was constructed from tightly connected groups of 50 nodes, each group then loosely connected in sets of 8, with 400 of these even more loosely connected into a single component.
a) Close-up of clusters formed by the first 3 eigenvectors of N, labelled by construction.
b) Clusters formed by placing each node at the centroid of its neighbours, iterating 8 times with 3 random starts (multiple partial iteration). Labelled by construction.
APPENDIX (Glossary, examples, facts and definitions)
adjacency matrix: A network (graph) may be represented by a matrix of zeros and ones, with a one
indicating that two nodes are connected (adjacent), and a zero otherwise. In a weighted graph, the ones may
be replaced by other positive numbers (e.g., a distance or cost).
A sample adjacency matrix is shown below. See link list
  a b c d e f g h
a 0 1 1 1 0 0 0 0
b 1 0 1 1 0 0 0 0
c 1 1 0 1 0 0 0 0
d 1 1 1 0 0 0 0 0
e 0 0 0 0 0 0 1 1
f 0 0 0 0 0 0 1 1
g 0 0 0 0 1 1 0 0
h 0 0 0 0 1 1 0 0
a has connections to b, c, d
b has connections to a, c, d
...etc...
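For illustration, the sample matrix can be built from its link list with NumPy (the variable names and layout are ours, not part of the chapter; row e is completed by symmetry with rows g and h):

```python
import numpy as np

nodes = "abcdefgh"
links = [("a","b"), ("a","c"), ("a","d"), ("b","c"), ("b","d"),
         ("c","d"), ("e","g"), ("e","h"), ("f","g"), ("f","h")]

idx = {v: i for i, v in enumerate(nodes)}
A = np.zeros((8, 8), dtype=int)
for u, v in links:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1   # undirected: symmetric

degrees = A.sum(axis=1)   # row sums give node degrees
```

Note that only one pair per link is listed; symmetry supplies the other triangle of the matrix.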
Adjacency spectrum: The adjacency matrix of a graph, like any matrix, may be subject to an eigen-
decomposition. In graph theory, the resulting set of eigenvalues is referred to as the graph spectrum, in
analogy to the continuous spectrum from continuous spectral analysis methods such as Fourier analysis. In
Fourier analysis, the spectrum is understood to refer to the weighting of sines and cosines, whereas the
discrete graph spectrum (the eigenvalues) gives the weights of eigenvectors with unknown functional form. We
sometimes use the term eigenpair to refer to both eigenvalues and eigenvectors. Since there are other spectra
associated with graphs, we refer to this one as the Adjacency or Standard spectrum.
block: A block may be contrasted with a clique in the sense that the former is defined as a set of nodes that
have similar patterns of links to nodes in other sets, while the latter is a set of nodes that have most of their
links to other nodes in their set. All cliques are blocks, but some blocks are not cliques. One of the aims of
block-modelling is to identify roles by clustering the nodes so that those with similar patterns of connections
are next to one another in the matrix. The members of each block perform similar roles in the network.
block model: a higher-level description of a network, where roles (or blocks) are represented by a
simplified graph. For the matrix above, a block model would be:
1 0 0
0 0 1
0 1 0
Cartesian sum: A form of graph composition, which forms more complex graphs from simpler ones. The
Cartesian sum may be expressed in terms of the Kronecker product as:
A1 ⊕ A2 = A1 ⊗ I2 + I1 ⊗ A2 (where I1 and I2 are identity matrices of appropriate size)
As an example, the Cartesian sum of two paths is a rectangular grid.
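The identity can be checked numerically; the sketch below forms the Cartesian sum of two 2-node paths, which should give a 4-cycle (a 2 x 2 grid):

```python
import numpy as np

# Adjacency matrix of a 2-node path (a single edge).
P2 = np.array([[0, 1],
               [1, 0]])

# Cartesian sum via the Kronecker identity:
# A1 (+) A2 = A1 x I2 + I1 x A2
I2 = np.eye(2, dtype=int)
C = np.kron(P2, I2) + np.kron(I2, P2)
```

Every node of the result has degree 2 and the four nodes form a single cycle, as expected for a 2 x 2 grid.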
clique: In graph theory, a clique is a subgraph in which all nodes are connected to each other. In social
networks, a clique is a set of nodes with most of their connections to other members of the clique. This
would generally correspond to an informal role (e.g. friendship). In the above matrix, {a,b,c,d} form a
clique.
cluster: A collection of points that are "close" to each other in some sense. Many definitions (and related
techniques) are available. For networks, we should also insist that the points share connections, either
within the cluster (clique) or with another cluster (see block model).
component: If a graph is connected, it consists of a single component. A disconnected graph has pairs of
nodes with no path between them, and may consist of several components.
connected: If there is a path between every pair of nodes in a graph, the graph is said to be connected. A
disconnected graph has pairs of nodes with no path between them, and so distances (and diameters)
cannot be defined, except within each component.
226
DYNAMIC SOCIAL NETWORK MODELING MID ANALYSIS
OCR for page 209
19
distance: For graphs, the distance between nodes is defined as the smallest number of links connecting
them. Also called geodesic distance.
diameter: the largest geodesic distance between any pair of nodes in a graph
gap: The term gap or spectral gap refers to large distances in the spectrum of eigenvalues, particularly
between 0 and the second-smallest (Laplacian) or between 1 and the second-largest in absolute value
(Normal). A small gap means that a graph can be disconnected with few edge-cuts; a large gap means there
are many paths between sets of nodes.
global vs local methods: In graph theory, a local method is one that examines only a few neighbours of a
node. A global method is one which examines the entire graph, such as an eigendecomposition.
Kronecker product: A form of graph composition, which forms more complex graphs from simpler ones.
An example of the Kronecker product is:
1 1     1 0     1 0 1 0
1 0  ⊗  0 1  =  0 1 0 1
                1 0 0 0
                0 1 0 0
where every 1 in the first matrix has been replaced by a complete copy of the second matrix (and every 0
by a block of zeros). In this example the first matrix is a block model, not a graph.
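The example above can be reproduced with NumPy's `kron` (a sketch):

```python
import numpy as np

B = np.array([[1, 1],
              [1, 0]])      # a block model, not a graph
I = np.eye(2, dtype=int)

K = np.kron(B, I)           # every 1 in B becomes a copy of I
```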
Lanczos iteration: a generalization of the power method which allows calculation of a specified number of
eigenpairs without loss of precision or orthogonality. Currently one of the best methods for
eigendecomposition of large systems.
Laplacian spectrum: The eigenvalues (and eigenvectors) of a matrix formed by subtracting the adjacency
matrix from a diagonal matrix of node degrees. The eigenvalues are nonnegative, with a "trivial" (constant)
eigenvector of eigenvalue 0. This discrete analogue of the continuous Laplacian shares a great many of its
important properties. For this reason, it has become the focus of much research in the last decade.
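A minimal sketch of the Laplacian spectrum for a hypothetical triangle graph, confirming the non-negative eigenvalues and the "trivial" constant eigenvector for eigenvalue 0:

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])          # a triangle (each node has degree 2)

L = np.diag(A.sum(axis=1)) - A    # Laplacian: degree matrix minus A

vals, vecs = np.linalg.eigh(L)    # real, ascending eigenvalues
```

For the triangle the spectrum is {0, 3, 3}, and the eigenvector for 0 has all components equal.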
liaisons: according to NEGOPY, these come in two types. Direct liaisons are individuals who have most of
their interaction with members of groups, but not with members of any one group. They provide direct
connections between the groups they are connected to. Indirect liaisons are individuals who do not have
most of their interaction with members of groups. They provide indirect or 'multi-step' connections between
groups by connecting Direct Liaisons, who have direct connections with members of groups (Richards,
1995).
link: A pair of nodes with some connection between them. In graph theory, links are also called edges or
lines. In social networks, links are often called ties.
link list: A sparse format for storing information in a network. Only the pairs of nodes that are connected
are in the link list. For symmetric graphs, only one pair is needed for each link. For weighted graphs, a third
column may be used to hold the weights. For the symmetric adjacency matrix shown above, the link list is:
1 2
1 3
1 4
2 3
...and so on...
localized: As applied to an eigenvector, means that most of the coordinates are near zero, and only a few
have large values. Coordinates may be either positive or negative, and the eigenvectors are normalized to
make the sum of squares of components 1, so the sum of 4th powers is generally used as a measure of
localization. If this sum is near 1, only a small number of coordinates are important. If it is near 1/m, then
all m nodes contribute to the eigenvector.
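This measure (the inverse participation ratio) is easy to sketch; the two test vectors below are hypothetical extremes:

```python
import numpy as np

def localization(v):
    """Sum of 4th powers of a unit-normalised vector: near 1 when only
    a few coordinates matter, near 1/m when all m nodes contribute
    equally."""
    v = v / np.linalg.norm(v)
    return np.sum(v ** 4)

m = 100
spread = np.ones(m)                   # every node contributes equally
spike = np.zeros(m)
spike[0] = 1.0                        # fully localized on one node
```

`localization(spread)` gives 1/m and `localization(spike)` gives 1, the two extremes described above.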
neighbourhood: all the nodes which are connected to a given node. May be extended to all nodes connected
to a set of nodes, but not including the original set.
NEGOPY: (NEGative entrOPY) (Richards and Rice, 1981; Richards, 1995) is a computer program
designed to find clique structures. It uses a random starting vector, and multiplies it by the row-normalised
adjacency matrix, subtracting off row means. Usually 6-8 such iterations are performed, resulting in a
vector which is a mixture of the important Normal eigenvectors (Richards and Seary, 1997). This
vector is then scanned for potential clique structures, which are tested against the original network and for
some statistical properties (e.g., variance in the node degrees). Sparse matrix methods are used throughout,
allowing large networks to be analysed rapidly.
node: An object that may have some kind of connection (link) to another object. In some cases, nodes are
people, organizations, companies, countries, etc. In graph theory nodes are also called vertices and points.
In social networks, nodes are often called actors.
normal spectrum: The eigenvalues (and eigenvectors) of a row-normalised adjacency matrix. This matrix
is row-stochastic, and similar to a symmetric matrix, so its eigenvalues are real and less than or equal to 1
in absolute value. It is closely related to the Laplacian (indeed, it may be defined to be the Laplacian in the
χ2 metric defined by the node degrees).
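A sketch of the Normal spectrum for a hypothetical 4-node path, checking the stated bounds (the -1 appears because a path is bipartite):

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # a 4-node path

N = A / A.sum(axis=1, keepdims=True)        # row-stochastic

vals = np.sort(np.linalg.eigvals(N).real)   # real, in [-1, 1]
```

The largest eigenvalue is exactly 1 (the "trivial" constant eigenvector), and all eigenvalues lie in [-1, 1] as stated.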
partition: A partition of a graph is a division of the nodes into a collection of non-empty mutually exclusive
sets. A partition of the adjacency matrix shown above could be: {a,b,c,d}, {e,f,g,h}, so that there are no
links between the nodes in each part of the partition.
sparse matrix techniques: In analysis of networks with more than 50 or 60 members, it is usually the case
that each node is connected to only a fraction of the others. The adjacency matrix for such networks contains
mostly zeroes, indicating the absence of links. In these situations, it is far easier to work with a list of
the links (link list) that are present, rather than the whole matrix, which contains many times more numbers.
Any array (such as an adjacency matrix) which consists mostly of some default number (usually zero) may
be treated as a sparse matrix. Since this value is known, it does not need to be stored as part of the array.
This allows the array to be stored in a much more efficient manner: e.g., for an adjacency matrix, we only
need to store the links (pairs of nodes) when they exist. For a weighted adjacency matrix, we also need to
store the values of the weights, one for each link. Many matrix operations (e.g., multiplying a matrix by a
vector) can utilize this more efficient storage to run much faster as well. Sparse matrix techniques are
those which avoid any manipulation of the matrix that would affect the sparseness property (e.g., taking the
inverse will generally do this, as will correlating each row or column with all the others). It is quite
possible to find eigenvalues and eigenvectors using sparse techniques.
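A sketch of this with SciPy's sparse tools (the ring graph and parameter choices are illustrative): the matrix is assembled directly from a link list, and a few eigenpairs are found without ever storing the zeros.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import eigsh

# Build a sparse adjacency matrix for a 1000-node ring from its link
# list alone: only the 2m nonzero entries are stored.
m = 1000
rows = np.arange(m)
cols = (rows + 1) % m
data = np.ones(m)
A = csr_matrix((np.concatenate([data, data]),
                (np.concatenate([rows, cols]),
                 np.concatenate([cols, rows]))), shape=(m, m))

# A few extreme eigenpairs, computed without densifying the matrix.
vals, vecs = eigsh(A, k=3, which="LA")
```

The ring is 2-regular, so its largest adjacency eigenvalue is 2; `eigsh` finds it from the sparse representation alone.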
spectral analysis or methods: Loosely speaking, another term for eigendecomposition. Mathematically
speaking, a general term referring to any restatement of some function in terms of a set of basis functions
(e.g. sines and cosines for Fourier analysis). The spectrum is the set of weights of these basis functions. The
Fourier transform is especially useful in mathematical physics since the sines and cosines (or e^z for
complex z) are eigenfunctions of the ubiquitous derivative and integral operators. The terms function,
operator and eigenfunction have the discrete analogues of vector, matrix and eigenvector.
Standard spectrum: see Adjacency spectrum