Duality Between Temporal Networks and Signals
Abstract
We develop a framework to track the structure of temporal networks with a signal processing approach. The
method is based on the duality between networks and signals using a multidimensional scaling technique. This
enables a study of the network structure using frequency patterns of the corresponding signals. An extension is
proposed for temporal networks, thereby enabling a tracking of the network structure over time. A method to
automatically extract the most significant frequency patterns and their activation coefficients over time is then
introduced, using nonnegative matrix factorization of the temporal spectra. The framework, inspired by audio
decomposition, allows transforming back these frequency patterns into networks, to highlight the evolution of
the underlying structure of the network over time. The effectiveness of the method is first demonstrated on a toy
example, before being used to study a temporal network of face-to-face contacts. The extraction of sub-networks
highlights significant structures localized on specific time intervals.
1 Introduction
Many complex systems, whether physical, biological or social, can be naturally represented as networks, i.e., a set
of relationships between entities. Network science [28] has been widely developed to study such objects, generally
supported by a graph structure, by providing powerful tools, such as the detection of communities [12], in order
to understand the underlying properties of these systems. Recently, connections between signal processing and
network theory have emerged: The field of signal processing over networks [31, 29] has been introduced with the
objective of transposing concepts developed in classical signal processing, such as the Fourier transform or wavelets,
to the graph domain. These works have led to significant results, among them the filtering of signals defined over a
network [31] or multiscale community mining using graph wavelets [33]. These approaches benefit from the natural
representation of networks by graphs, enabling the use of the comprehensive mathematical understanding of graph
theory. Another approach has also been considered, defining a duality between graphs and signals: methods have
been developed to transform graphs into signals and conversely, in order to take advantage of both signal processing
and graph theory. Hence, mapping a graph into time series has been performed using random walks [34, 3, 15]
or deterministic methods based on classical multidimensional scaling [23, 30]. This latter approach has been the
topic of several extensions in [22], in order to build a comprehensive framework to link frequency patterns of the
so-obtained signals with network structures.
These studies have mainly focused on the analysis of static networks, potentially aggregated over time, but without
time evolution. However, the considered systems are most of the time not frozen: vertices and edges appear and disappear
over the course of time. Aggregating the networks over a time interval gives insight into the underlying mechanisms,
but often does not provide the actual dynamic sequence of appearance and disappearance of edges: two edges, active
one after the other, will be considered simultaneous in the temporally-aggregated network. Given the importance
of knowing such dynamics, for instance in topics such as epidemic spread or communication networks, and thanks
to the recent availability of many data sets, a temporal network theory has recently appeared [24, 4], enabling a
deeper understanding of underlying mechanisms. Several studies proposed an extension of the methods developed
for static networks to the temporal case: we can cite for instance works on network modeling [2, 7, 16, 35], detection
of communities [27, 37, 14, 13, 5], detection of temporal motifs [25], visualization [36], or more generally data mining
of time-evolving sequences [38].
We propose in this article a new approach to study temporal networks, based on the duality between networks and
signals introduced in [22]. The objective is to follow the global structure of the network over time. A
first contribution consists of an extension of the existing work to the temporal case, which is naturally performed
by considering each time instant as a static network. This approach enables us to visually track temporal networks
by following the frequency patterns associated with specific structures. The second contribution consists of using
nonnegative matrix factorization (NMF) [26] to automatically extract and track the significant frequency patterns
over time. Details about how these components can be transformed back into networks are also
provided.
Preliminary versions of this work have been presented in different places: the principle of extension of the
method to the temporal case is introduced in [19], as well as the visual tracking of frequency patterns representing
structures. In [18, 17], this approach has been used to study a bike sharing system in Lyon, a study not reproduced
in this article. Finally, the idea of the decomposition using NMF has been suggested in [20]. This paper extends
those previous works, however, by detailing a comprehensive framework and setting out consistent arguments for the
validity of the method in the context of the analysis of temporal networks.
The paper is organized as follows. Section 2 recalls the duality between static networks and signals, as introduced
in [30] and extended in [22]. Section 3 gives an extension of the transformation to the temporal case, and introduces
a toy temporal network highlighting specific network structures. Section 4 shows how spectral analysis is used to
track the structure of the temporal network over time, while Section 5 describes a decomposition of a temporal
network into sub-networks by applying nonnegative matrix factorization to the matrix of spectra over time. For
both sections, an illustration with the toy temporal network is provided. Finally, Section 6 applies the developed
framework to a real-world temporal network from the SocioPatterns project, describing face-to-face contact between
children in a primary school.
Notations Matrices are denoted by boldface capital letters, their columns by boldface lowercase letters, and
their elements by lowercase letters: for M ∈ R^{A×B}, M = [m_1, ..., m_B] = (m_{ab}), a ∈ {1, ..., A}, b ∈ {1, ..., B}. Tensors are
represented by calligraphic capital letters: for T ∈ R^{A×B×T}, T = [M^{(t)}], t ∈ {1, ..., T}. Operators are represented by
calligraphic boldface capital letters: F.
As discussed in [22], we choose w = 1 + 1/N. This definition focuses on the presence (denoted by a distance equal
to 1) or the absence (denoted by a distance equal to w) of an edge between two vertices. Hence, the distance between
two vertices in the graph, often defined as the length of the shortest path between them, has no direct
impact on the matrix ∆: two pairs of unlinked vertices will have a distance equal to w, whether they are close or
not in the graph. Applying CMDS on the distance matrix ∆ leads to a collection of points, one per
vertex, in a Euclidean space R^{N−1}. The considered signals, or components, correspond to the coordinates of the points along
each dimension. The collection of signals is denoted by X ∈ R^{N×C}, where C is the total number of
components, equal to N − 1 for the full representation. The column x_c represents the c-th signal, with
c ∈ {1, ..., C}, and is indexed by the vertices. The transformation is denoted by T: T[A] = X.
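As a concrete illustration, the transformation T can be sketched in a few lines of NumPy (a minimal sketch under our own naming, not the reference implementation of [22]): the matrix of squared distances is double-centered and eigendecomposed, as in classical MDS [1].

```python
import numpy as np

def graph_to_signals(A):
    """Classical MDS (CMDS) mapping of a graph to signals: vertex i gets the
    coordinates of the i-th point, and column c of X is the c-th signal."""
    N = A.shape[0]
    w = 1 + 1.0 / N
    # Distance 1 between linked vertices, w between unlinked ones (zero diagonal).
    Delta = np.where(A > 0, 1.0, w)
    np.fill_diagonal(Delta, 0.0)
    # Double-center the squared distances, then eigendecompose (CMDS).
    J = np.eye(N) - np.ones((N, N)) / N
    B = -0.5 * J @ (Delta ** 2) @ J
    eigval, eigvec = np.linalg.eigh(B)
    order = np.argsort(eigval)[::-1]                  # decreasing energy
    eigval, eigvec = eigval[order], eigvec[:, order]
    # N - 1 components for the full representation; clip tiny negative eigenvalues.
    return eigvec[:, :N - 1] * np.sqrt(np.maximum(eigval[:N - 1], 0.0))
```

When the distance matrix is Euclidean-embeddable, the pairwise distances between the rows of X reproduce ∆ exactly, which is what the inverse transformation of Section 2.2 exploits.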
2.2 Inverse Transformation
Transforming back signals to a graph has to take into account the nature of the signals, as they are a representation
of a specific network. The inverse transformation must hence preserve the original topology of the underlying graph.
By construction of the collection of signals X, perfect retrieval of the underlying graph is easily achievable by
considering the distances between each pair of points: since they are built using CMDS, these distances reproduce the distance
matrix ∆, from which the adjacency matrix of the graph is directly obtained. However, when X is degraded or modified,
e.g., by taking C < N − 1, the distances are no longer exactly those computed between vertices, even if they
stay close to them. We proposed in [22] to take into account the energy of components
to improve the reconstruction, as well as prior information about the original graph. If the distance between two
vertices i and j in a high-energy component is high, it means that the two vertices are likely to be distant in the
graph. Conversely, if the distance in a high-energy component is low, then the two vertices are likely to be connected
in the graph.
Let X̃ be a degraded collection of signals. The energies are normalized according to the energies of the original
components, by multiplying the components x̃c by the normalization factor Nc :
N_c = \sqrt{ \sum_{n=1}^{N} x_{nc}^2 \Big/ \sum_{n=1}^{N} \tilde{x}_{nc}^2 }    (2)
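In code, the renormalization of Eq. (2) followed by a distance-based reconstruction can be sketched as follows (a crude NumPy sketch of the idea, not the exact reconstruction procedure of [22]; the function name and the midpoint threshold are our own choices):

```python
import numpy as np

def signals_to_graph(X_tilde, energies_orig):
    """Rescale degraded components to the original energies (Eq. 2), then link
    the pairs of vertices whose distance is closer to 1 than to w."""
    N = X_tilde.shape[0]
    Nc = np.sqrt(energies_orig / (X_tilde ** 2).sum(axis=0))   # Eq. (2)
    X = X_tilde * Nc
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    w = 1 + 1.0 / N
    A = (D < (1 + w) / 2).astype(int)   # midpoint between the two target distances
    np.fill_diagonal(A, 0)
    return A
```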
s_c = F[x_c]    (4)

estimated, for positive frequencies, on F = N/2 + 1 bins, F being the Fourier transform and c ∈ {1, ..., C}.
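With a real FFT, this amounts to the following sketch (the array X stands for any collection of signals; the random data is only a placeholder):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 99))   # stand-in collection of signals, N = 100
S = np.fft.rfft(X, axis=0)           # one spectrum per component (column)
M = np.abs(S)                        # magnitudes m_cf, on F = 100 // 2 + 1 = 51 bins
```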
From the spectrum S, the following features are obtained for each frequency of each component:
• the magnitudes M , which read as mcf = |scf |
Table 1: Generation of the Toy Temporal Network: probability to have an edge e at time t

                  e ∈ E_p    e ∉ E_p
  e ∈ E_{t−1}      0.99       0.8
  e ∉ E_{t−1}      0.2        0.01
2.5 Illustrations
Figure 1 shows illustrations of the transformation of a graph into a collection of signals, on several instances of
network models. These illustrations show the connection between graph structure and the resulting signals after
transformation. Regular structures (Figures 1a and 1b) highlight harmonic oscillations as components, while an
organization in communities (Figure 1c) presents high-energy frequencies on the first components. Combination of
both structures in the graph (Figure 1d) is preserved in the frequency pattern. Finally, a random graph (Figure 1e)
leads to a similar absence of structure in the corresponding frequency pattern.
As the number of vertices in the graph does not evolve, the number of components C, and hence the number of
frequencies F, are constant over time. In the case where the set of vertices evolves, the tensor is built by fixing
the number of components to the maximal number of components over time, and by zero-padding the missing
components. As for the indexation of vertices, the algorithm relabeling vertices according to the structure of the
network is performed at each time step, leading to a different labeling of the vertices over time.¹
The inverse transformation is performed likewise, by applying the static inverse transformation to
the collection of signals at time t:

A^{(t,r)} = T^{−1}[X^{(t)}]

where A^{(t,r)} represents the adjacency matrix of the reconstructed temporal network at time t.
¹ The modification of the labeling indicates how the vertex evolves over time in the global structure of the network. This aspect is not
discussed in the following; the labeling over time is kept only for purposes of reconstruction.
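The construction of the tensor of spectra, with zero-padding of the missing components, can be sketched as follows (our own naming; each element of the input list is the collection of signals X^{(t)} of one snapshot, all with the same number of vertices N):

```python
import numpy as np

def stack_spectra(signals_per_time):
    """Stack per-snapshot spectrum magnitudes into a (C, F, T) tensor,
    zero-padding the missing components when C varies over time."""
    N = signals_per_time[0].shape[0]
    C = max(X.shape[1] for X in signals_per_time)
    F = N // 2 + 1
    S = np.zeros((C, F, len(signals_per_time)))
    for t, X in enumerate(signals_per_time):
        S[:X.shape[1], :, t] = np.abs(np.fft.rfft(X, axis=0)).T
    return S
```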
Figure 1: Illustration of the transformation of a network into a collection of signals, on several instances of network
models. All networks have N = 100 vertices. (Left) Two-dimensional representation of the network. (Middle) The
three highest-energy components, and the arbitrarily chosen component 70, a low-energy one. (Right) Component-
frequency map obtained after spectral analysis. The color represents the intensity, coded from white to black. The
contrast is emphasized to expose the patterns.
Figure 2: Adjacency matrix A^{(t)} of the Toy Temporal Network at different time instants: (a) t = 19, (b) t = 39,
(c) t = 59, (d) t = 79, (e) aggregated adjacency matrix.
at the previous time, and on a prescribed network structure. Let E_t be the set of edges at time t, E_p the set of
edges of a prescribed network structure, and e = (i, j) the edge between the vertices i and j. Table 1 gives the
probability to have e in E_t according to the presence of e at the previous time (i.e., e ∈ E_{t−1}) and the presence
of e in the prescribed network structure (e ∈ E_p). These probabilities are set up so as to generate a smooth
transition between the initial structure and the prescribed network structure, such that the structure at the
end of the time interval is close to that of the prescribed graph.
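One step of this generation process can be sketched as follows (a minimal Python sketch; the function name is ours):

```python
import numpy as np

# Probabilities of Table 1, keyed by (e in prescribed structure, e at previous time).
TABLE_1 = {(True, True): 0.99, (False, True): 0.8,
           (True, False): 0.2, (False, False): 0.01}

def evolve_edges(prev_edges, prescribed_edges, n, rng):
    """Draw the edge set at time t given the edge set at time t-1 (Table 1)."""
    new_edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            e = (i, j)
            if rng.random() < TABLE_1[(e in prescribed_edges, e in prev_edges)]:
                new_edges.add(e)
    return new_edges
```

Iterating this step makes the edge set drift toward the prescribed structure: an edge of the prescribed structure ends up present with stationary probability 0.2/(1 − 0.99 + 0.2) ≈ 0.95.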
Four networks are successively chosen as prescribed network structures, among those introduced in Section 2,
each structure being active for a time interval of 20 time steps:
1. Random linkage of vertices (see Figure 1e)
2. Network with 3 communities (see Figure 1c)
3. 4-ring lattice (see Figure 1a)
4. 4-ring lattice with 3 communities (see Figure 1d)
Figure 2 shows the adjacency matrix of the temporal network at the end of each time interval. The prescribed
structure is clearly visible thanks to a proper labeling of vertices: the blocks describe the communities, while the
strong diagonal describes the regular lattice. These structures are nevertheless not perfectly defined, as noise due
to the probabilities set in Table 1 remains present. This makes it possible to assess the robustness of the further
analyses to noise. Figure 2e shows the adjacency matrix of the temporal network aggregated over the 80 time steps:
the four structures also appear in this representation.
Figure 3 plots three descriptors of the network at each time step, a natural way to analyze the tem-
poral network using network theory. The number of edges (Figure 3a), the average clustering coefficient (Figure 3b)
Figure 3: Descriptors of the Toy Temporal Network over time: (a) number of edges, (b) average clustering coefficient,
(c) average length of shortest paths. The alternating shaded regions correspond to the four different periods, whose
labels are given at the top of each region.
Figure 4: Marginals of the energies of the temporal spectra of the Toy Temporal Network: (a) averaged over
frequencies, E_f(c) for c ∈ {0, 1, 2, 70}; (b) averaged over components, E_c(f) for f ∈ {1, 2, 3, 35}.
and the average length of shortest paths (Figure 3c) allow for an identification of the four time intervals: the
number of edges increases in period 1, to form the random structure, characterized by a low average clustering
coefficient and a low average length of shortest paths. Adding communities in period 2 increases the average cluster-
ing coefficient, as expected. On the contrary, the regular structure in period 3 decreases the clustering coefficient
and increases the average length of shortest paths, though much less, in proportion, than the decrease in the number
of edges would suggest. As expected, adding communities to this regular structure, as done in period 4, signifi-
cantly increases the average clustering coefficient and slightly decreases the average length of shortest paths. These
network-based descriptors give good intuitions on the underlying structure of the temporal network, but turn out
to be insufficient to exactly characterize the structure. Furthermore, mixtures of different structures, as occur in
this model, are not explicitly revealed.
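The three descriptors of Figure 3 can be computed per snapshot as follows (a self-contained NumPy sketch, equivalent to what standard graph libraries provide):

```python
import numpy as np
from collections import deque

def descriptors(A):
    """Number of edges, average clustering coefficient and average shortest-path
    length (over reachable pairs) of one snapshot, given its 0/1 adjacency matrix."""
    N = A.shape[0]
    n_edges = int(A.sum()) // 2
    deg = A.sum(axis=1)
    tri = np.diag(A @ A @ A) / 2                       # triangles through each vertex
    local = np.divide(2 * tri, deg * (deg - 1),
                      out=np.zeros(N), where=deg > 1)  # local clustering coefficients
    total, pairs = 0, 0
    for s in range(N):                                 # BFS from every vertex
        dist = np.full(N, -1)
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.nonzero(A[u])[0]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += dist[dist > 0].sum()
        pairs += (dist > 0).sum()
    return n_edges, local.mean(), total / pairs if pairs else float('inf')
```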
Figure 5: Correlation between the temporal spectra at each time step and the spectra of networks with two specific
network structures, averaged over 20 repetitions: (a) network with k communities, for k ∈ {2, 3, 4, 5, 6}: the
temporal network is highly correlated with a graph organized in communities during periods 2 and 4, as expected;
(b) k-regular lattice, for k ∈ {2, 4, 6, 8}: the temporal network has the highest correlation with a 4-regular lattice
during period 3.
5 Extraction of Frequency Patterns using Nonnegative Matrix Factor-
ization
5.1 Nonnegative Matrix Factorization (NMF)
Nonnegative matrix factorization (NMF) [26] is a matrix decomposition technique, used to decompose a nonnegative
matrix V of dimension C × T, i.e., a matrix whose entries are greater than or equal to zero, into the product of two
nonnegative matrices W and H. NMF leads to a reduction of the dimensionality of the data, by extracting in the
columns of W patterns characterizing the data, and in the rows of H the activation coefficients of each pattern over
time. The number of extracted patterns is denoted by K. A common approach to achieve such a decomposition
consists of solving an optimization problem:

\min_{W \ge 0, H \ge 0} D(V \,|\, W H)    (7)
where D is a dissimilarity measure. Févotte et al. [11] proposed an algorithm to find a solution of the NMF where
D is the β-divergence, a parametric function with a single parameter β, which encompasses the Euclidean distance
(β = 2), the generalized Kullback-Leibler divergence (β = 1) and the Itakura-Saito divergence (β = 0) as special
cases.
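A minimal NumPy implementation of the multiplicative updates for the β-divergence can be sketched as follows (the plain heuristic updates; [11] refine them with an exponent on the update ratio to guarantee monotonicity for β outside [1, 2]):

```python
import numpy as np

def nmf_beta(V, K, beta=1, n_iter=500, seed=0):
    """Multiplicative-update NMF: V (C x T) is approximated by W (C x K) @ H (K x T).
    beta=2: Euclidean, beta=1: generalized Kullback-Leibler, beta=0: Itakura-Saito."""
    rng = np.random.default_rng(seed)
    C, T = V.shape
    W = rng.random((C, K)) + 1e-3
    H = rng.random((K, T)) + 1e-3
    eps = 1e-12
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + eps)
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H
```

The temporal regularization of Eq. (8) below is not included in this sketch; it only illustrates the fitting term.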
Regularization of the activation coefficients can be added in order to smooth them, under the assumption that
there are no abrupt variations in the structure from one time step to the next. The optimization problem is then
defined as the minimization of the fitting term, defined in Eq. 7, plus a temporal regularization term:
P(H) = \sum_{k=1}^{K} \sum_{t=2}^{T} D(h_{k(t-1)} \,|\, h_{kt})    (8)
leading to:

\min_{W \ge 0, H \ge 0} D(V \,|\, W H) + \gamma P(H)    (9)

where γ controls the regularization and is empirically fixed such that the activation coefficients are smooth.
In [9] and [8], smooth NMF has been introduced for β = 1 and β = 0.
Figure 6: Results of the nonnegative matrix factorization for the Toy Temporal Network, using K = 3 and γ = 5:
(a) frequency patterns, obtained after reshaping the columns of W into component-frequency maps; (b) activation
coefficients, corresponding to the rows of the matrix H, normalized by the maximal value of H.
component-frequency map can be built by reshaping the vector into a matrix. To highlight how these structures
are arranged in the temporal network, each component is transformed back into a temporal network: as described in
[10], using NMF with the Itakura-Saito divergence provides a means of reconstructing the collection of signals
corresponding to each component. For each component k, a temporal spectrum S^{(k)} ∈ R^{C×F×T} is obtained using
a Wiener filtering, such that its elements s^{(k,t)}_{cf} read as:

s_{cf}^{(k,t)} = \frac{w_{(cf)k} h_{kt}}{\sum_{k'=1}^{K} w_{(cf)k'} h_{k't}} \, s_{cf}^{(t)}

The temporal spectrum of the component k is then a fraction of the original temporal spectrum. From S^{(k)},
an inverse Fourier transform is performed, leading to a collection of signals for each component k, denoted
by X^{(k)} ∈ R^{N×C×T}. Finally, the adjacency tensor A^{(k)} describing the temporal network corresponding to
component k is obtained by using the inverse transformation T^{−1} described in Section 3.
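This masking step can be sketched as follows (our own naming; W is assumed to act on the spectrum flattened over the component and frequency axes):

```python
import numpy as np

def component_spectra(S, W, H):
    """Wiener-like masking in the spirit of [10]: split the temporal spectrum S
    (C x F x T) among the K NMF components, using W (C*F x K) and H (K x T)."""
    C, F, T = S.shape
    K = W.shape[1]
    V_hat = W @ H + 1e-12                      # model of the flattened spectrum
    masks = np.stack([np.outer(W[:, k], H[k]) / V_hat for k in range(K)])
    return masks.reshape(K, C, F, T) * S       # (K, C, F, T): one spectrum per component
```

By construction the masks sum to one, so the per-component spectra sum back to the original spectrum S.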
Figure 7: Adjacency tensor aggregated over time, for each component k: A^{(k)} = \sum_{t=1}^{T} h_{kt} A^{(k,t)}.
(a) Component 1, (b) component 2, (c) component 3.
four different structures used. The component 1 is active in periods 1, 2 and 4 with an almost constant level, the
component 2 is mainly active in period 3, as well as in periods 1 and 4, and finally, the component 3 is active in
period 1 and in period 2.
Looking at the corresponding frequency patterns in Figure 6a, and more precisely at their connections with the
network structures observed in Section 2.5, confirms the agreement with the expected results: component 1 looks
like a structure in communities, component 2 resembles a k-regular structure, and component 3 exhibits a random
structure. The structure in period 1 is thus a mixture of a random structure and a structure in communities (as
it happens, one single community); in period 2, only the structure in communities is present; in period 3, a regular
structure described by component 2; and finally, the fourth period is a mixture of a structure in communities and
a regular structure. A random structure is present in period 1 and, inside the communities, in period 2.
Figure 7 shows, for each component k, the aggregated adjacency matrix over time, obtained after the back
transformation of the spectra into a temporal network. The sum is weighted by the activation coefficients given by
the matrix H, in order to visually highlight the most significant patterns:
A^{(k)} = \sum_{t=1}^{T} h_{kt} A^{(k,t)}    (12)
This representation partly confirms the connections between spectra and structures as described above. We can
notice that component 1 displays the three communities present in period 2, as well as communities corresponding
to a regular structure in period 4. The actual communities in period 4 are caught in the component 2, which does
not correspond to a regular structure, even if the diagonal, characterizing the k-regular lattice, is clearly dominant.
Finally, the component 3 looks like a random matrix, as no structure is visible, at least through this representation.
The decomposition of temporal networks using NMF makes it possible to retrieve from the spectra the different
structures composing the temporal network, and to detect when these structures are present, either alone or jointly.
We now propose to apply this framework to the study of a real-world temporal network describing contacts between
individuals in a primary school.
Figure 8: Descriptors of the Primary School Temporal Network over time: (a) number of edges, (b) number of
isolated vertices, (c) average clustering coefficient, (d) average length of shortest paths. The shaded regions
correspond to class periods, while white regions correspond to breaks and lunch, according to the information given
in [32]. No significant information is provided by the usual network-based descriptors.
Figure 9: Marginals of the energies of the temporal spectra of the Primary School Temporal Network: (a) averaged
over frequencies, E_f(c) for c ∈ {0, 1, 2, 70}; (b) averaged over components, E_c(f) for f ∈ {1, 2, 3, 35}. The
energies of the low frequencies and of the first components are not equally distributed over time, indicating changes
in the global structure of the temporal network.
individuals for 10 minutes. We restricted the analysis to the first day: 226 children and 10 teachers participated in
the experiment, divided into five grades (from 1st grade to 5th grade), each grade itself divided into two classes.
Figure 8 shows the network-based descriptors used to characterize the temporal network. Figures 8a and 8b
show that the number of edges in the temporal network, as well as the number of isolated vertices, is not constant
over time, reflecting the real-world nature of the data: during the lunch break, some children leave the school to
have lunch at home. The other network-based descriptors (Figures 8c and 8d) do not provide significant insights
about the structure of the temporal network.
Figure 10: Correlation between the temporal spectra at each time step and the spectra of a network with communities,
for 3, 6, 9, 12 and 15 communities, averaged over 20 repetitions. The temporal network is correlated with structures
with a large number of communities during class periods, and with structures with a small number of communities
during breaks and lunch.
Figure 11: Results of the nonnegative matrix factorization for the Primary School Temporal Network, using K = 2
and γ = 5: (a) frequency patterns, obtained after reshaping the columns of W into component-frequency maps;
(b) activation coefficients, corresponding to the rows of the matrix H, normalized by the maximal value of H. The
activation coefficients are consistent with the schedule of the children in the primary school, as detailed in [32].
only two or three classes have breaks at the same time, while the other ones stay in their respective classrooms. As
for lunches, they are taken in two consecutive turns, preserving the structure in classes, with nevertheless a weaker
intensity.
Figure 12 shows different representations of the temporal networks reconstructed from the components. The
left figure shows the aggregated adjacency matrix over time, as described in Section 5.4. The vertices are ordered
according to the classes of the children, from the youngest to the oldest. The middle figure shows the aggregated
network, using the layout provided in [32], after thresholding of edges according to their weights. The color of dots
indicates the grade of the children, while black dots represent the teachers. Finally, the right figure shows the contact
matrix between classes, obtained by counting the number of edges inside and between the classes. A logarithmic
scale is used to enhance the visualization. Figure 12a shows the original temporal network, while Figures 12b and
12c show respectively the component 1 and the component 2. We can easily observe that the component 1 describes
the structures in classes, with higher density of edges inside classes than between classes. Conversely, the component
2 highlights a less structured network pattern, which looks like two communities, separating the youngest classes
from the oldest. Those observations are consistent with the description of lunches mentioned above.
These results highlight the interest of decomposing a temporal network into several sub-networks, which can be
studied independently of one another. Without prior knowledge, the different periods of activity in the primary
school are revealed, and can then guide the analysis of the system by restricting it to specific time
intervals.
7 Conclusion
We have proposed a novel method to track the structure of temporal networks over time using the duality between
graphs and signals, as well as classical signal processing techniques, such as spectral analysis and nonnegative matrix
factorization. At each time, the temporal network is represented by a graph which is transformed into a collection of
signals. NMF is used to extract patterns in the energies of the spectra of these signals; these patterns, whose
activation coefficients vary over time, each represent a specific structure of the underlying network. The effectiveness
of the method has been demonstrated on a toy example containing three types of structures, as well as on a
real-world network describing temporal contacts between children in a primary school.
These results provide insights into the characterization of temporal networks, but also call for further studies: In
particular, it would be interesting to take a deeper look at how the reconstructed temporal networks are embedded
in the original temporal network. Furthermore, it would be worth considering an iterative analysis, studying the
reconstructed temporal networks themselves using the same process. This could lead to a spatiotemporal multiscale
analysis, permitting an increasingly precise tracking of the structure of the temporal network.
Acknowledgment
This work is supported by the programs ARC 5 and ARC 6 of the région Rhône-Alpes and the project Vél’Innov
ANR-12-SOIN-0001-02. Cédric Févotte is gratefully acknowledged for discussions and comments.
References
[1] Ingwer Borg and Patrick J. F. Groenen. Modern Multidimensional Scaling. Springer Series in Statistics.
Springer, 2005.
[2] D. Braha and Yaneer Bar-Yam. Time-dependent complex networks: Dynamic centrality, dynamic motifs, and
cycles of social interactions. In Thilo Gross and Hiroki Sayama, editors, Adaptive Networks, Understanding
Complex Systems, pages 39–50. Springer Berlin Heidelberg, 2009.
[3] Andriana S. L. O. Campanharo, M. Irmak Sirer, R. Dean Malmgren, Fernando M. Ramos, and Luis A. N.
Amaral. Duality between time series and networks. PLoS ONE, 6(8), 2011.
[4] Arnaud Casteigts, Paola Flocchini, Walter Quattrociocchi, and Nicola Santoro. Time-varying graphs and
dynamic networks. International Journal of Parallel, Emergent and Distributed Systems, 27(5):387–408, 2012.
[5] Yudong Chen, Vikas Kawadia, and Rahul Urgaonkar. Detecting overlapping temporal community structure in
time-evolving networks. arXiv preprint arXiv:1303.7226, 2013.
Figure 12: Representations of (a) the original temporal network, (b) component 1, and (c) component 2. (Left)
Aggregated adjacency matrix over time, weighted by the coefficients of H. (Middle) Network representation using
the layout provided in [32], after thresholding of edges according to their weights. The color of dots indicates the
grade of the children, while black dots represent the teachers. (Right) Grayscale-coded contact matrix between
classes: each entry gives the number of contacts inside and between the classes. A logarithmic scale is used to
enhance the visualization. Component 1 represents the structure in classes, while component 2 describes the
structure during breaks and lunch.
[6] Andrzej Cichocki. Era of big data processing: A new approach via tensor networks and tensor decompositions.
arXiv preprint arXiv:1403.2048, 2014.
[7] Andrea E. F. Clementi, Angelo Monti, Francesco Pasquale, and Riccardo Silvestri. Information spreading in
stationary Markovian evolving graphs. In IEEE International Symposium on Parallel & Distributed Processing,
pages 1–12, 2009.
[8] Slim Essid and Cédric Févotte. Smooth nonnegative matrix factorization for unsupervised audiovisual document
structuring. IEEE Transactions on Multimedia, 15(2):415–425, February 2013.
[9] Cédric Févotte. Majorization-minimization algorithm for smooth Itakura-Saito nonnegative matrix factorization.
In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1980–1983.
IEEE, 2011.
[10] Cédric Févotte, Nancy Bertin, and Jean-Louis Durrieu. Nonnegative matrix factorization with the Itakura-Saito
divergence: With application to music analysis. Neural Computation, 21(3):793–830, 2009.
[11] Cédric Févotte and Jérôme Idier. Algorithms for nonnegative matrix factorization with the beta-divergence.
Neural Computation, 23(9):2421–2456, June 2011.
[18] Ronan Hamon, Pierre Borgnat, Patrick Flandrin, and Céline Robardet. Tracking of a dynamic graph using a
signal theory approach: application to the study of a bike sharing system. In ECCS’13, page 101, Barcelona,
Spain, 2013.
[19] Ronan Hamon, Pierre Borgnat, Patrick Flandrin, and Céline Robardet. Transformation de graphes dynamiques
en signaux non stationnaires [Transformation of dynamic graphs into nonstationary signals]. In Colloque
GRETSI 2013, page 251, Brest, France, 2013.
[20] Ronan Hamon, Pierre Borgnat, Patrick Flandrin, and Céline Robardet. Nonnegative matrix factorization
to find features in temporal networks. In IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP), pages 1065–1069, Florence, Italy, 2014.
[21] Ronan Hamon, Pierre Borgnat, Patrick Flandrin, and Céline Robardet. Discovering the structure of complex
networks by minimizing cyclic bandwidth sum. arXiv preprint arXiv:1410.6108, 2015.
[22] Ronan Hamon, Pierre Borgnat, Patrick Flandrin, and Céline Robardet. From graphs to signals and back:
Identification of graph structures using spectral analysis. arXiv preprint arXiv:1502.04697, 2015.
[23] Yuta Haraguchi, Yutaka Shimada, Tohru Ikeguchi, and Kazuyuki Aihara. Transformation from complex net-
works to time series using classical multidimensional scaling. In Artificial Neural Networks–ICANN 2009, pages
325–334. Springer, 2009.
[24] Petter Holme and Jari Saramäki. Temporal networks. Physics Reports, 519(3):97–125, 2012.
[25] Lauri Kovanen, Márton Karsai, Kimmo Kaski, János Kertész, and Jari Saramäki. Temporal motifs in time-
dependent networks. Journal of Statistical Mechanics: Theory and Experiment, 2011(11):P11005, November
2011.
[26] Daniel D. Lee and H. Sebastian Seung. Learning the parts of objects by non-negative matrix factorization.
Nature, 401(6755):788–791, October 1999.
[27] Peter J. Mucha, Thomas Richardson, Kevin Macon, Mason A. Porter, and Jukka-Pekka Onnela. Community
structure in time-dependent, multiscale, and multiplex networks. Science, 328(5980):876–878, May 2010.
[28] Mark Newman. Networks: An Introduction. Oxford University Press, Inc., New York, NY, USA, 2010.
[29] Aliaksei Sandryhaila and José M. F. Moura. Discrete signal processing on graphs: Frequency analysis. IEEE
Transactions on Signal Processing, 62(12):3042–3054, June 2014.
[30] Yutaka Shimada, Tohru Ikeguchi, and Takaomi Shigehara. From networks to time series. Physical Review
Letters, 109(15):158701, 2012.
[31] David I. Shuman, Sunil K. Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst. The emerging
field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular
domains. IEEE Signal Processing Magazine, 30(3):83–98, 2013.
[32] Juliette Stehlé, Nicolas Voirin, Alain Barrat, Ciro Cattuto, Lorenzo Isella, Jean-François Pinton, Marco Quag-
giotto, Wouter Van den Broeck, Corinne Régis, Bruno Lina, and Philippe Vanhems. High-resolution measure-
ments of face-to-face contact patterns in a primary school. PLoS ONE, 6(8):e23176, August 2011.
[33] Nicolas Tremblay and Pierre Borgnat. Graph wavelets for multiscale community mining. IEEE Transactions
on Signal Processing, 62(20):5227–5239, 2014.
[34] Tongfeng Weng, Yi Zhao, Michael Small, and Defeng (David) Huang. Time-series analysis of networks: Ex-
ploring the structure with random walks. Physical Review E, 90(2):022804, 2014.
[35] Kevin S. Xu and Alfred O. Hero III. Dynamic stochastic blockmodels: Statistical models for time-evolving
networks. In Ariel M. Greenberg, William G. Kennedy, and Nathan D. Bos, editors, Social Computing,
Behavioral-Cultural Modeling and Prediction, volume 7812 of Lecture Notes in Computer Science, pages 201–
210. Springer Berlin Heidelberg, 2013.
[36] Kevin S. Xu, Mark Kliger, and Alfred O. Hero. A regularized graph layout framework for dynamic network
visualization. Data Mining and Knowledge Discovery, 27(1):84–116, July 2013.
[37] Kevin S. Xu, Mark Kliger, and Alfred O. Hero III. Tracking communities in dynamic social networks. In Social
Computing, Behavioral-Cultural Modeling and Prediction, pages 219–226. Springer, 2011.
[38] B.-K. Yi, Nikolaos D. Sidiropoulos, Theodore Johnson, H. V. Jagadish, Christos Faloutsos, and Alexandros
Biliris. Online data mining for co-evolving time sequences. In Proceedings of the 16th International Conference
on Data Engineering (ICDE), pages 13–22. IEEE, 2000.
[39] Zhongyuan Zhang, Chris Ding, Tao Li, and Xiangsun Zhang. Binary matrix factorization with applications.
In Seventh IEEE International Conference on Data Mining (ICDM 2007), pages 391–400. IEEE, 2007.