Overview

This tutorial introduces a hybrid method that combines intraindividual variability methods and network analysis methods to model individuals as high-dimensional dynamic systems. The hybrid method is designed and tested to quantify the extent of interaction in a high-dimensional multivariate system, and it is applicable to experience sampling data.

This tutorial corresponds to the following paper (under review): Yang, X., Ram, N., Gest, S. D., Lydon, D. M., Conroy, D. E., Pincus, A. L., & Molenaar, P. C. M. Individuals as Dynamic Networks That Change: Merging Intraindividual Variability, Network Analysis, and Experience Sampling. Journals of Gerontology, Series B: Psychological Sciences and Social Sciences.

Outline

  1. Prepare simulation of time-series data
  2. Model time-series data with uSEM
  3. Post-processing of LISREL output
  4. Plot network graph
  5. Calculate network metrics
  6. Demonstrate network metrics

Introduction

Lifespan developmental theories view persons as dynamic systems, with feelings, thoughts, and actions that are interconnected and change over time. To understand this aspect of an individual’s psychological functioning, we need repeatedly measured multivariate psychological data, as well as methods to quantify the interrelations among variables. Once the interrelations are estimated, we can examine their long-term change by applying network analysis methods.

In the aforementioned paper, we put forward an approach that merges intraindividual variability methods, network analysis methods, and measurement-burst designs in order to describe the interplay among many aspects of functioning and the change in this interplay over time.

uSEM (unified structural equation modeling; Beltz et al., 2013; Gates et al., 2010; Kim et al., 2007) incorporates time-lagged and contemporaneous relations in one model. uSEM utilizes intraindividual variability and is capable of modeling high-dimensional time-series data.

Network analysis: After the interactive relations are estimated by uSEM, network metrics (e.g., density, centralization, clustering) can be used to quantify long-term change in network structure, capturing system re-organization and adaptation to life change.

Step 1: Prepare simulation of time-series data

We simulate time-series data based on the covariance matrix among variables, using the mvrnorm function (in the MASS package) to draw the multivariate time series. The covariance matrix is as follows:

\(\Sigma = \Lambda (I-B)^{-1}\Psi (I-B)^{-T} \Lambda^{T} + \Theta\)

where the \(\Lambda\) matrix is an identity matrix and the \(\Theta\) matrix is a zero matrix. Therefore, we only need to specify the \(B\) matrix and the \(\Psi\) matrix.

(For a detailed explanation of how to code Greek letters in R Markdown, please see: http://www.calvin.edu/~rpruim/courses/m343/F12/RStudio/LatexExamples.html)

The number of variables, the number of measurements (time points), and the variable names are set in the code below.

The matrix expression (also referred to as the beta matrix) of the network shown in Figure 1 is as follows:

\[\mathbf{\ B} = \left[\begin{array} {rrr} 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0.15& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& -0.32& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& -0.41& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& -0.9& 0& 0& 0& 0.22& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0.54& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0.32& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& -0.29& 0& 0& 0& 0& 0& 0& 0.3& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0.32& 0.34& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0.54& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0.26& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ \end{array}\right] \]

Note that the beta matrix is a 26 by 26 matrix. The upper half of the beta matrix is always zero, since the lag-1 variables are not predicted by any variables in this model; the lower half is a combination of the Phi matrix (lag-1 relations) on the left and the A matrix (contemporaneous relations) on the right, as sketched below.
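As a minimal sketch of this block structure (not part of the original tutorial code; phi.matrix and a.matrix are hypothetical placeholders for the p by p lagged and contemporaneous matrices), the full 26 by 26 beta matrix could be assembled as:

# assemble a 2p x 2p beta matrix from a p x p lagged block (Phi) and a p x p contemporaneous block (A)
# phi.matrix and a.matrix are hypothetical placeholders, not objects defined in this tutorial
p <- 13
phi.matrix <- matrix(0, p, p)    # lag-1 (Phi) relations, placeholder values
a.matrix   <- matrix(0, p, p)    # contemporaneous (A) relations, placeholder values
beta.blocks <- rbind(
  matrix(0, p, 2 * p),           # upper half: the lag-1 variables are not predicted
  cbind(phi.matrix, a.matrix)    # lower half: Phi on the left, A on the right
)
dim(beta.blocks)                 # 26 x 26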

The covariance matrix of the residual term is as follows:

\[\mathbf{\Psi} = \left[\begin{array} {rrr} 46.35& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 27.87& 184.71& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 14.5& 34.68& 125& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ -17.01& -151.32& -117.89& 471.1& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 2.3& -38.51& -54.68& 218.57& 268.14& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ -10.88& -20.58& -58.76& 114.68& 67.43& 632.73& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 8.63& -69.95& -21.96& 133.28& 28.49& 82.41& 409.79& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 1.97& -70.44& -25.13& 108.28& 68.07& 20.76& 25.83& 159.65& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 3.24& -54& -63.7& 224.74& 161.58& 90.32& 63.06& 101.28& 285.92& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ -7.06& -31.3& 2.95& 97.26& 37.11& 77.43& 83.02& 52.42& 54.77& 251.52& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ -7.57& -4.64& -17.31& 43.22& 45.62& 76.52& 63.76& 24.68& 18.69& 0.91& 515.55& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ -11.65& -38.88& -36.33& 139.28& 54.48& 159.46& 117.4& 55.85& 97.35& 166.26& 30.91& 300.95& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ -14.39& 4.85& -16.3& 32.63& -9.97& 19.59& -10.67& 24.65& -24.38& 29.3& -11.27& 20.84& 344.19& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 42.16& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 136& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 125.07& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 310.51& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 167.26& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 551.89& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 365.46& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 108.68& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 157.79& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 157.87& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 517.32& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 255.01& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 350.84\\ \end{array}\right] \]

library(MASS)
library(matrixcalc)
require(ggplot2)
require(reshape)

n.obs <- 150
p <- 13

beta <- c(
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0.15,   0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  -0.32,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  -0.41,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  -0.90,  0,  0,  0,  0.22,   0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0.54,   0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0.32,   0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  -0.29,  0,  0,  0,  0,  0,  0,  0.30,   0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0.32,   0.34,   0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0.54,   0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0.26,   0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0
)

beta <- matrix(beta, nrow = p * 2, ncol = p * 2)
  

psi <- c(
46.35,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
27.87,  184.71, 0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
14.5,   34.68,  125,    0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
-17.01, -151.32,    -117.89,    471.1,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
2.3,    -38.51, -54.68, 218.57, 268.14, 0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
-10.88, -20.58, -58.76, 114.68, 67.43,  632.73, 0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
8.63,   -69.95, -21.96, 133.28, 28.49,  82.41,  409.79, 0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
1.97,   -70.44, -25.13, 108.28, 68.07,  20.76,  25.83,  159.65, 0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
3.24,   -54,    -63.7,  224.74, 161.58, 90.32,  63.06,  101.28, 285.92, 0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
-7.06,  -31.3,  2.95,   97.26,  37.11,  77.43,  83.02,  52.42,  54.77,  251.52, 0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
-7.57,  -4.64,  -17.31, 43.22,  45.62,  76.52,  63.76,  24.68,  18.69,  0.91,   515.55, 0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
-11.65, -38.88, -36.33, 139.28, 54.48,  159.46, 117.4,  55.85,  97.35,  166.26, 30.91,  300.95, 0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
-14.39, 4.85,   -16.3,  32.63,  -9.97,  19.59,  -10.67, 24.65,  -24.38, 29.3,   -11.27, 20.84,  344.19, 0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  42.16,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  136,    0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  125.07, 0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  310.51, 0,  0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  167.26, 0,  0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  551.89, 0,  0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  365.46, 0,  0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  108.68, 0,  0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  157.79, 0,  0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  157.87, 0,  0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  517.32, 0,  0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  255.01, 0,
0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  350.84

)

psi <- matrix(psi, nrow = p * 2, ncol = p * 2, byrow = FALSE, dimnames = NULL)

i <- diag(1,p * 2,p * 2)

inversed.matrix <- matrix.inverse(as.matrix(i - beta)) 

sigma.matrix <- inversed.matrix %*% psi %*% t(matrix.inverse(as.matrix(i - beta)))
var.mean <- c(4.69,8.33,8.46,46.82,31.21,54.55,51.51,74.43,59.02,78.57,60.47,77.88,51.3, 
              4.69,8.33,8.46,46.82,31.21,54.55,51.51,74.43,59.02,78.57,60.47,77.88,51.3)
time.series <- mvrnorm(n = n.obs, mu = var.mean, Sigma = sigma.matrix)
time.series <- data.frame(time.series)

names(time.series) <- c("shame.lag","anger.lag", "sadness.lag","happiness.lag","pride.lag","benefit.self.lag",
                        "benefit.other.lag", "self.esteem.lag", "control.lag", "other.communion.lag",
                        "self.communion.lag", "other.agency.lag", "self.agency", 
                        "shame", "anger", "sadness", "happiness", "pride", "benefit.self",
                        "benefit.other", "self.esteem", "control", "other.communion", 
                        "self.communion", "other.agency", "self.agency")

# force values onto the 0-100 scale: cap at 100 and floor at 0
for (col in 1:(p * 2))
{
  time.series[,col ] <- ifelse(time.series[,col ] > 100, 
                                     100, time.series[,col ])
  time.series[,col ] <- ifelse(time.series[,col] < 0, 
                                     0, time.series[,col ])
}

We then use the “melt” function to change the time series to “long” format and plot the simulated data by variable, using the “facet_wrap” function. Here, we remove the variable labels so that the variation in each variable is shown more clearly. The order of the variables is: “shame”, “anger”, “sadness”, “happiness”, “pride”, “benefit.self”, “benefit.other”, “self.esteem”, “control”, “other.communion”, “self.communion”, “other.agency”, “self.agency”.

# we only plot the contemporaneous time series here, because the lag-1 time series looks very similar to the contemporaneous one
time.series <- time.series[, (p+1):(2*p)]
time.series$interaction <- seq(1,length(time.series[,1]),1)

time.series.long <- melt(time.series, id="interaction")  # convert to long format

ggplot(data=time.series.long,
       aes(x=interaction, y=value, colour=variable)) +
  geom_line() + 
  facet_wrap( ~ variable, nrow = p) +
  scale_y_continuous(breaks = seq(0, 100, by = 50)) + 
  theme(
    strip.background = element_blank(),
    strip.text.x = element_blank(),
    strip.text.y = element_blank(),
    axis.text.y = element_text(), 
    axis.title.y = element_blank(),
    legend.position="none"
    )

Step 2: Model time-series data with uSEM

We model the time-series data with the uSEM model, which consists of a measurement equation and a state equation:

\(y_{t} = \Lambda \ \eta_{t} + \epsilon_{t}\)

\(\eta_{t} = \ A \ \eta_{t} + \Phi \ \eta_{t-1} + \zeta_{t}\)
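With an identity \(\Lambda\) (as in the simulation above), the state equation can be rewritten in reduced form as \(\eta_{t} = (I - A)^{-1}(\Phi \eta_{t-1} + \zeta_{t})\). Below is a minimal sketch of this reduced form in R (for illustration only; it is not the LISREL estimation, and a.matrix, phi.matrix, eta.prev, and zeta are hypothetical placeholders):

# one step of the uSEM state equation in reduced form:
# eta_t = (I - A)^{-1} (Phi %*% eta_{t-1} + zeta_t)
# all objects below are hypothetical placeholders, not tutorial objects
p <- 13
a.matrix   <- matrix(0, p, p)     # contemporaneous relations (A), placeholder values
phi.matrix <- matrix(0, p, p)     # lag-1 relations (Phi), placeholder values
eta.prev   <- rnorm(p)            # eta_{t-1}
zeta       <- rnorm(p)            # innovation zeta_t
eta.now    <- solve(diag(p) - a.matrix, phi.matrix %*% eta.prev + zeta)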

The code was run in the LISREL 9.2 software and is shown as follows:

The beta matrix is written to the “beta.txt” file by LISREL.

Step 3: Post-processing of LISREL output

LISREL outputs multiple beta matrices in “beta.txt” if you use the automatic search function (“AM”; see the LISREL code in Step 2). These beta matrices are the estimation results from each iteration. For our analysis purposes, we are only interested in the final (i.e., the last) beta matrix, where the iteration stopped.

LISREL also uses a specific format for its output files (the number of columns is fixed at 6). So we have to do some counting to read the right chunk of the output file and recover the final beta matrix. For example, a 26 by 26 beta matrix has 26^2 = 676 elements; with 6 values per row, its last row contains 676 %% 6 = 4 values, and the matrix spans 676 %/% 6 + 1 = 113 rows of the file.

setwd("C:/Users/Xiao Yang/Downloads/4 JoG manuscripts and code/QuantDev Website/Tutorials/6. IIV + Network")

# read the LISREL beta output (saved here as a CSV file)
beta.file <- read.csv("ID-BETA.csv", header = FALSE)

# dimension of beta matrix is 26 by 26
no.latent.variables <- 26
# number of cells in the beta matrix
no.elements <- no.latent.variables ^ 2 

# get the number of columns in the LISREL output file of beta matrix 
col.no <- ncol(beta.file) 

# number of values in the last row of one beta matrix in the LISREL output file
no.columns.of.last.row <- no.elements %% col.no 
# number of rows that one beta matrix occupies in the LISREL output file
no.row.of.beta.matrix <- no.elements %/% col.no + ifelse(no.columns.of.last.row>0,1,0) 

# initialize an empty beta matrix (as a vector) of length "no.elements"
beta.matrix <- rep(0, no.elements) 
# calculate at which line of the file the last beta matrix starts
row.index <- nrow(beta.file)- no.row.of.beta.matrix + 1 
# 'total' keeps track of how many cells have been read
total <- 0 

# start reading from the line where the last beta matrix starts, and convert the scientific notation that is read in as text (e.g., 0.00000D+00) to a numeric value 
for (row in row.index: (row.index + no.row.of.beta.matrix -1))
{
  for (col in 1: col.no)
  {
    total <- total + 1
    if (total <= no.elements)
    {
      element <- as.character(beta.file[row,col]) 
      # replace the character "D" to "e", so that in the next step, 
      # the text 0.00000D+00 could be recognized as 0.00000e+00 
      # and converted to a numeric value
      element <- gsub("D", "e", element) 
      element <- as.numeric(element)
      beta.matrix[total] <- element
    }
  }
}

beta.matrix.2 <- matrix(beta.matrix, nrow = 26, ncol = 26, byrow = TRUE)
beta.matrix.2 <- data.frame(beta.matrix.2)

# assign column names to the data.frame of beta.matrix.2
colnames(beta.matrix.2) <- c(
  "other.communion.lag",
  "other.agency.lag",
  "self.communion.lag",
  "self.agency.lag",
  "benefit.self.lag",
  "benefit.other.lag",
  "control.lag",
  "esteem.lag",
  "shame.lag",
  "anger.lag",
  "sadness.lag",
  "happiness.lag",
  "pride.lag",
  "other.communion",
  "other.agency",
  "self.communion",
  "self.agency",
  "benefit.self",
  "benefit.other",
  "control",
  "esteem",
  "shame",
  "anger",
  "sadness",
  "happiness",
  "pride"
  
  )
# assign row names to the data.frame of beta.matrix.2
row.names(beta.matrix.2) <- c(
  "other.communion.lag",
  "other.agency.lag",
  "self.communion.lag",
  "self.agency.lag",
  "benefit.self.lag",
  "benefit.other.lag",
  "control.lag",
  "esteem.lag",
  "shame.lag",
  "anger.lag",
  "sadness.lag",
  "happiness.lag",
  "pride.lag",
  "other.communion",
  "other.agency",
  "self.communion",
  "self.agency",
  "benefit.self",
  "benefit.other",
  "control",
  "esteem",
  "shame",
  "anger",
  "sadness",
  "happiness",
  "pride"
  )
# save beta.matrix.2 as a csv file
write.csv(beta.matrix.2, paste("ID-edges.csv", sep = ""))

Step 4: Plot network graph

Here we are only interested in the contemporaneous relations (i.e., the A matrix). We also binarize the A matrix: non-zero cells are set to 1 and the remaining cells stay 0.

require(qgraph)
# extract the contemporaneous (A) block of the beta matrix
a <- beta[(p+1):(2*p), (p+1):(2*p)]

presentation <- "circle" 

nodes <- c("shame", "anger", "sadness", "happiness", "pride", "benefit.self",
                  "benefit.other", "self.esteem", "control", "other.communion", 
                  "self.communion", "other.agency", "self.agency")
links <- a

# make sure each column is numeric (double)
for (col in 1:length(links[1,]))
{
    links[,col] <- as.double(links[,col])
}

links <- as.matrix(links)

# binarize the matrix: set non-zero weights to 1, leave zeros as 0
for (row in 1:dim(links)[1])
{
  for (col in 1:dim(links)[2])
  {
    if (links[row,col] != 0)
    {
      links[row,col] = 1
    }
  }
}

# W2E converts a weight matrix to an edge list: each row gives the row index, column index, and value of a non-zero cell (used by qgraph as an edge list)
W2E <- function(x) cbind(which(x != 0, arr.ind = TRUE), x[x != 0])

eContemporaneous <- W2E(links)

# Note: to wrap a node name onto two lines in the plot, insert a line break (press Enter) between the two words, e.g., between "benefit" and "self".

plot.names <- c("shame",
                "anger",
                "sadness",
                "happiness",
                "pride",
                "benefit
self",
                "benefit
other",
                "perceived
control",
                "self
esteem", 
                "communion
other",
                "agency
other",
                "communion
self",
                "agency
self")


qgraph(eContemporaneous,
                layout              = presentation,
                esize               = 3,
                asize               = 3,
                border.color        = "dark grey",
                borders             = TRUE,
                curve               = TRUE,
                fade                = FALSE,
                posCol              = "red",
                labels              = plot.names,
                label.cex           = 1.5,
                label.norm          = "O",
                label.scale         = FALSE
                )

Step 5: Calculate network metrics

Three network metrics (density, outdegree centralization, and clustering) are calculated. An example calculation for the network in Figure 1 is shown below. The calculations use functions from the igraph package. The adjacency matrix is derived from the beta matrix by replacing non-zero cells with a value of 1 (done in Step 4).

require("igraph")
net <- graph.adjacency(links, mode="directed")

Density measures ‘functional interdependence’ across the nodes and is computed as

\(Density = \frac{L}{N * (N-1)}\)

where \(L\) is the number of existing edges, and \(N\) is the number of nodes in the network.

graph.density(net, loops=FALSE)  
## [1] 0.07692308
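The same value can be computed by hand from the edge and node counts, as a quick check of the formula (not part of the original tutorial code):

# check density by hand: L existing edges over N * (N - 1) possible directed edges
L <- ecount(net)       # number of edges in the network (12 here)
N <- vcount(net)       # number of nodes (13 here)
L / (N * (N - 1))      # 12 / 156 = 0.07692308, matching graph.density()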

Outdegree centralization measures how unevenly outdegree centrality is distributed across nodes (note: whenever we refer to centrality here, we mean outdegree centrality).

\(Centralization = \frac{\sum_{i=1}^N (C_{Dmax} - C_{n_{i}})}{\max \left(\sum_{i=1}^N (C_{Dmax} - C_{n_{i}})\right)}\)

where \(C_{Dmax}\) is the centrality of the most central node, \(C_{n_{i}}\) is the centrality of a given node \(i\), and \(\max \left(\sum_{i=1}^N (C_{Dmax} - C_{n_{i}})\right)\) is the value this sum takes in the maximally centralized network with the same number of nodes.

centr_degree(net, mode = "out", loops = FALSE, normalized = TRUE)$centralization
## [1] 0.1875
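The same value can be reproduced from the outdegree sequence: sum the gaps between each node’s outdegree and the maximum outdegree, then divide by the value this sum takes in a maximally centralized network of the same size (a “star” in which one node sends edges to all others), which for outdegree is \((N-1)^2\). A quick hand check (not part of the original tutorial code):

# check outdegree centralization by hand
out.deg <- degree(net, mode = "out", loops = FALSE)
sum(max(out.deg) - out.deg) / ((vcount(net) - 1)^2)    # 27 / 144 = 0.1875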

Clustering measures the degree to which nodes in the network form coordinated subsystems by calculating the proportion of triangles observed in the network graph (scaled to network size through division by the total number of possible triangles). Specifically,

\(Clustering = \frac{2 \sum_{i=1}^N e_{i}}{\sum_{i=1}^N k_{i} (k_{i} - 1)}\)

where \(e_{i}\) is the number of triangles formed on a given node \(i\), and \(k_{i}\) is the number of neighbors of node \(i\) (note: a neighbor is a node that shares an edge with the given node).

transitivity(net)
## [1] 0.125
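transitivity() treats the graph as undirected and returns the global clustering coefficient: three times the number of triangles divided by the number of connected triples (pairs of edges sharing a node). As a quick hand check (not part of the original tutorial code; count_triangles() is available in recent versions of igraph):

# check global clustering by hand: 3 * triangles / connected triples
net.und   <- as.undirected(net, mode = "collapse")
triangles <- sum(count_triangles(net.und)) / 3       # each triangle is counted at its 3 vertices
triples   <- sum(choose(degree(net.und), 2))         # connected triples centered on each node
3 * triangles / triples                              # 3 * 1 / 24 = 0.125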

Step 6: Demonstrate network metrics

Here is a demonstration of networks with higher and lower values of each metric (density, centralization, clustering); a sketch for generating such toy networks follows.
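A sketch of how such toy networks could be generated with igraph for this demonstration (illustrative examples only; these are not the empirical networks from the paper, and the generator functions assume igraph version 1.0 or later):

# toy directed networks illustrating higher vs. lower values of each metric
set.seed(1234)
sparse.net <- sample_gnp(13, p = 0.05, directed = TRUE)    # sparser network: lower density
dense.net  <- sample_gnp(13, p = 0.40, directed = TRUE)    # denser network: higher density
graph.density(sparse.net); graph.density(dense.net)

ring.net <- make_ring(13, directed = TRUE)                 # every node has outdegree 1: centralization 0
star.net <- make_star(13, mode = "out")                    # one hub sends all edges: centralization 1
centr_degree(ring.net, mode = "out", loops = FALSE)$centralization
centr_degree(star.net, mode = "out", loops = FALSE)$centralization

tree.net <- make_tree(13, children = 2)                    # no triangles: clustering 0
full.net <- make_full_graph(13)                            # every triple is closed: clustering 1
transitivity(as.undirected(tree.net)); transitivity(full.net)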

References:

Beltz, A. M., Beekman, C., Molenaar, P. C. M., & Buss, K. A. (2013). Mapping temporal dynamics in social interactions with unified structural equation modeling: A description and demonstration revealing time-dependent sex differences in play behavior. Applied Developmental Science, 17(3), 152-168. doi: 10.1080/10888691.2013.805953.

Csardi G., & Nepusz T. (2006). The igraph software package for complex network research, InterJournal, Complex Systems 1695. R package version 1.0.1. http://igraph.org/r/.

Epskamp, S., Cramer, A. O. J., Waldorp, L.J., Schmittmann, V.D., & Borsboom, D. (2012). qgraph: Network Visualizations of Relationships in Psychometric Data. Journal of Statistical Software, 48(4), 1-18. URL http://www.jstatsoft.org/v48/i04/. R package version 1.3.3. http://sachaepskamp.com/qgraph. doi: 10.18637/jss.v048.i04.

Gates, K. M., Molenaar, P. C. M., Hillary, F. G., Ram, N., & Rovine, M. J. (2010). Automatic search for fMRI connectivity mapping: An alternative to Granger causality testing using formal equivalences among SEM path modeling, VAR, and unified SEM. NeuroImage, 50(3), 1118-1125. doi: 10.1016/j.neuroimage.2009.12.117.

Jöreskog, K.G. (1997). LISREL 8: User’s Reference Guide. Chicago: Scientific Software International, Inc.

Jöreskog, K.G., & Sörbom, D. (1992). LISREL. Scientific Software International, Inc.

Kim, J., Zhu, W., Chang, L., Bentler, P. M., & Ernst, T. (2007). Unified structural equation modeling approach for the analysis of multisubject, multivariate functional MRI data. Human Brain Mapping, 28(2), 85-93. doi: 10.1002/hbm.20259.