Parallel Analysis

About this Application

Patil et al. (2008) presented a web-based parallel analysis engine (Patil et al. 2007) built with SAS, which was published online.

Since that application is facing some technical difficulties, this new application should be helpful in the interim while they are resolved. This application has been developed using the nFactors (Raiche 2010) and shiny (Chang et al. 2016) packages in R (R Core Team 2016). The application is hosted on a Shiny server provided by RStudio to Gonzaga's School of Business.

Select References

Horn, J. L. (1965), “A Rationale and Test for the Number of Factors in Factor Analysis,” Psychometrika, 30, 179-185.

Patil, Vivek H., Surendra N. Singh, Sanjay Mishra, and D. Todd Donavan (2007), “Parallel Analysis Engine to Aid Determining Number of Factors to Retain” [Computer software]. Utility developed as part of Patil, Vivek H., Surendra N. Singh, Sanjay Mishra, and D. Todd Donavan (2008), “Efficient Theory Development and Factor Retention Criteria: A Case for Abandoning the ‘Eigenvalue Greater Than One’ Criterion,” Journal of Business Research, 61 (2), 162-170.

Raiche, G. (2010). nFactors: An R package for parallel analysis and non-graphical solutions to the Cattell scree test. R package version 2.3.3.

R Core Team (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria.

Chang, W., Cheng, J., Allaire, J. J., Xie, Y., and McPherson, J. (2016). shiny: Web Application Framework for R. R package version 0.14.2.

Using this Application

Based on parameters provided by the researcher, this engine calculates eigenvalues from randomly generated correlation matrices. These can then be compared with the eigenvalues extracted from the researcher's dataset. The number of factors to retain is the number of eigenvalues (from the researcher's dataset) that are larger than the corresponding random eigenvalues (Horn 1965).
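The comparison above can be sketched in a few lines of base R. This is only an illustrative sketch of Horn's procedure, assuming normally distributed null data; the function name and arguments are hypothetical and are not part of this application's code.

```r
# Illustrative sketch of Horn's parallel analysis (not this app's code).
# Generate n_rep random datasets, take the eigenvalues of each correlation
# matrix, and summarize each eigenvalue rank at the chosen percentile.
parallel_eigen <- function(n_subjects, n_vars, n_rep = 100, pct = 0.95) {
  eig <- replicate(n_rep, {
    x <- matrix(rnorm(n_subjects * n_vars), nrow = n_subjects)
    eigen(cor(x), symmetric = TRUE, only.values = TRUE)$values
  })
  # eig is an n_vars x n_rep matrix: one column of sorted eigenvalues per
  # random correlation matrix; take the chosen percentile within each rank
  apply(eig, 1, quantile, probs = pct)
}

set.seed(42)
rand_eig <- parallel_eigen(n_subjects = 300, n_vars = 10)
# Given obs_eig, the eigenvalues extracted from the researcher's dataset,
# the number of factors to retain would be:
# n_factors <- sum(obs_eig > rand_eig)
```

In practice the nFactors package performs this simulation (and more) for the application; the sketch only shows the logic of the random-eigenvalue comparison.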

The default (and recommended) values for the number of random correlation matrices and the percentile of eigenvalues are 100 and 95, respectively (see Cota et al. 1993; Glorfeld 1995; Turner 1998; Velicer et al. 2000). Based on the nature of their particular dataset, researchers can override these defaults. Higher (lower) numbers of generated correlation matrices increase (decrease) computation time but provide more (fewer) data points in the distribution of each eigenvalue. The percentile determines which eigenvalue is drawn from this distribution for comparison purposes. Lower percentile values tend to lead to over-extraction (retaining more factors than necessary).
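A quick base-R simulation (illustrative only, using simulated data rather than this application's code) shows how the percentile setting moves the retention threshold:

```r
# Distribution of the largest random eigenvalue across 100 replications,
# for hypothetical parameters: 300 subjects, 10 variables.
set.seed(1)
n <- 300; p <- 10; n_rep <- 100
largest <- replicate(n_rep, {
  x <- matrix(rnorm(n * p), nrow = n)
  eigen(cor(x), symmetric = TRUE, only.values = TRUE)$values[1]
})
quantile(largest, c(0.50, 0.95))
# The 95th-percentile cutoff sits above the median cutoff, so fewer
# observed eigenvalues clear it, guarding against spurious factors.
```

This is why the recommended 95th percentile is the stricter, safer choice: a lower percentile lowers the bar that observed eigenvalues must clear, which is how over-extraction arises.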