What is Factor Analysis in Behavioral Science?

What is Factor Analysis?

Factor analysis is a statistical method used to describe variability among observed, correlated variables in terms of a smaller number of unobserved variables called factors. The technique identifies underlying relationships between variables, either to reduce the number of variables in a dataset or to detect structure in their interrelations. The factors themselves are "latent variables": they are not measured directly but are inferred from the observed variables.

Why is it important?

The significance of factor analysis lies in its ability to simplify data by reducing the number of observed variables to a smaller number of factors. This simplification can help in understanding concepts or identifying patterns that are not immediately obvious, aiding in data interpretation and decision-making processes. In neuroscience and behavioral science, factor analysis is particularly important for:

  • Understanding Complex Constructs: It helps in conceptualizing psychological constructs like intelligence or personality which cannot be directly measured.
  • Data Reduction: Reduces a large set of variables to a smaller, more manageable number of factors without significant loss of information.
  • Hypothesis Testing: Assists in testing theories and hypotheses about the interrelations among psychological phenomena.
  • Instrument Development: Plays a crucial role in the development and validation of assessment tools and questionnaires.

How does it work?

Factor analysis usually involves the following steps:

  • Identifying Correlated Variables: The researcher selects a set of observed variables that are thought to be influenced by underlying factors.
  • Extracting Factors: Mathematical algorithms identify the underlying factors that can account for the patterns of correlation among the observed variables.
  • Determining the Number of Factors: The researcher decides how many factors to retain, often using criteria like eigenvalues, scree plots, or explained variance.
  • Rotating Factors: Rotation techniques are applied to make the output more interpretable by simplifying the factors to enhance their distinctiveness.
  • Interpreting Factors: Variables that have high loadings on the same factor suggest that the factor represents a dimension along which the variables vary in tandem.
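The steps above can be sketched with scikit-learn's `FactorAnalysis` on synthetic data. This is a minimal illustration, not a full analysis workflow; the two-factor structure and all numbers are invented for the example:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 300 respondents on 6 observed variables driven by 2 latent factors.
n, n_factors = 300, 2
latent = rng.normal(size=(n, n_factors))
true_loadings = np.array([
    [0.9, 0.0], [0.8, 0.1], [0.7, 0.0],   # variables 1-3 load on factor 1
    [0.0, 0.9], [0.1, 0.8], [0.0, 0.7],   # variables 4-6 load on factor 2
])
X = latent @ true_loadings.T + 0.3 * rng.normal(size=(n, 6))

# Extract two factors from the correlated observed variables.
fa = FactorAnalysis(n_components=n_factors, random_state=0)
scores = fa.fit_transform(X)       # factor scores, one row per respondent

print(fa.components_.shape)        # estimated loadings: (2 factors, 6 variables)
print(scores.shape)                # (300 respondents, 2 factors)
```

In practice the number of factors would be chosen by the criteria described above (eigenvalues, scree plots, explained variance) rather than assumed in advance.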

What are its properties?

  • Factor Loadings: Numerical values indicating the extent to which an observed variable is associated with a particular factor.
  • Eigenvalues: The amount of total variance, across all observed variables, accounted for by a given factor.
  • Communality: Proportion of each variable’s variance that can be explained by the factors.
  • Uniqueness: Variance of an observed variable that is unique to it, and not shared with other variables.
  • Factor Scores: Scores calculated for each individual on the extracted factors for further analysis.
  • Orthogonal and Oblique Rotation: Orthogonal rotations assume factors are uncorrelated, while oblique rotations allow for correlations between factors.
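Under the common convention that variables are standardized, several of these quantities can be computed directly from a fitted model. A sketch using scikit-learn on synthetic data (the loading matrix is invented for illustration):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 2))
W = np.array([[0.9, 0.0], [0.8, 0.0], [0.0, 0.9], [0.0, 0.8]])
X = StandardScaler().fit_transform(latent @ W.T + 0.4 * rng.normal(size=(500, 4)))

fa = FactorAnalysis(n_components=2).fit(X)

loadings = fa.components_.T                  # (variables x factors) factor loadings
communality = (loadings ** 2).sum(axis=1)    # variance explained by the factors
uniqueness = fa.noise_variance_              # variance unique to each variable

# Eigenvalues of the correlation matrix, as used in scree plots
# and the Kaiser (eigenvalue > 1) retention criterion.
eigenvalues = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]

print(np.round(communality, 2), np.round(uniqueness, 2))
print(np.round(eigenvalues, 2))
```

For standardized variables, each variable's communality and uniqueness together account for (approximately) its total variance of 1.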

How is it measured?

Conducting a factor analysis involves two main methodological choices:

  • Extraction Methods: Several methods like Principal Component Analysis, Principal Axis Factoring, and Maximum Likelihood are used to extract factors.
  • Rotation Methods: Methods such as Varimax, Quartimax, and Promax are used to rotate the factors for clearer interpretation.
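scikit-learn's `FactorAnalysis` supports two of these rotations directly (Varimax and Quartimax, available in scikit-learn 0.24+); Promax, an oblique rotation, would require a dedicated package such as `factor_analyzer`. A minimal before/after comparison on synthetic data:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
latent = rng.normal(size=(400, 2))
W = np.array([[0.9, 0.0], [0.8, 0.1], [0.1, 0.9], [0.0, 0.8]])
X = latent @ W.T + 0.3 * rng.normal(size=(400, 4))

unrotated = FactorAnalysis(n_components=2, random_state=0).fit(X)
rotated = FactorAnalysis(n_components=2, rotation="varimax", random_state=0).fit(X)

# Varimax drives each variable toward a high loading on one factor and
# near-zero loadings on the others, which makes the factors easier to name.
print(np.round(unrotated.components_.T, 2))
print(np.round(rotated.components_.T, 2))
```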

What are its relationships to other concepts?

Factor analysis is connected to several other statistical techniques and concepts, including:


Correlation Matrix

  • Factor analysis begins with a correlation matrix that depicts the pairwise relationships among all observed variables.
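Computing this starting correlation matrix is a one-liner with NumPy (the three variables here are invented, with the first two built to correlate):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=200)
data = np.column_stack([
    x,                                  # variable 1
    x + 0.5 * rng.normal(size=200),    # variable 2: correlated with variable 1
    rng.normal(size=200),              # variable 3: independent noise
])

R = np.corrcoef(data, rowvar=False)    # 3 x 3 correlation matrix
print(np.round(R, 2))                  # diagonal is 1; R[0, 1] is large
```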

Principal Component Analysis (PCA)

  • Often used interchangeably with factor analysis, although PCA analyzes total variance, whereas factor analysis models only the shared (common) variance among variables.
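The distinction is visible in code: PCA's components absorb all variance, noise included, while `FactorAnalysis` estimates a separate noise variance per variable. A sketch on synthetic data with one shared factor and deliberately unequal noise levels:

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(4)
latent = rng.normal(size=(500, 1))
# One shared factor plus very different noise levels per variable.
noise = rng.normal(size=(500, 3)) * np.array([0.1, 0.5, 1.0])
X = latent @ np.array([[0.9, 0.9, 0.9]]) + noise

pca = PCA(n_components=1).fit(X)
fa = FactorAnalysis(n_components=1).fit(X)

# PCA's ratio is over total variance; FA sets per-variable noise aside.
print(pca.explained_variance_ratio_)
print(fa.noise_variance_)   # larger for the noisier variables
```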

Structural Equation Modeling (SEM)

  • SEM includes factor analysis as a component and extends it by allowing for the specification and testing of models that include both latent and observed variables.

Cluster Analysis

  • While factor analysis identifies underlying dimensions, cluster analysis groups observations, not variables, based on their profiles across several variables.

What are its limitations?

  • Subjectivity: The interpretation of factors can be subjective and may require domain-specific expertise.
  • Linearity and Normality Assumptions: Factor analysis assumes a linear relationship among variables and often assumes normality.
  • Sample Size: Reliable factor analysis requires a sufficiently large sample size, with a general rule of thumb being at least five observations per variable.
  • Communality Estimations: Initial estimates of communalities can significantly influence the results.
  • Rotation Ambiguity: Different rotation methods can produce different factor solutions, leading to different interpretations.

How is it used?

Factor analysis is widely used across various fields, particularly in:

  • Psychology: For the construction of psychological scales and identification of personality traits or cognitive abilities.
  • Market Research: To identify patterns in consumer behavior or preferences.
  • Social Sciences: To examine attitudes, socioeconomic factors, or educational measures.
  • Neuroscience: To parse out components of complex cognitive or behavioral processes.

What is its history?

The origins of factor analysis can be traced back to the early 20th century, with the pioneering work of psychologists like Charles Spearman, who used the technique to support his theory of a general intelligence factor. Later, Raymond Cattell expanded the application of factor analysis in the field of personality. The mathematical and statistical underpinnings were further developed by researchers such as Harold Hotelling and L. L. Thurstone.

What are its future possibilities?

The future possibilities for factor analysis are likely to include advancements in computational techniques that allow for more complex models and the analysis of larger data sets. Additionally, integration of factor analysis with machine learning and artificial intelligence could expand its applicability further, offering novel insights into big data in fields such as genomics, neuropsychology, and social network analysis.
