
Understanding principal components

Principal component analysis (PCA) reduces dimensionality by combining overlapping (correlated) feature information into new features called principal components, which are uncorrelated with each other. One way to understand a PCA result is to plot the first two principal components on the x- and y-axes and overlay the feature vectors. This lets you see which features contribute to each principal component. Though it is not always easy, it is good practice to name the principal components after the features that contribute most to them. Even so, as a feature extraction method, PCA can be difficult to interpret.

A subset of the credit data is contained in credit_df. The target variable is credit_score. The tidyverse and ggfortify packages have also been loaded for you.
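Before plotting, it can help to fit the PCA and inspect the loadings directly to see which features drive each component. The sketch below is only illustrative: it assumes the non-target columns of credit_df are numeric, uses prcomp() and summary() from base R, and select() from the already-loaded tidyverse.

# Fit PCA on the features only, scaling each to unit variance
pca_res <- prcomp(credit_df %>% select(-credit_score), scale. = TRUE)

# Share of variance explained by each component
summary(pca_res)

# Loadings show which original features drive each component,
# which helps when naming the principal components
pca_res$rotation[, 1:2]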

This exercise is part of the course Dimensionality Reduction in R.


Exercise instructions

  • Perform principal component analysis on credit_df.
  • Use autoplot() to display the first two principal components, show the feature vectors with their labels, and encode credit_score with colour.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

# Perform PCA
pca_res <- ___(___ %>% select(-___), scale. = ___)

# Plot principal components and feature vectors
___(___, 
         data = ___, 
         colour = '___', 
         alpha = 0.3,
         loadings = ___, 
         loadings.label = ___, 
         loadings.colour = "black", 
         loadings.label.colour = "black")
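One possible completion is sketched below rather than given as the official solution. It assumes prcomp() from base R, autoplot() from the loaded ggfortify package, the credit_score column named above, and scaling to unit variance (scale. = TRUE), which is a common default when features are on different scales.

# Perform PCA on the features, dropping the target
pca_res <- prcomp(credit_df %>% select(-credit_score), scale. = TRUE)

# Plot the first two principal components and the feature vectors
autoplot(pca_res, 
         data = credit_df, 
         colour = 'credit_score', 
         alpha = 0.3,
         loadings = TRUE, 
         loadings.label = TRUE, 
         loadings.colour = "black", 
         loadings.label.colour = "black")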