Understanding principal components
Principal component analysis (PCA) reduces dimensionality by combining features so that the extracted components carry non-overlapping information. These new features, called principal components, are uncorrelated with each other. One way to understand PCA is to plot the two major principal components on the x- and y-axes and overlay the feature vectors (loadings). This lets you see which features contribute to each principal component. Although PCA, like other feature extraction methods, is often difficult to interpret, it is good practice to name the principal components after the features that contribute most to them.
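For illustration, here is a minimal sketch of this kind of plot using R's built-in iris data rather than the course's credit data; it assumes the tidyverse and ggfortify packages are available.

# Illustrative only: PCA biplot on the built-in iris data
library(tidyverse)
library(ggfortify)

# Run PCA on the numeric columns; scaling puts features on comparable units
iris_pca <- prcomp(iris %>% select(-Species), scale. = TRUE)

# Inspect the loadings to see which features drive each component
iris_pca$rotation

# Plot the first two principal components with feature vectors and labels
autoplot(iris_pca,
         data = iris,
         colour = 'Species',
         loadings = TRUE,
         loadings.label = TRUE)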
A subset of the credit data is contained in credit_df. The target variable is credit_score. The tidyverse and ggfortify packages have also been loaded for you.
This exercise is part of the course Dimensionality Reduction in R.
Exercise instructions
- Perform principal component analysis on credit_df.
- Use autoplot() to display the first two principal components, the feature vectors and their labels, and encode credit_score in color.
Interactive exercise
Try this exercise by completing the sample code below.
# Perform PCA
pca_res <- ___(___ %>% select(-___), scale. = ___)
# Plot principal components and feature vectors
___(___,
    data = ___,
    colour = '___',
    alpha = 0.3,
    loadings = ___,
    loadings.label = ___,
    loadings.colour = "black",
    loadings.label.colour = "black")
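For reference, one way to fill in the blanks is sketched below. The choice of scale. = TRUE is an assumption based on standard practice, so the official solution may differ.

# One possible completion (a sketch, not necessarily the official solution)
pca_res <- prcomp(credit_df %>% select(-credit_score), scale. = TRUE)

# Biplot of the first two components, colored by credit_score,
# with feature vectors and their labels drawn in black
autoplot(pca_res,
         data = credit_df,
         colour = 'credit_score',
         alpha = 0.3,
         loadings = TRUE,
         loadings.label = TRUE,
         loadings.colour = "black",
         loadings.label.colour = "black")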