Information loss in factorization
You may wonder how factors with far fewer columns can summarize a larger DataFrame without any loss. In fact, they can't: the factors we create are generally a close approximation of the original data, since some information is inevitably lost. This means that predicted values might not be exact, but they should be close enough to be useful.
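To see this information loss concretely, here is a minimal, self-contained sketch (the ratings values and the choice of two components are made up for illustration, not taken from the course data). It factorizes a small matrix with SVD, keeps only two components, and shows that the product of the truncated factors is close to, but not exactly equal to, the original.
import numpy as np
# Small made-up "ratings" matrix with more structure than two components can capture
ratings = np.array([
    [5.0, 3.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
    [1.0, 0.0, 0.0, 4.0],
    [0.0, 1.0, 5.0, 4.0],
])
# Factorize with SVD and keep only k=2 components (the "far fewer columns")
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
k = 2
user_factors = U[:, :k] * s[:k]   # shape (5, 2)
item_factors = Vt[:k, :]          # shape (2, 4)
# Reconstructing from the truncated factors gives a close, but not exact, copy
reconstructed = np.dot(user_factors, item_factors)
print(np.round(reconstructed, 2))
print("Max absolute error:", np.abs(ratings - reconstructed).max())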
In this exercise, you will inspect the same original pre-factorization DataFrame from the last exercise, loaded as original_df, and compare it to the product of its two factors, user_matrix and item_matrix.
This exercise is part of the course Building Recommendation Engines in Python.
Instructions
- Find the dot product of user_matrix and item_matrix and store it as predictions_df.
Hands-on interactive exercise
Try this exercise by completing the sample code.
import numpy as np
# Multiply the user and item matrices
predictions_df = np.dot(user_matrix, item_matrix)
# Inspect the recreated DataFrame
print(predictions_df)
# Inspect the original DataFrame and compare
print(original_df)
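In the course environment, original_df, user_matrix, and item_matrix are pre-loaded. If you want to try the snippet locally, a minimal sketch with hypothetical factor matrices (the shapes and values below are assumptions, not the course data) could look like this:
import numpy as np
# Hypothetical stand-ins for the pre-loaded course variables
user_matrix = np.array([[1.2, 0.3],
                        [0.4, 1.1],
                        [0.9, 0.8]])
item_matrix = np.array([[4.0, 1.0, 0.5],
                        [0.5, 3.5, 2.0]])
# Same operation as in the exercise: the dot product recreates the ratings
predictions_df = np.dot(user_matrix, item_matrix)
print(predictions_df)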