
Information loss in factorization

You may wonder how factors with far fewer columns can summarize a larger DataFrame without any loss. In fact, they can't: the factors we create are generally only a close approximation of the original data, so some information is inevitably lost. This means that predicted values might not be exact, but they should be close enough to be useful.
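
To see this effect on a small scale, here is a minimal sketch using a hypothetical 4x4 ratings matrix and NumPy's SVD (not necessarily the course's exact factorization method): keeping only two latent features produces factors whose product is close to, but not identical to, the original matrix.

import numpy as np

# A small, hypothetical ratings matrix (users x items)
ratings = np.array([[5.0, 3.0, 0.0, 1.0],
                    [4.0, 0.0, 0.0, 1.0],
                    [1.0, 1.0, 0.0, 5.0],
                    [0.0, 1.0, 5.0, 4.0]])

# Factorize with SVD and keep only k = 2 latent features
U, sigma, Vt = np.linalg.svd(ratings, full_matrices=False)
k = 2
user_factors = U[:, :k] * sigma[:k]   # 4 x 2 user matrix
item_factors = Vt[:k, :]              # 2 x 4 item matrix

# The product of the factors only approximates the original matrix
approximation = np.dot(user_factors, item_factors)
print(np.round(approximation, 2))
print(np.allclose(approximation, ratings))  # False: some information is lost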

In this exercise, you will inspect the same original pre-factorization DataFrame from the last exercise loaded as original_df, and compare it to the product of its two factors, user_matrix and item_matrix.

This exercise is part of the course

Building Recommendation Engines in Python


Exercise instructions

  • Find the dot product of user_matrix and item_matrix and store it as predictions_df.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

import numpy as np

# Multiply the user and item matrices
predictions_df = np.dot(user_matrix, item_matrix)
# Inspect the recreated DataFrame
print(predictions_df)

# Inspect the original DataFrame and compare
print(original_df)
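
Once predictions_df has been computed, one optional way to quantify how close the reconstruction is (not part of the official solution) is the average absolute difference between the two:

# Optional check: average absolute difference between reconstruction and original
# (assumes predictions_df and original_df have the same shape and ordering)
error = np.abs(np.asarray(predictions_df) - np.asarray(original_df)).mean()
print(error)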