
Information loss in factorization

You may wonder how factors with far fewer columns can summarize a larger DataFrame without loss. In fact, they can't: the factors we create are generally only a close approximation of the original data, since some information is inevitably lost. This means that predicted values might not be exact, but they should be close enough to be useful.
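For intuition, here is a minimal sketch of this idea (not part of the exercise) using a small synthetic ratings matrix and a rank-2 truncated SVD; the matrix values and variable names are illustrative only.

import numpy as np

# Small synthetic ratings matrix (4 users x 5 items)
ratings = np.array([[5, 4, 0, 1, 2],
                    [4, 5, 1, 0, 2],
                    [1, 0, 5, 4, 3],
                    [0, 1, 4, 5, 3]], dtype=float)

# Rank-2 truncated SVD gives two much smaller factor matrices
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
user_factors = U[:, :2] * s[:2]   # shape (4, 2)
item_factors = Vt[:2, :]          # shape (2, 5)

# Their product approximates, but does not exactly reproduce, the original
approx = np.dot(user_factors, item_factors)
print(np.round(approx, 2))
print(np.abs(ratings - approx).max())  # small but non-zero reconstruction error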

In this exercise, you will inspect the original pre-factorization DataFrame from the previous exercise, loaded as original_df, and compare it to the product of its two factors, user_matrix and item_matrix.

This exercise is part of the course

Building Recommendation Engines in Python


Exercise instructions

  • Find the dot product of user_matrix and item_matrix and store it as predictions_df.

Interactive exercise

Complete the sample code to finish this exercise successfully.

import numpy as np

# Multiply the user and item matrices
predictions_df = ____.____(____, ____)
# Inspect the recreated DataFrame
print(predictions_df)

# Inspect the original DataFrame and compare
print(original_df)
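One way to fill in the blanks, as a sketch of the completed line, assuming user_matrix and item_matrix are NumPy arrays (or pandas DataFrames) with compatible shapes:

# Multiply the user and item matrices with NumPy's dot product
predictions_df = np.dot(user_matrix, item_matrix)

Printing predictions_df next to original_df should show values that are close to, but not exactly equal to, the original ratings.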