
Tf-idf

While counts of word occurrences can be useful for building models, words that occur many times may skew the results undesirably. To keep these common words from overpowering your model, a form of normalization can be used. In this lesson you will be using Term frequency-inverse document frequency (Tf-idf), as discussed in the video. Tf-idf reduces the weight of common words while increasing the weight of words that do not occur in many documents.
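To see this effect concretely, here is a minimal sketch using a toy three-document corpus (the corpus is illustrative, not part of the exercise): a word appearing in every document receives a lower inverse document frequency, and therefore a lower Tf-idf weight, than a word confined to a single document.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus: "the" occurs in all three documents, "cat" in only one
corpus = [
    "the cat sat",
    "the dog ran",
    "the bird flew",
]

tv = TfidfVectorizer()
weights = tv.fit_transform(corpus).toarray()
vocab = tv.vocabulary_

# In document 0 both words occur once, yet the rarer word "cat"
# receives a higher Tf-idf weight than the ubiquitous "the"
print(weights[0, vocab["the"]] < weights[0, vocab["cat"]])  # True
```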

This exercise is part of the course

Feature Engineering for Machine Learning in Python


Exercise instructions

  • Import TfidfVectorizer from sklearn.feature_extraction.text.
  • Instantiate TfidfVectorizer while limiting the number of features to 100 and removing English stop words.
  • Fit and apply the vectorizer on text_clean column in one step.
  • Create a DataFrame tv_df containing the weights of the words and the feature names as the column names.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

# Import TfidfVectorizer
____

# Instantiate TfidfVectorizer
tv = ____

# Fit the vectorizer and transform the data
tv_transformed = ____(speech_df['text_clean'])

# Create a DataFrame with these features
tv_df = pd.DataFrame(tv_transformed.____, 
                     columns=tv.____).add_prefix('TFIDF_')
print(tv_df.head())