
Tf-idf

While counts of word occurrences can be useful for building models, words that occur many times may skew the results undesirably. To prevent these common words from overpowering your model, a form of normalization can be used. In this lesson you will be using Term frequency-inverse document frequency (Tf-idf), as discussed in the video. Tf-idf has the effect of reducing the weight of common words while increasing the weight of words that do not occur in many documents.
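For intuition, here is a minimal sketch of that downweighting effect. It is not part of the exercise: the three toy documents are made up for illustration, and get_feature_names_out assumes scikit-learn 1.0 or newer (older versions use get_feature_names).

# Toy illustration: "the" appears in every document, so tf-idf pushes its weight
# down relative to rarer words such as "liberty" or "strong"
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ['the union is strong',
        'the union endures',
        'liberty for the people']

tv_demo = TfidfVectorizer()
weights = tv_demo.fit_transform(docs)

# Each row is a document, each column a word, each value a tf-idf weight
print(pd.DataFrame(weights.toarray().round(2),
                   columns=tv_demo.get_feature_names_out()))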

This exercise is part of the course

Feature Engineering for Machine Learning in Python


Exercise instructions

  • Import TfidfVectorizer from sklearn.feature_extraction.text.
  • Instantiate TfidfVectorizer while limiting the number of features to 100 and removing English stop words.
  • Fit and apply the vectorizer to the text_clean column in one step.
  • Create a DataFrame tv_df containing the weights of the words and the feature names as the column names.

Hands-on interactive exercise

Try this exercise by completing this sample code.

# Import TfidfVectorizer
____

# Instantiate TfidfVectorizer
tv = ____

# Fit the vectorizer and transform the data
tv_transformed = ____(speech_df['text_clean'])

# Create a DataFrame with these features
tv_df = pd.DataFrame(tv_transformed.____, 
                     columns=tv.____).add_prefix('TFIDF_')
print(tv_df.head())
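For reference, here is one way the blanks could be filled in. This is a sketch, not the official course solution: it assumes scikit-learn 1.0 or newer (get_feature_names_out replaces the older get_feature_names), and the tiny speech_df built below is only a hypothetical stand-in for the DataFrame the exercise environment provides.

import pandas as pd
# Import TfidfVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical stand-in for the speech_df provided by the exercise environment
speech_df = pd.DataFrame({'text_clean': [
    'fellow citizens of the senate and house',
    'fellow citizens i am again called upon to serve',
]})

# Instantiate TfidfVectorizer, limiting to 100 features and removing English stop words
tv = TfidfVectorizer(max_features=100, stop_words='english')

# Fit the vectorizer and transform the text_clean column in one step
tv_transformed = tv.fit_transform(speech_df['text_clean'])

# Create a DataFrame of tf-idf weights with the feature names as column names
tv_df = pd.DataFrame(tv_transformed.toarray(),
                     columns=tv.get_feature_names_out()).add_prefix('TFIDF_')
print(tv_df.head())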