Tf-idf
While counts of word occurrences can be useful for building models, words that occur many times may skew the results undesirably. To keep these common words from overpowering your model, a form of normalization can be used. In this lesson you will be using Term frequency-inverse document frequency (Tf-idf), as was discussed in the video. Tf-idf has the effect of reducing the weight of common words while increasing the weight of words that do not occur in many documents.
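To see this effect on a small example, here is a minimal, self-contained sketch (the toy corpus and variable names are made up for illustration; it uses scikit-learn's default settings and `get_feature_names_out()`, available in recent scikit-learn releases):

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus: "the" appears in every document, "gravity" in only one
docs = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "gravity bends the path of light",
]

vec = TfidfVectorizer()
weights = vec.fit_transform(docs)

# Words that appear in every document (like "the") end up with lower
# tf-idf weights than words confined to a single document (like "gravity")
toy_df = pd.DataFrame(weights.toarray(), columns=vec.get_feature_names_out())
print(toy_df.round(2))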
This exercise is part of the course
Feature Engineering for Machine Learning in Python
Exercise instructions
- Import TfidfVectorizer from sklearn.feature_extraction.text.
- Instantiate TfidfVectorizer while limiting the number of features to 100 and removing English stop words.
- Fit and apply the vectorizer on the text_clean column in one step.
- Create a DataFrame tv_df containing the weights of the words and the feature names as the column names.
Hands-on interactive exercise
Try this exercise by filling in this sample code.
# Import TfidfVectorizer
____
# Instantiate TfidfVectorizer
tv = ____
# Fit the vectorizer and transform the data
tv_transformed = ____(speech_df['text_clean'])
# Create a DataFrame with these features
tv_df = pd.DataFrame(tv_transformed.____,
columns=tv.____).add_prefix('TFIDF_')
print(tv_df.head())
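One possible completion of the skeleton above, shown as a sketch rather than the official solution. It assumes `speech_df` is a pandas DataFrame with a `text_clean` column, and a recent scikit-learn that exposes `get_feature_names_out()` (older releases use `get_feature_names()` instead):

# Import TfidfVectorizer
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# Instantiate TfidfVectorizer: keep the 100 most frequent terms and drop English stop words
tv = TfidfVectorizer(max_features=100, stop_words='english')

# Fit the vectorizer and transform the data in one step
tv_transformed = tv.fit_transform(speech_df['text_clean'])

# Create a DataFrame with these features, using the feature names as columns
tv_df = pd.DataFrame(tv_transformed.toarray(),
                     columns=tv.get_feature_names_out()).add_prefix('TFIDF_')

print(tv_df.head())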