Course: Natural Language Processing (NLP) in Python
Exercise

Visualizing and comparing word embeddings

Word embeddings are high-dimensional, making them hard to interpret directly. In this exercise, you'll project a few word vectors down to 2D using Principal Component Analysis (PCA) and visualize them. This helps reveal semantic groupings or similarities between words in the embedding space. Then, you will compare the embedding representations of two models: glove-wiki-gigaword-50 available through the variable model_glove_wiki, and glove-twitter-25 available through model_glove_twitter.

Instructions 1/2

  1. Extract the embeddings of each word using model_glove_wiki and reduce the dimensions with PCA.
  2. Extract the embeddings of each word using model_glove_twitter.
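The two steps above can be sketched as follows. In the exercise, model_glove_wiki and model_glove_twitter are pre-loaded gensim KeyedVectors models, so model[word] returns the word's GloVe vector; here random vectors of the matching dimensions (50 and 25) stand in for them so the sketch runs standalone, and the word list is a hypothetical example.

```python
import numpy as np
from sklearn.decomposition import PCA

words = ["king", "queen", "man", "woman", "coffee", "tea"]

# Stand-ins for the exercise's pre-loaded models: in the real exercise,
# model_glove_wiki[word] is a 50-d GloVe vector and
# model_glove_twitter[word] is a 25-d GloVe vector (gensim KeyedVectors).
rng = np.random.default_rng(0)
model_glove_wiki = {w: rng.normal(size=50) for w in words}
model_glove_twitter = {w: rng.normal(size=25) for w in words}

# Step 1: stack the wiki embeddings into a (n_words, 50) array,
# then project down to 2D with PCA.
wiki_vectors = np.array([model_glove_wiki[w] for w in words])
wiki_2d = PCA(n_components=2).fit_transform(wiki_vectors)

# Step 2: extract the twitter embeddings the same way (25-d vectors).
twitter_vectors = np.array([model_glove_twitter[w] for w in words])
twitter_2d = PCA(n_components=2).fit_transform(twitter_vectors)

print(wiki_2d.shape)     # (6, 2)
print(twitter_2d.shape)  # (6, 2)
```

Each row of wiki_2d (and, after the same reduction, twitter_2d) can then be scattered with matplotlib and annotated with its word, which is how the semantic groupings become visible.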