
More than words: tokenization (3)

ft_tokenizer() uses a simple technique to generate words: it splits text data on spaces. For more advanced usage, you can split the text data using regular expressions. This is done via the ft_regex_tokenizer() function, which has the same usage as ft_tokenizer(), but takes an extra pattern argument specifying the regular expression to split on.

a_tibble %>%
  ft_regex_tokenizer("x", "y", pattern = regex_pattern)

The return value from ft_regex_tokenizer(), like that of ft_tokenizer(), is a list of lists of character vectors.

The dataset contains a field named artist_mbid that contains an ID for the artist on MusicBrainz, a music metadata encyclopedia website. The IDs take the form of hexadecimal numbers split by hyphens, for example, 65b785d9-499f-48e6-9063-3a1fd1bd488d.
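To preview what the hyphen pattern does before running it on Spark, you can apply the same regular-expression split in base R with strsplit() (a local sketch only; the actual exercise uses ft_regex_tokenizer() on the Spark side):

```r
# Splitting a MusicBrainz ID on hyphens with base R's strsplit(),
# which uses the same regular-expression split semantics
mbid <- "65b785d9-499f-48e6-9063-3a1fd1bd488d"
chunks <- strsplit(mbid, "-")[[1]]
chunks
# Five hexadecimal chunks: "65b785d9" "499f" "48e6" "9063" "3a1fd1bd488d"
```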

This exercise is part of the course

Introduction to Spark with sparklyr in R


Exercise instructions

  • Select the artist_mbid field from track_metadata_tbl.
  • Split the MusicBrainz IDs into chunks of hexadecimal numbers.
    • Call ft_regex_tokenizer().
    • The output column should be called artist_mbid_chunks.
    • Use a hyphen, "-", for the pattern argument.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

# track_metadata_tbl has been pre-defined
track_metadata_tbl

track_metadata_tbl %>%
  # Select artist_mbid column
  ___ %>%
  # Split it by hyphens
  ___