
Splitting by character

A key process in implementing Retrieval Augmented Generation (RAG) is splitting documents into chunks for storage in a vector database.

There are several splitting strategies available in LangChain, some with more complex routines than others. In this exercise, you'll implement a character text splitter, which splits documents based on characters and measures the chunk length by the number of characters.

Remember that there is no ideal splitting strategy; you may need to experiment with a few to find the right one for your use case.
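
To see what chunk_size and chunk_overlap mean in practice, here is a minimal, illustrative sketch in plain Python (the sliding_chunks function and the sample string are made up for this example and are not part of the exercise). LangChain's CharacterTextSplitter works differently under the hood, splitting on a separator first and then merging the pieces back up to chunk_size, but the two parameters play the same role.

# Toy sliding-window chunker: each chunk holds at most `chunk_size`
# characters and repeats the last `chunk_overlap` characters of the
# previous chunk, so neighbouring chunks share some context.
def sliding_chunks(text, chunk_size, chunk_overlap):
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

print(sliding_chunks("abcdefghij", chunk_size=4, chunk_overlap=2))
# ['abcd', 'cdef', 'efgh', 'ghij', 'ij']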

This exercise is part of the course

Developing LLM Applications with LangChain


Exercise instructions

  • Import the CharacterTextSplitter class from langchain_text_splitters.
  • Create a CharacterTextSplitter instance with separator="\n", chunk_size=24, and chunk_overlap=10.
  • Use the .split_text() method to split the quote and print the chunks and chunk lengths.

Hands-on interactive exercise

Try to solve this exercise by completing the sample code.

# Import the character splitter
from langchain_text_splitters import ____

quote = 'Words are flowing out like endless rain into a paper cup,\nthey slither while they pass,\nthey slip away across the universe.'
chunk_size = 24
chunk_overlap = 10

# Create an instance of the splitter class
splitter = CharacterTextSplitter(
    separator=____,
    chunk_size=____,
    chunk_overlap=____)

# Split the string and print the chunks
docs = splitter.____(quote)
print(docs)
print([len(doc) for doc in docs])
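
For reference, a completed version of the scaffold is sketched below, with the blanks filled in using the values given in the instructions.

# Import the character splitter
from langchain_text_splitters import CharacterTextSplitter

quote = 'Words are flowing out like endless rain into a paper cup,\nthey slither while they pass,\nthey slip away across the universe.'
chunk_size = 24
chunk_overlap = 10

# Create an instance of the splitter class, splitting on newlines
splitter = CharacterTextSplitter(
    separator="\n",
    chunk_size=chunk_size,
    chunk_overlap=chunk_overlap)

# Split the string and print the chunks and their lengths
docs = splitter.split_text(quote)
print(docs)
print([len(doc) for doc in docs])

Note that every line of the quote is longer than 24 characters, so the splitter cannot shorten them further: each line should come back as its own chunk, and CharacterTextSplitter typically logs a warning that the chunks exceed the specified chunk_size.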