Exercise

RAG question-answering function

You're almost there! The final piece in the RAG workflow is to integrate the retrieved documents with a question-answering model.

A prompt_with_context_builder() function has already been defined and made available to you. This function takes the documents retrieved from the Pinecone index and integrates them into a prompt that the question-answering model can ingest:

def prompt_with_context_builder(query, docs):
    # Separator placed between retrieved documents in the context block
    delim = '\n\n---\n\n'
    prompt_start = 'Answer the question based on the context below.\n\nContext:\n'
    prompt_end = f'\n\nQuestion: {query}\nAnswer:'

    # Instruction, then the joined documents, then the question itself
    prompt = prompt_start + delim.join(docs) + prompt_end
    return prompt
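
For example, with a hypothetical query and two made-up retrieved snippets, the builder produces a prompt shaped like this:

docs = [
    "Pinecone is a managed vector database.",
    "RAG systems retrieve documents to ground model answers.",
]
print(prompt_with_context_builder("What is Pinecone?", docs))

# Answer the question based on the context below.
#
# Context:
# Pinecone is a managed vector database.
#
# ---
#
# RAG systems retrieve documents to ground model answers.
#
# Question: What is Pinecone?
# Answer: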

You'll implement the question_answering() function, which provides OpenAI's gpt-4o-mini language model with this additional context and its sources so it can answer your questions.

Instructions

100 XP
  • Initialize the Pinecone client with your API key (the OpenAI client is available as client).
  • Retrieve the three most similar documents to the query text from the 'youtube_rag_dataset' namespace.
  • Generate a response to the provided prompt and sys_prompt using OpenAI's 'gpt-4o-mini' model, specified using the chat_model function argument. A sketch combining these steps follows below.
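
Putting the steps above together, here is a minimal sketch of the workflow, not the exercise's reference solution. The index name 'pinecone-datacamp', the embedding model 'text-embedding-3-small', the 'text' metadata field, the retrieve() helper, the sys_prompt text, and the example query are all assumptions for illustration; your exercise environment may use different names.

import os
from pinecone import Pinecone
from openai import OpenAI

client = OpenAI()  # in the exercise, this client is already available

# Step 1: initialize the Pinecone client with your API key
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("pinecone-datacamp")  # assumed index name

def retrieve(query, top_k, namespace, emb_model):
    # Embed the query text, then fetch the most similar documents
    query_emb = client.embeddings.create(input=query, model=emb_model).data[0].embedding
    res = index.query(vector=query_emb, top_k=top_k,
                      namespace=namespace, include_metadata=True)
    # The 'text' metadata field is an assumption about how documents were upserted
    return [match["metadata"]["text"] for match in res["matches"]]

def question_answering(prompt, sys_prompt, chat_model):
    # Step 3: send the context-enriched prompt to the chat model
    response = client.chat.completions.create(
        model=chat_model,
        messages=[
            {"role": "system", "content": sys_prompt},
            {"role": "user", "content": prompt},
        ],
        temperature=0,  # deterministic answers
    )
    return response.choices[0].message.content

# Step 2: retrieve the three most similar documents, then build the prompt
query = "How do vector databases store embeddings?"  # hypothetical query
docs = retrieve(query, top_k=3, namespace="youtube_rag_dataset",
                emb_model="text-embedding-3-small")
prompt = prompt_with_context_builder(query, docs)

sys_prompt = "You are a helpful assistant that answers questions using the provided context."
print(question_answering(prompt, sys_prompt, chat_model="gpt-4o-mini"))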