Loading an AI model at server startup
You need to deploy a trained sentiment analysis model that moderates user comments. To ensure zero downtime, the API must be ready to analyze comments as soon as it starts up.
In this exercise, you'll implement FastAPI's lifespan events to load the model efficiently and build the comment moderation system. The SentimentAnalyzer model class is already defined and imported for you.
This exercise is part of the course
Deploying AI into Production with FastAPI
Exercise instructions
- Import the context manager decorator from the contextlib module to create the lifespan event.
- Use FastAPI's context manager decorator to define the lifespan event function, ensuring the model loads at startup.
- Call the function to load the model at startup inside the lifespan event.
- Yield to allow the server loading process to continue.
Interactive hands-on exercise
Try to solve this exercise by completing the sample code.
# Import the context manager decorator from the contextlib module
from contextlib import ____

from fastapi import FastAPI

sentiment_model = None

def load_model():
    global sentiment_model
    sentiment_model = SentimentAnalyzer("sentiment_model.joblib")

# Use FastAPI's context manager decorator to define the lifespan event
@____
async def lifespan(app: FastAPI):
    # Call the function to load the model at startup
    ____
    # Yield to allow the server loading process to continue
    ____
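For reference, here is a minimal sketch of how the completed setup could look. It assumes the decorator the exercise refers to is contextlib's asynccontextmanager, that the app is created with FastAPI(lifespan=lifespan), and that SentimentAnalyzer and the sentiment_model.joblib path come from the scaffold above (SentimentAnalyzer itself is provided by the exercise environment, not defined here).

from contextlib import asynccontextmanager

from fastapi import FastAPI

sentiment_model = None

def load_model():
    global sentiment_model
    # SentimentAnalyzer and the .joblib path follow the exercise scaffold
    sentiment_model = SentimentAnalyzer("sentiment_model.joblib")

# asynccontextmanager is assumed to be the context manager decorator
@asynccontextmanager
async def lifespan(app: FastAPI):
    # Load the model once, before the server starts accepting requests
    load_model()
    # Yield control so FastAPI can finish starting up and serve traffic
    yield

# Register the lifespan handler when creating the app
app = FastAPI(lifespan=lifespan)

With this pattern the model is loaded exactly once at startup, and any cleanup code placed after the yield would run when the server shuts down.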