Loading AI model at server startup
You need to deploy a trained sentiment analysis model that helps moderate user comments. To ensure zero downtime, the API must be ready to analyze comments as soon as it starts up.
In this exercise, you'll implement FastAPI's lifespan events to load your model efficiently and build the comment moderation system. The SentimentAnalyzer model class is already defined and imported for you.
This exercise is part of the course Deploying AI into Production with FastAPI.
Exercise instructions
- Import the context manager decorator from the contextlib module to create the lifespan event.
- Use the context manager decorator to define the lifespan event function so the model loads at startup.
- Call the function that loads the model inside the lifespan event.
- Yield to allow the server startup process to continue (a general sketch of this pattern follows the list).
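Before you start, here is a minimal sketch of the general lifespan pattern these steps describe. It uses a generic placeholder resource instead of the sentiment model; the resource name and the cleanup step after the yield are illustrative assumptions, not part of the exercise.

# A minimal sketch of FastAPI's lifespan pattern with a generic resource.
# The resource name and the cleanup step are illustrative assumptions.
from contextlib import asynccontextmanager
from fastapi import FastAPI

resource = None  # stands in for anything expensive to set up

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup: runs once, before the app begins serving requests
    global resource
    resource = {"status": "ready"}
    yield  # hand control back so the server can start handling traffic
    # Shutdown: runs once, when the server is stopping
    resource = None

app = FastAPI(lifespan=lifespan)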
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
# Import the context manager decorator from the contextlib module
from contextlib import ____

from fastapi import FastAPI

sentiment_model = None

def load_model():
    global sentiment_model
    sentiment_model = SentimentAnalyzer("sentiment_model.joblib")

# Use the context manager decorator to define the lifespan event
@____
async def lifespan(app: FastAPI):
    # Call the function to load the model at startup
    ____
    # Yield so the server startup process can continue
    ____
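Once the blanks are filled in, the lifespan function still has to be registered on the app. The sketch below assumes the sample above has been completed; the /moderate route and the predict() method on SentimentAnalyzer are illustrative assumptions, not part of the exercise.

# A minimal sketch, assuming the completed lifespan above; the route name
# and predict() call are illustrative assumptions.
app = FastAPI(lifespan=lifespan)

@app.post("/moderate")
def moderate(comment: str):
    # sentiment_model was loaded during the lifespan startup phase,
    # so it is available as soon as the server accepts requests.
    return {"sentiment": sentiment_model.predict(comment)}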