
Optimizing models for scalability

Deploying AI models efficiently is crucial for real-world applications, where inference speed, model size, and computational efficiency matter. Now we will test your ability to save and load models for deployment. You will use techniques like TorchScript export to complete the workflow. The dataset used is a variation of the MNIST dataset.

By completing this exercise, you will have prepared a model optimized for deployment while applying advanced techniques learned in this lesson.

The X_test and y_test datasets, as well as torch.jit, have been preloaded for you.
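Before tackling the exercise, it helps to see the full trace–save–load cycle on a standalone model. The sketch below uses a toy CNN and a dummy 28x28 input in place of the preloaded model and data (both the architecture and the input shape are assumptions for illustration):

```python
import torch
import torch.nn as nn

# Toy CNN standing in for the MNIST-style model (hypothetical architecture)
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 4, kernel_size=3)   # 28x28 -> 26x26
        self.fc = nn.Linear(4 * 26 * 26, 10)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        return self.fc(x.flatten(1))

model = TinyNet().eval()
example = torch.randn(1, 1, 28, 28)  # one dummy grayscale image

# trace() records the operations executed on the example input
scripted = torch.jit.trace(model, example)
torch.jit.save(scripted, "model.pt")   # serialize to disk
loaded = torch.jit.load("model.pt")    # reload without the Python class

print(loaded(example).shape)  # torch.Size([1, 10])
```

Note that tracing only captures the operations run for the given example input, so models with data-dependent control flow may need `torch.jit.script` instead.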

This exercise is part of the course

Scalable AI Models with PyTorch Lightning


Exercise instructions

  • Export the model to TorchScript using the trace function.
  • Save the TorchScript model to disk.
  • Load the saved model.

Interactive hands-on exercise

Try to solve this exercise by completing the sample code.

# Export model to TorchScript
scripted_model = torch.jit.____(model, torch.tensor(X_test[:1], dtype=torch.float32).unsqueeze(1))
# Save model to TorchScript
torch.jit.____(scripted_model, 'model.pt')

# Load the saved model
loaded_model = torch.jit.____('____.pt')
# Validate inference on test dataset
test_loader = DataLoader(TensorDataset(torch.tensor(X_test, dtype=torch.float32).unsqueeze(1), ____), batch_size=64)

accuracy = evaluate_model(loaded_model, test_loader)

print(f"Optimized model accuracy: {accuracy:.2%}")
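The exercise relies on a preloaded `evaluate_model` helper whose implementation is not shown. A plausible minimal version, demonstrated here with a trivial linear model and random data (shapes and the demo model are assumptions, not the course's actual setup), would look like this:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def evaluate_model(model, loader):
    """Hypothetical helper: fraction of correct predictions over a DataLoader."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for xb, yb in loader:
            preds = model(xb).argmax(dim=1)     # class with highest score
            correct += (preds == yb).sum().item()
            total += yb.size(0)
    return correct / total

# Demo on random data with an untrained linear model
demo_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
X = torch.randn(64, 1, 28, 28)
y = torch.randint(0, 10, (64,))
demo_loader = DataLoader(TensorDataset(X, y), batch_size=16)
print(f"accuracy: {evaluate_model(demo_model, demo_loader):.2%}")
```

Since the demo model is untrained, its accuracy will hover around chance level (about 10% for 10 classes).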