
Optimizing models for scalability

Deploying AI models efficiently is crucial for real-world applications where inference speed, model size, and computational efficiency matter. This exercise tests your ability to save and load models for deployment, using techniques such as TorchScript export to complete the workflow. The dataset used is a variation of the MNIST dataset.

By completing this exercise, you will have prepared a model optimized for deployment while applying advanced techniques learned in this lesson.

The X_test and y_test datasets, as well as torch.jit, have been preloaded for you.

This exercise is part of the course

Scalable AI Models with PyTorch Lightning

Exercise instructions

  • Export the model to TorchScript using the trace function.
  • Save the TorchScript model to disk.
  • Load the saved model.
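The three steps above can be sketched end to end on a toy model. The architecture below is hypothetical (the real model is preloaded in the exercise), but the torch.jit.trace, torch.jit.save, and torch.jit.load calls are the same ones the workflow uses:

```python
import torch
import torch.nn as nn

# A toy CNN standing in for the preloaded model (hypothetical architecture)
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3),
    nn.Flatten(),
    nn.Linear(8 * 26 * 26, 10),
)
model.eval()

example_input = torch.randn(1, 1, 28, 28)  # one MNIST-sized image

# Step 1: export via tracing, recording the ops run on the example input
scripted = torch.jit.trace(model, example_input)
# Step 2: save the TorchScript archive to disk
torch.jit.save(scripted, "toy_model.pt")
# Step 3: load it back, independent of the original Python class
restored = torch.jit.load("toy_model.pt")

# Traced and restored models agree on the example input
assert torch.allclose(scripted(example_input), restored(example_input))
```

Tracing records the operations executed on the example input, so the example tensor must have the same shape and dtype the model expects at inference time.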

Hands-on interactive exercise

Complete this sample code to finish the exercise.

# Export model to TorchScript
scripted_model = torch.jit.trace(model, torch.tensor(X_test[:1], dtype=torch.float32).unsqueeze(1))
# Save model to TorchScript
torch.jit.save(scripted_model, 'model.pt')

# Load the saved model
loaded_model = torch.jit.load('model.pt')
# Validate inference on the test dataset
test_loader = DataLoader(TensorDataset(torch.tensor(X_test, dtype=torch.float32).unsqueeze(1), torch.tensor(y_test, dtype=torch.long)), batch_size=64)

accuracy = evaluate_model(loaded_model, test_loader)

print(f"Optimized model accuracy: {accuracy:.2%}")
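The evaluate_model helper is preloaded in the exercise, so its exact implementation is not shown; a plausible sketch, assuming it returns the fraction of correctly classified examples, looks like this:

```python
import torch

def evaluate_model(model, loader):
    """Sketch of the preloaded helper (assumed behavior):
    returns the fraction of test examples classified correctly."""
    correct, total = 0, 0
    model.eval()
    with torch.no_grad():  # no gradients needed for evaluation
        for features, labels in loader:
            preds = model(features).argmax(dim=1)  # predicted class per example
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    return correct / total
```

Because the returned value is a fraction in [0, 1], formatting it with :.2% in the print statement renders it as a percentage.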