Inference providers
In some cases, you may lack the hardware to run Hugging Face models locally. Large-parameter LLMs, and image and video generation models in particular, often require Graphics Processing Units (GPUs) to parallelize their computations. Hugging Face offers Inference Providers to outsource this hardware to third-party partners.
This exercise is part of the Working with Hugging Face course.