
Checking the reward model

You go back to fine-tuning the model and notice that its performance is still worse than the base model's. This time, you want to inspect the reward model, so you've produced a dataset of results from the model to analyze. What checks will you make on the output data?

The dataset has been pre-imported as `reward_model_results`.
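As a sketch of the kind of checks this asks for, the snippet below builds a small stand-in for `reward_model_results` (the column names `chosen_reward` and `rejected_reward` are assumptions, not given in the exercise) and computes two common sanity checks: how often the reward model scores the human-preferred completion higher, and the average margin between the two scores.

```python
import pandas as pd

# Hypothetical stand-in for the pre-imported reward_model_results dataset:
# each row holds the reward the model assigned to the human-preferred
# ("chosen") and dispreferred ("rejected") completion of the same prompt.
reward_model_results = pd.DataFrame({
    "chosen_reward":   [1.8, 0.9, 2.1, -0.2, 1.4],
    "rejected_reward": [0.3, 1.1, 0.5, -0.8, 0.2],
})

# Check 1: the chosen completion should usually receive the higher reward.
accuracy = (reward_model_results["chosen_reward"]
            > reward_model_results["rejected_reward"]).mean()

# Check 2: the margin between chosen and rejected rewards should be
# positive on average; a small or negative margin signals a reward model
# that barely separates good completions from bad ones.
margin = (reward_model_results["chosen_reward"]
          - reward_model_results["rejected_reward"]).mean()

print(f"preference accuracy: {accuracy:.2f}")
print(f"mean reward margin: {margin:.2f}")
```

A low preference accuracy here would explain why RLHF fine-tuning degrades the policy: the model is being optimized toward a reward signal that disagrees with the human labels.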

This exercise is part of the course

Reinforcement Learning from Human Feedback (RLHF)
