hh-rlhf
anthropics/hh-rlhf
Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"
Stars: 1,837
Forks: 155
Open issues: 0
Watchers: 1,837
Size: 28.1 MB
License: MIT
Created: Apr 10, 2022
Updated: Apr 9, 2026
Last push: Jun 17, 2025
Status: Archived
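The repository distributes its human preference data as JSONL files in which each line holds one comparison: a "chosen" and a "rejected" dialogue transcript. A minimal sketch of parsing one such record with the standard library, assuming the `\n\nHuman:` / `\n\nAssistant:` turn delimiters used in the released files (the inline record here is an illustrative example, not real data):

```python
import json

# One preference pair in the assumed hh-rlhf format: two full
# transcripts of the same conversation, one preferred over the other.
record = json.loads(
    '{"chosen": "\\n\\nHuman: Hi\\n\\nAssistant: Hello!", '
    '"rejected": "\\n\\nHuman: Hi\\n\\nAssistant: Go away."}'
)

def split_turns(transcript: str) -> list[str]:
    # Turns are separated by blank lines ("\n\n") in the transcript.
    return [t.strip() for t in transcript.split("\n\n") if t.strip()]

chosen_turns = split_turns(record["chosen"])
rejected_turns = split_turns(record["rejected"])
```

A reward model would typically be trained to score `record["chosen"]` above `record["rejected"]` for the same prompt prefix.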