anthropics/hh-rlhf

Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"

Stars: 1,817
Forks: 153
Open issues: 0
Watchers: 1,817
Size: 28.1 MB
License: MIT
Created: Apr 10, 2022
Updated: Feb 24, 2026
Last push: Jun 17, 2025
Status: Archived