RLHFlow

Code for the Workflow of Reinforcement Learning from Human Feedback (RLHF)

Popular repositories

  1. RLHF-Reward-Modeling

    Recipes to train reward models for RLHF.

    Python · 1.4k stars · 99 forks

  2. Online-RLHF

    A recipe for online RLHF and online iterative DPO (a minimal sketch of the underlying objectives follows this list).

    Python · 528 stars · 50 forks

  3. Online-DPO-R1

    Codebase for iterative DPO using rule-based rewards.

    Python · 256 stars · 32 forks

  4. Minimal-RL

    Python · 229 stars · 11 forks

  5. Self-rewarding-reasoning-LLM

    Recipes to train self-rewarding reasoning LLMs.

    Python · 226 stars · 10 forks

  6. Directional-Preference-Alignment

    Directional Preference Alignment.

    59 stars · 3 forks
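Several of the repositories above revolve around preference optimization. As background for items 1 and 2, here is a minimal, hypothetical PyTorch sketch of the two objectives such recipes typically build on: the Bradley-Terry reward-modeling loss and the DPO loss. This is an illustration only, not code from these repositories, and every function and argument name in it is invented for the example.

    import torch
    import torch.nn.functional as F

    def reward_model_loss(r_chosen: torch.Tensor,
                          r_rejected: torch.Tensor) -> torch.Tensor:
        # Bradley-Terry pairwise loss: push the scalar reward of the
        # chosen response above the rejected one, -log sigmoid(r_w - r_l).
        return -F.logsigmoid(r_chosen - r_rejected).mean()

    def dpo_loss(policy_logp_chosen: torch.Tensor,
                 policy_logp_rejected: torch.Tensor,
                 ref_logp_chosen: torch.Tensor,
                 ref_logp_rejected: torch.Tensor,
                 beta: float = 0.1) -> torch.Tensor:
        # DPO objective: the implicit reward is beta times the log-ratio
        # of policy to reference probabilities; apply the Bradley-Terry
        # loss to that implicit reward.
        chosen_ratio = policy_logp_chosen - ref_logp_chosen
        rejected_ratio = policy_logp_rejected - ref_logp_rejected
        return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

Roughly, the "online" and "iterative" variants resample fresh response pairs from the current policy each round, label them (with a trained reward model, or with a rule as in Online-DPO-R1), and feed them back into the same pairwise loss.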

Repositories

The organization hosts 10 repositories in total.

People

This organization has no public members.
