Illustrating Reinforcement Learning from Human Feedback (RLHF)
Archived series ("Inactive feed" status)
When? This feed was archived on February 21, 2025 21:08. The last successful fetch was on January 2, 2025 12:05.
Why? Inactive feed status. Our servers were unable to retrieve a valid podcast feed for a sustained period.
What now? You might be able to find a more up-to-date version using the search function. This series will no longer be checked for updates. If you believe this to be in error, check that the publisher's feed link is valid and contact support.
This more technical article explains the motivation for a system like RLHF, and adds concrete detail on how the RLHF approach is applied to neural networks.
While reading, consider which parts of the technical implementation correspond to the 'values coach' and 'coherence coach' from the previous video.
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.
Chapters
1. Illustrating Reinforcement Learning from Human Feedback (RLHF) (00:00:00)
2. RLHF: Let’s take it step by step (00:03:16)
3. Pretraining language models (00:03:51)
4. Reward model training (00:05:46)
5. Fine-tuning with RL (00:09:26)
6. Open-source tools for RLHF (00:16:10)
7. What’s next for RLHF? (00:18:20)
8. Further reading (00:21:17)
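The three core stages in the chapters above (pretraining a language model, training a reward model on human preferences, and fine-tuning with RL under a KL penalty) can be sketched in miniature. The following toy example is a hypothetical illustration, not code from the article: the "language model" is a softmax policy over a two-token vocabulary, the learned reward model is replaced by a stand-in function, and the RL step is a plain REINFORCE update on the KL-penalized reward.

```python
import math
import random

def softmax(logits):
    """Convert logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reward_model(token):
    """Stand-in for a reward model trained on human preference data:
    it simply prefers token 1 over token 0."""
    return 1.0 if token == 1 else 0.0

def rlhf_finetune(steps=2000, lr=0.1, kl_coef=0.1, seed=0):
    random.seed(seed)
    logits = [0.0, 0.0]          # trainable policy parameters
    ref_probs = softmax(logits)  # frozen reference ("pretrained") policy
    for _ in range(steps):
        probs = softmax(logits)
        token = random.choices([0, 1], weights=probs)[0]
        # KL-penalized reward, the shape used in RLHF objectives:
        #   r = RM(token) - kl_coef * log( pi(token) / pi_ref(token) )
        r = reward_model(token) - kl_coef * math.log(
            probs[token] / ref_probs[token]
        )
        # REINFORCE gradient for a categorical policy:
        #   d log pi(token) / d logit_k = 1[k == token] - probs[k]
        for k in range(2):
            grad = (1.0 if k == token else 0.0) - probs[k]
            logits[k] += lr * r * grad
    return softmax(logits)

probs = rlhf_finetune()
```

After training, the policy concentrates on the token the reward model prefers, while the KL term keeps it from drifting arbitrarily far from the reference policy; real systems apply the same idea with a neural reward model and PPO-style updates instead of plain REINFORCE.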
85 episodes