
ChatGPT

ChatGPT is trained with a method called Reinforcement Learning from Human Feedback (RLHF). This method combines human demonstrations of how the model should respond with human rankings of model outputs from best to worst. In practice, human trainers play both sides of a conversation, i.e., the user and the AI assistant, to produce sample dialogues. When a trainer plays the role of the chatbot, the model generates candidate responses to assist the trainer; the trainer then scores and ranks these candidates. The ranking data is used to train a reward model, which in turn guides fine-tuning of the chatbot through reinforcement learning, and this loop is iterated to continuously improve the model.
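To make the ranking step concrete, here is a minimal sketch of how a reward model can be trained from pairwise preferences, the core idea behind RLHF's reward modeling stage. This is an illustrative toy in PyTorch, not OpenAI's actual implementation: the `RewardModel` class, the embedding dimension, and the random tensors standing in for response embeddings are all assumptions made for the example. The loss follows the standard pairwise ranking objective, pushing the score of the trainer's preferred response above the score of the rejected one.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy reward model: maps a response embedding to a single scalar score."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.scorer = nn.Linear(embed_dim, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        # One scalar reward per response in the batch.
        return self.scorer(response_embedding).squeeze(-1)

def pairwise_ranking_loss(reward_chosen: torch.Tensor,
                          reward_rejected: torch.Tensor) -> torch.Tensor:
    # Train the reward model so that responses the trainer ranked higher
    # receive higher scores: -log sigmoid(r_chosen - r_rejected).
    return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

# Illustrative training step; random tensors stand in for response embeddings.
model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

chosen = torch.randn(8, 128)    # embeddings of responses the trainer preferred
rejected = torch.randn(8, 128)  # embeddings of responses ranked lower

loss = pairwise_ranking_loss(model(chosen), model(rejected))
loss.backward()
optimizer.step()
```

Once trained, a reward model like this scores new responses from the chatbot, and those scores serve as the reward signal for the reinforcement-learning fine-tuning stage described above.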
