YuriAI Guidebook
BACKGROUND

ChatGPT

ChatGPT incorporates a training method called “Reinforcement Learning from Human Feedback (RLHF)”. In this method, humans demonstrate how the model should respond and rank candidate responses from best to worst. In practice, human trainers play both sides of a conversation, the user and the AI assistant, to produce sample dialogues. When a trainer plays the role of the chatbot, the model generates suggested replies to assist the trainer; the trainer then scores and ranks those responses. The resulting rankings are used to train a reward model, and the chatbot is fine-tuned against that reward model with reinforcement learning, iterating the process continuously.
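The ranking step described above is what trains the reward model. A minimal sketch of the standard pairwise-preference objective, assuming a toy linear scorer and illustrative feature vectors in place of real response embeddings (all names here are hypothetical, not OpenAI's implementation):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def reward(weights, features):
    """Toy linear reward model: score = w . features."""
    return sum(w * f for w, f in zip(weights, features))

def train_reward_model(pairs, dim, lr=0.1, epochs=200, seed=0):
    """Fit weights so reward(chosen) > reward(rejected) for each ranked pair.

    pairs: list of (chosen_features, rejected_features) derived from
    human trainers ranking responses from best to worst.
    Minimizes the pairwise loss -log(sigmoid(r_chosen - r_rejected)).
    """
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(dim)]
    for _ in range(epochs):
        for chosen, rejected in pairs:
            margin = reward(w, chosen) - reward(w, rejected)
            # Gradient of -log(sigmoid(margin)) with respect to the margin
            g = sigmoid(margin) - 1.0
            for i in range(dim):
                w[i] -= lr * g * (chosen[i] - rejected[i])
    return w

# Toy preference data: 2-D feature vectors standing in for responses.
pairs = [
    ([1.0, 0.0], [0.0, 1.0]),  # trainer preferred the first response
    ([0.9, 0.1], [0.2, 0.8]),
]
w = train_reward_model(pairs, dim=2)
```

After training, the model assigns the preferred response in each pair a higher score, and that score is what the reinforcement-learning fine-tuning stage then maximizes.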
