YuriAI Guidebook
👁️‍🗨️BACKGROUND

Compared to its predecessor models, ChatGPT also has the following features:



  1. Errors can be acknowledged; if a user points out ChatGPT’s mistakes, ChatGPT will heed the feedback and revise its response.

  2. Incorrect premises can be challenged, reducing false statements; for example, when asked “what happened after Columbus’ arrival in America in 2015”, the chatbot will note that Columbus does not belong to that period and adjust its response accordingly.

  3. Harmful and untruthful responses are significantly reduced, thanks to a training method grounded in moral principles; for example, ChatGPT refuses to answer questions about how to bully others and points out that such a request violates those principles.
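The error-acknowledgement behavior in item 1 comes from the fact that a chat model is re-sent the entire conversation history on every turn, so a user's correction becomes context for the next reply. A minimal sketch of that history-keeping, using the OpenAI-style role/content message format (the `add_turn` helper and the example dialogue are illustrative, not a real API):

```python
# Sketch: a chat conversation is a list of role-tagged messages; because
# the full list is resent each turn, a user's correction is visible to
# the model when it generates its next response.

def add_turn(history, role, content):
    """Append one message to the running conversation history."""
    history.append({"role": role, "content": content})
    return history

history = []
add_turn(history, "user", "Who painted the Mona Lisa?")
add_turn(history, "assistant", "Michelangelo painted the Mona Lisa.")
# The user points out the mistake; since the history is preserved,
# the model can acknowledge the error on the following turn.
add_turn(history, "user", "That's wrong - it was Leonardo da Vinci.")

# Every prior turn, including the correction, is part of the next prompt.
assert len(history) == 3
assert history[-1]["role"] == "user"
```

This is why the model appears to "remember" a correction within a session: the correction is literally part of the input on every subsequent turn.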

In addition, ChatGPT could not exist without big models, big data, and big computing power. Behind ChatGPT's emergence as an AIGC milestone lie large-model training enabled by advances in computing power and the "big data" accumulated in the digital era. ChatGPT, developed by OpenAI, is a fine-tuned model from the GPT-3.5 series with as many as 175 billion parameters, and it finished training at the beginning of this year. The model was pre-trained on big data: the public crawl dataset used by OpenAI contains over a trillion words of natural language. In terms of computing power, GPT-3.5 was trained on Microsoft Azure supercomputing infrastructure, with a total compute consumption of about 3,640 PF-days. Introducing GPT-4 into YURI AI can make the YURI AI gaming ecosystem more user-friendly and intelligent.
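For scale, the quoted 3,640 PF-days can be converted into raw floating-point operations; a PF-day is one petaFLOP/s (10^15 floating-point operations per second) sustained for a full day. A quick back-of-the-envelope check:

```python
# Convert the quoted training compute (3,640 PF-days) into total FLOPs.
PFLOPS = 1e15            # floating-point operations per second in 1 petaFLOP/s
SECONDS_PER_DAY = 86_400

total_flops = 3640 * PFLOPS * SECONDS_PER_DAY
print(f"{total_flops:.2e}")  # roughly 3.14e+23 floating-point operations
```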