Compared with its predecessor models, ChatGPT offers the following features:
Errors can be acknowledged; if a user points out ChatGPT’s mistakes, ChatGPT will take the feedback into account and refine its responses.
Incorrect premises can be challenged, reducing false statements; for example, when asked “What happened after Columbus arrived in America in 2015?”, the chatbot will note that Columbus did not live in that period and adjust its response accordingly.
Harmful and untruthful responses are significantly reduced thanks to a training method that incorporates ethical principles; for example, ChatGPT now refuses to answer questions about how to bully others and points out that such requests violate those principles.
In addition, ChatGPT could not have been built without large models, big data, and massive computing power. Behind ChatGPT’s emergence as an AIGC milestone lie large-model training enabled by advances in computing power and the “big data” accumulated in the digital era. ChatGPT, developed by OpenAI, is a fine-tuned model from the GPT-3.5 series with as many as 175 billion parameters, and it completed training in early 2022. The model was pre-trained on big data: the public web-crawl corpus used by OpenAI contains over a trillion words of natural-language text. In terms of computing power, GPT-3.5 was trained on Microsoft Azure supercomputing infrastructure, consuming a total of roughly 3,640 PF-days of compute. Integrating ChatGPT-4 into YURI AI can make the YURI AI gaming ecosystem more user-friendly and intelligent.
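As a rough illustration of what such an integration could look like, the sketch below sends a player’s question to a chat model through the OpenAI Chat Completions API. The model identifier, system prompt, and helper function are assumptions made for illustration and are not taken from the YURI AI design.

```python
# Minimal sketch of how a game client might query a chat model via the
# OpenAI API. Model name, prompt, and function name are illustrative
# assumptions; they are not part of the YURI AI specification.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_game_assistant(player_question: str) -> str:
    """Send a player's question to the chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model identifier
        messages=[
            {"role": "system",
             "content": "You are an in-game assistant for a gaming ecosystem."},
            {"role": "user", "content": player_question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask_game_assistant("How do I craft a health potion?"))
```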