CHATGPT, TEXT, INFORMATION: CRITICAL ANALYSIS
Abstract
The paper addresses theoretical and practical issues related to large language models, a type of artificial intelligence exemplified by ChatGPT. Particular attention is paid to the spheres of human activity in which the exchange of text-based information matters most: science, education, and journalism (the media sphere).
The experience of user interaction with chatbots is described, and the working principles of large language models are discussed in some detail. This leads the reader to the conclusion that chatbots cannot, and should not be expected to, think in place of a human or produce meaningful, truthful texts that require no careful checking and editing.
The author also substantiates the conclusion that artificial intelligence (at least in the form of large language models) does not imitate human activity but performs activity of a fundamentally different kind.
In the final part of the paper, the author debunks the myth that chatbots can cause irreparable damage to human civilization by spreading misinformation.