CHATGPT, TEXT, INFORMATION: CRITICAL ANALYSIS

Keywords: artificial intelligence, neural network, large language models, LLM, chatbot, disinformation, data hallucination, data poisoning, media, journalism

Abstract

The paper addresses theoretical and practical issues related to a particular type of artificial intelligence, large language models, and ChatGPT in particular. The main focus is on spheres of human activity in which the exchange of information in textual form is of the greatest importance: science, education, and journalism (the media sphere).

The paper describes users' experience of interacting with chatbots and discusses in some detail the working principle of large language models. This leads the reader to the conclusion that chatbots cannot, and should not be expected to, carry out the thinking process in place of a human or produce meaningful, truthful texts that would not require careful checking and editing.

The author also substantiates the conclusion that artificial intelligence (at least in the form of large language models) does not imitate human activity but carries out activity of a fundamentally different kind.

In the final part of the paper, the author debunks the myth that chatbots can cause irreparable damage to human civilization by spreading misinformation.

Author Biography

Marina N. KOMASHKO, National Research University Higher School of Economics, Moscow, Russia

M.N. Komashko — Associate Professor, Department of Digital Law and Bio-Law, Faculty of Law, National Research University Higher School of Economics; Associate Fellow of the UNESCO Chair on Copyright, Neighboring, Cultural and Information Rights at the National Research University Higher School of Economics; Candidate of Legal Sciences

Published
2024-08-23