
Monday, January 23, 2023

The dangers of ChatGPT


Big Brother-style Van Gogh, according to DALL·E (OpenAI)

OpenAI's new chatbot, ChatGPT, has dazzled the world with its ability to write texts (and OpenAI's DALL·E makes images as well). More and more students use it to write their assignments. Schools and universities hit back with special software to detect artificially written texts, since submitting them is seen as fraud. Some Australian universities have even decided that students must write their exams with paper and pencil again instead of on a computer. Artificial Intelligence may make life easier, but it has also brought a new kind of fraud into the world.
Anyway, when you ask a chatbot like ChatGPT to write a text about a certain theme, the program uses information that it gets from a kind of inner library, by searching the Internet, or from some other source. How it does this is not important here, but the fact that it does means that ChatGPT and other chatbots – but let me concentrate here on ChatGPT – function as search engines. That may be nice for users, but not for Google, which fears losing one of its main tasks, and with it a source of income, if others take over its job. This can really happen, for Microsoft is already thinking about investing another ten billion dollars in OpenAI, the owner of ChatGPT. It could lead to the development of ChatGPT into a kind of search engine. The difference is that a traditional search engine like Google presents a list of websites where you can find the information you are looking for, while a ChatGPT-like search engine produces a text that summarizes the contents of such a list of websites (or whatever sources it has used).
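To make this difference concrete, here is a toy sketch in Python. It only illustrates the two kinds of output, not how ChatGPT or Google actually work; the miniature "Internet" and the pages in it are made up for the example.

# Toy illustration of old-style versus new-style search.
# The three pages below are invented for this example; one contains false information.
index = {
    "https://example.org/a": "ChatGPT is a chatbot made by OpenAI.",
    "https://example.org/b": "ChatGPT was released in November 2022.",
    "https://example.org/c": "The philosopher Henk bij de Weg invented ChatGPT in 1919.",  # false
}

def search_old_style(query):
    # A traditional search engine returns a list of websites;
    # the user sees the sources and can judge each of them.
    return [url for url, text in index.items() if query.lower() in text.lower()]

def search_new_style(query):
    # A ChatGPT-like search engine melts the same sources into one text.
    # Here we simply join them; in reality a language model would rewrite
    # them into a fluent summary, which hides the sources even more.
    texts = [text for text in index.values() if query.lower() in text.lower()]
    return " ".join(texts)

print(search_old_style("ChatGPT"))  # a list of sources; true and false are still separable
print(search_new_style("ChatGPT"))  # one text in which true and false are mixed

The point of the toy example is visible in its output: in the old-style list the dubious page can still be recognized and checked, while in the new-style text the false claim has simply been absorbed into the answer.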
At first sight, search-engines-new-style seem to be an interesting improvement on search-engines-old-style. Indeed, they save you a lot of work and you get a text that you can directly copy and paste into the text you are writing. Nevertheless, I think it is a dangerous development, full of pitfalls and Big Brother-like consequences. As we saw in my blog last week, at least in its present state ChatGPT produces texts that are not reliable and can be full of incorrect information. I asked the program four times to write about me and my philosophy and I got four substantially different texts, and most of what they said was false. But how were these texts produced? I have no idea, since no sources were quoted. I must either take the texts as they are or still do my own research (but why then use ChatGPT?). I am afraid that most people will take a text written by ChatGPT as it is and will, for example, believe that there was a philosopher Henk bij de Weg (1919-1991), who was once a professor at the University of Amsterdam; a person who never existed (see my blog last week).
However, even if ChatGPT gave its sources – and some other chatbots do – how would we know why it has selected just these sources? What are the algorithms behind the texts? As said, in my last blog I asked more or less the same question four times and got four different answers. How is that possible? Moreover, everybody knows that there are many websites on the Internet with false or fake information. Sometimes this information is incorrect by mistake; in other cases websites have been made on purpose to spread false information. How can an algorithm know whether the content of a website is false or true, fake or fact?
Then I want to mention a third point. A ChatGPT-like text-writing program can be a dangerous instrument in the hands of a manipulator. Developed as a text-writing search engine in the sense just discussed, it can be a useful instrument for a Big Brother in an Orwellian world. Look, for instance, at what is happening in a dictatorship like Russia, where fake facts are produced and spread by the official media. A search-engine-new-style would be an important extra tool for such a dictatorship. It would also make it easier to change the facts and to rewrite history. In his novel 1984, Orwell describes how official history is continuously rewritten in order to fit new political developments. We see this happening in dictatorships like Russia and China as well. However, it can also happen in more or less democratic societies. Chatbots are owned by private companies that have interests of their own. Sources with unwelcome information may be ignored when search engines produce texts. Worse, only or especially information that promotes the interests of the company may be used. If this happens – and when you look around, you can see that such things often happen – a search engine is no longer a reliable tool but has become an instrument of manipulation.
Much more can be said about the dangers of ChatGPT-like search engines, and on the Internet a lively discussion is going on about this question. The essence of the problem is that such search engines can easily generate biased, false and misleading information that their users take for correct information. For why check it, if it looks true at first sight? Why not believe that there was a philosopher Henk bij de Weg (1919-1991), who was once a professor at the University of Amsterdam, if ChatGPT says so? ChatGPT can develop into a handy instrument for writing reliable texts, but there is still a long way to go before we are that far. And it is certainly not unthinkable that the development of ChatGPT-like search engines will be another step on the road to a Big Brother society.
