Alban ZENELI
Evidence of the use of so-called Artificial Intelligence in public writing is growing daily. Political parties, politicians, analysts, assembly members, candidates, and even the media are using it to generate mostly textual content, mainly for social media.
'Tell me if you want me to reformulate or rephrase this text, to adapt it to…..', 'ChatGPT says:', and similar sentence fragments often 'betray' chatbot users who copy the generated content into their posts. If you read the posts, they seem correct politically, linguistically, and to some extent even in content.
But what is missing that makes this content untrue, insincere, and immoral?
The author is nowhere to be seen. So there is a person who agreed with the content but did not write it. Michel Foucault, in Power/Knowledge, once explained that it is absurd to deny the existence of the individual writer, but here we are in an era in which machines in fact write and people agree with what they write. Without a doubt, this represents a clear change of eras because, as Foucault explains, the role of the author had in fact been transformed even earlier, but certainly not so profoundly that the author was completely absent.
"But I think that – at least since an era – the individual who sets out to write a text on whose horizon a possible work looms, reclaims for his own account the function of author..."
Beyond the question of what the author is, his former role, and the transformation he has undergone up to his disappearance, the author also represents public authority, knowledge, and consequently power. In fact, it is not the author as a person but his discourse that, according to Foucault, represents power through "discourses of truth". He clearly states that no power can be exercised except through the production of truth.
"Power forces us to produce truth, and power itself cannot be exercised except by producing truth. This is a feature of every society, but I have the impression that in our society, the relationship between power, law, and truth is organized in a very special way."
As long as there was an author who wrote the content, no matter how incorrect or inappropriate, the discourse had a real reference to the individual, who, by writing the post, presented knowledge and consequently produced power through his discourse. Today, social media is filled with publications that are not actually written by the individual but still produce discourse, and consequently power. So the production of knowledge and power is real, but the author is false. Since the author is no longer the individual but the machine, this constitutes a false form of communication, and therefore a new form of information disorder. This new form cannot easily be categorized as disinformation, misinformation, or malinformation, since it is first published on social media and does not represent a real product of media outlets that have an editorial process before publication.
On the other hand, Marshall McLuhan, in discussing the meaning of the public message conveyed through media, makes it clear that the medium itself is the message: the person, profile, or media outlet publishing a piece of content is the message. By his formula, the person or public profile of a politician or public figure is itself the message; in AI-generated texts, however, that message is completely absent, since the text is no longer written by the individual, and so the medium is absent as well. From what we have seen so far, content generated with this technology has been published by politicians, media, analysts, "status experts", opinion makers, influencers, and PR officials. In this case, therefore, we are talking about an exercise of the power of knowledge through discourse that is seemingly correct but essentially false, since the author is absent.
Ethics in the use of AI
Although this issue has been regulated in legal and ethical terms in developed countries for years, in Kosovo only two institutions are known so far to have publicly produced a specific code of ethics or updated existing codes to address AI-generated content. The Kosovo Press Council has added an article to its code of ethics stipulating that content generated with AI by journalists and media outlets must be labeled as such; that is, the public must be told which part was generated and with which AI tool. Moreover, even though certain parts may be generated with this technology, responsibility for accuracy and other elements still falls on the journalist and the editorial staff. These practices are also typical of international journalists' organizations and reputable editorial teams. The practices and guidelines make it clear, however, that no AI-generated content may be presented to the public as something written by a human. One journalist experienced firsthand the consequences of using this technology without following ethical guidelines: Alex Preston, whose contract was terminated by the New York Times after he used AI to write a book review.
Meanwhile, in Kosovo, the University of Pristina has also adopted a guide for Generative Artificial Intelligence, which uses the “traffic light model” to clarify what is prohibited, what this technology may assist with, and what is allowed. The aim of the guide is to regulate the use of this technology by staff and students, treating it as an auxiliary tool but not as a substitute for the author of the content. However, no other public institution in Kosovo has yet made any decision on how to handle AI-generated content. Ideally, regulation by public institutions should follow a sectoral approach rather than a centralized one, given that this technology is applied in many sectors. In this regard, the European Union constitutes a good model: it has created a basic act through which it then regulates the various sectors, and it has recently completely banned the use of this technology in official communications.