This report analyses how Large Language Models (LLMs) shape, and occasionally distort, geopolitical narratives, focusing on Kosovo and the Western Balkans within a broader global comparison. While chatbots built on these models enable fast, fluent, and large-scale content generation, they prioritise plausibility over truth, making them susceptible to how user questions are formulated (prompting) and to other forms of user interference.
Through 100 standardised prompts, 13 expert interviews, and an experiment conducted across the Western Balkan countries, the study evaluates factual accuracy, semantic similarity, thematic biases, and the impact of the user's location on the systems' responses.
Analysis of three platforms, ChatGPT (launched in the US), DeepSeek Chat (launched in China), and Alice (launched in Russia), yields the following main findings:
- ChatGPT, DeepSeek Chat, and Alice are not neutral: the responses reflect the ideological structures and data sources upon which they were developed and trained. ChatGPT and DeepSeek Chat offer the highest and most consistent accuracy, although they occasionally produce inaccuracies.
- Alice shows a visible ideological influence, with more refusals to answer, ideological deviations, and shifts into Russian, especially on sensitive topics such as Crimea and Srebrenica.
- The user’s location directly influences the tone, language, and narrative of the models, demonstrating contextual adaptation according to the country from which the request originates.
- In the semantic similarity analysis, ChatGPT and DeepSeek Chat score highly similar to each other, while Alice remains markedly more distant from both.
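The report does not reproduce the exact semantic similarity pipeline here; production analyses of this kind typically use sentence embeddings, but the underlying idea can be illustrated with a minimal bag-of-words cosine similarity sketch (all response strings below are hypothetical placeholders, not actual chatbot outputs from the study):

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts over bag-of-words counts.

    A toy stand-in for embedding-based similarity: 1.0 means identical
    word distributions, 0.0 means no shared vocabulary.
    """
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

if __name__ == "__main__":
    # Hypothetical example responses to the same prompt.
    resp_a = "kosovo declared independence in 2008 recognised by many states"
    resp_b = "kosovo declared independence in 2008 and is recognised widely"
    resp_c = "the status question remains unresolved under international law"
    print(cosine_similarity(resp_a, resp_b))  # high: shared vocabulary
    print(cosine_similarity(resp_a, resp_c))  # low: distinct framing
```

In a full pipeline, each model's answer to the same standardised prompt would be vectorised and pairwise similarities aggregated, allowing a model whose framing diverges (as Alice's does in the findings above) to show up as consistently lower scores.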
The study confirms that LLMs cannot be considered stable, neutral, or independent sources on political and geopolitical issues. It therefore emphasises the need for cross-verification of sources, transparency in model training, and greater media literacy among users.
Citation:
Action for Democratic Society (ADS) / Hibrid.info. (2025). The Chatbot Version of Truth: A Study on the Spread of Narratives Through AI Chatbots. https://doi.org/10.5281/zenodo.18279332