Artificial Intelligence As A Geopolitical Tool
Authors: Marius-Cristian Neacșu, Erdem-Yuneis Eregep and Mihai Diaconescu
JEL: O36, Q55
DOI: 10.24818/EA/2025/68/253
Keywords: generative artificial intelligence, fake news, deepfake, disinformation, war in Ukraine
Abstract:
This study reports exploratory research testing the ability of artificial intelligence (AI) to shape human behaviour, using a recent geopolitical event, the ongoing Russian war in Ukraine, as a test case. Text, images and video were generated with AI; users' perceptions of the AI-generated fake narratives were tracked (for text), and their ability to distinguish synthetic content from real content was tested (for images and video). Methodologically, three generative text models were used, namely ChatGPT, Bing AI and Google Bard, and human perception was assessed through a questionnaire. The results confirmed the ability of AI text-generation models to produce disinformation. Moreover, on average one in ten respondents failed to identify automatically generated disinformation, about half failed to correctly identify an AI-generated image, and more than half had difficulty distinguishing an AI-generated video from the real one.