Artificial Intelligence and Disinformation: Opportunities and Risks in War Conditions

Humanity's attention today is focused on artificial intelligence (AI). OpenAI's decision to open free access to its ChatGPT chatbot, and the flood of images created with Midjourney and other neural networks posted on social media, have brought this tool closer than ever to ordinary Internet users. This has revived discussion of the risks and opportunities that artificial intelligence creates during information wars.

How Artificial Intelligence Helps in Working with Information

AI has great potential for creating and processing content. The Centre for Strategic Communication and Information Security uses AI capabilities to monitor the media space and analyze large arrays of online publications. We are talking about automated tools, in particular the SemanticForce and Attack Index platforms.

AI is among the semantic analysis tools that SemanticForce employs. It helps identify information trends, track how social network users' reactions to information events change, detect hate speech, and more. Another application of neural networks is detailed image analysis, which allows inappropriate or harmful content to be detected quickly.
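The platform's internals are proprietary, but the kind of text screening described above can be sketched with open-source tooling. The snippet below is a minimal illustration, not SemanticForce's implementation; the model names are simply publicly available examples.

```python
# Minimal sketch of AI-assisted content monitoring: sentiment and
# hate-speech scoring of social media posts. NOT SemanticForce's code;
# the models are publicly available examples chosen for illustration.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis",
                     model="cardiffnlp/twitter-roberta-base-sentiment-latest")
hate = pipeline("text-classification",
                model="facebook/roberta-hate-speech-dynabench-r4-target")

posts = [
    "Humanitarian corridors opened this morning, volunteers needed.",
    "These people deserve everything that is happening to them.",
]

for post in posts:
    s = sentiment(post)[0]  # e.g. {'label': 'negative', 'score': 0.93}
    h = hate(post)[0]
    print(f"{post[:40]!r}: sentiment={s['label']} ({s['score']:.2f}), "
          f"hate={h['label']} ({h['score']:.2f})")
```

Run over a stream of posts, such scores can then be aggregated to surface the trend shifts and hate-speech spikes mentioned above.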

Attack Index uses machine learning (assessing message tonality, ranking sources, forecasting how information dynamics will develop), cluster analysis (automated grouping of text messages, detection of plots, formation of story chains), computational linguistics (identifying set phrases and narratives), the construction, clustering, and visualization of semantic networks (determining connections and nodes, building cognitive maps), and correlation and wavelet analysis (detecting information operations).
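Again, the platform's own methods are not public. As a rough sketch of just the clustering step, grouping similar messages into candidate plots can be approximated with standard libraries; everything below (the data, features, and cluster count) is an illustrative assumption.

```python
# Rough sketch of automated message clustering, one of the Attack Index
# techniques named above. Not the platform's actual code: a generic
# TF-IDF + k-means grouping of short texts into candidate "plots".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

messages = [
    "Air defense repelled a drone attack on the capital overnight",
    "Overnight drone attack on Kyiv repelled, officials say",
    "New grain corridor agreement reached for Black Sea ports",
    "Black Sea grain deal extended after negotiations",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(messages)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    print(f"Plot {cluster}:")
    for msg, lab in zip(messages, labels):
        if lab == cluster:
            print("  -", msg)
```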

Using AI, the available tools make it possible to distinguish organic from coordinated content distribution, detect automated spam distribution systems, assess the influence that different social network accounts have on the audience, tell bots apart from real users, and more.
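As a toy example of what "coordinated distribution" can look like in data (a heuristic sketch of ours, not any platform's actual algorithm), one simple signal is near-identical posts from different accounts within a short time window:

```python
# Illustrative heuristic for coordinated-distribution detection: flag
# pairs of accounts posting near-duplicate text within minutes of each
# other. The window and similarity threshold are assumed values.
from datetime import datetime, timedelta
from difflib import SequenceMatcher
from itertools import combinations

posts = [
    ("acct_a", datetime(2023, 4, 1, 12, 0), "The West has abandoned you, resistance is pointless"),
    ("acct_b", datetime(2023, 4, 1, 12, 3), "The West abandoned you - resistance is pointless!"),
    ("acct_c", datetime(2023, 4, 1, 18, 45), "Volunteers are collecting aid at the station today"),
]

WINDOW = timedelta(minutes=10)
SIMILARITY = 0.8

for (a1, t1, x1), (a2, t2, x2) in combinations(posts, 2):
    close_in_time = abs(t1 - t2) <= WINDOW
    similar = SequenceMatcher(None, x1.lower(), x2.lower()).ratio() >= SIMILARITY
    if close_in_time and similar:
        print(f"Possible coordination: {a1} and {a2} posted near-duplicates")
```

Production systems combine many such signals (posting cadence, account age, shared infrastructure) rather than any single rule.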

These tools can be used to detect disinformation, analyze disinformation campaigns, and develop responses and countermeasures.

AI's Potential to Create and Spread Disinformation

Neural networks are improving their ability to create graphic, textual, and audiovisual content almost daily, and the quality of that content will only grow given the capabilities of machine learning. For now, Internet users treat popular neural networks more as a toy than as a tool for creating fakes.

However, there are already examples of neural-network-generated images that not only went viral but were also perceived by users as real, in particular the images of “a boy who survived a missile strike in Dnipro” and “Putin greeting Xi Jinping on his knees.”

These examples clearly demonstrate that images created by neural networks already compete with real ones in vividness and emotional impact, and this will certainly be exploited for disinformation.

A study by the analytical centre NewsGuard, conducted in January 2023, found that the popular chatbot ChatGPT can generate texts that develop existing conspiracy theories and weave real events into their context. This creates the potential for automated distribution (via bot farms) of large volumes of messages whose topic and tone are set by a human while the text itself is generated by AI. Already today, given appropriately formulated prompts, the bot can produce disinformation messages, including ones built on the narratives of Kremlin propaganda. Countering the spread of artificially generated false content is a challenge we must be prepared to answer now.
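One direction for such countermeasures (our illustration; it is not a method named in the NewsGuard study) is statistical screening for machine-generated text, for example scoring how predictable a text is to a language model. The sketch below uses GPT-2 perplexity; low perplexity is only a hint that text is machine-written, not proof.

```python
# Hedged sketch: perplexity-based screening for machine-generated text.
# Text that a language model finds highly predictable (low perplexity)
# MAY be machine-written. Illustrative only; no fixed threshold exists.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy
    return float(torch.exp(loss))

sample = "The government confirmed the new policy in a statement on Monday."
print(f"perplexity={perplexity(sample):.1f}")  # lower = more 'model-like'
```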

Use of AI in War: What to Expect from Russians

Russia's special services have extensive experience in using photo and video editing to create fakes and conduct psychological operations, and they are actively mastering AI. Deepfake technology, which is based on AI, was used in particular to create the fake video address in which President Zelenskyy supposedly announced surrender, pushed into the information space in March 2022.

Given the poor quality of this “product” and the prompt reaction of state communicators, of the president, who personally refuted the fake, and of journalists, it did not take hold. The video achieved its goal neither in Ukraine nor abroad. But the Russians are obviously not going to stop.

Today, the Kremlin uses a huge number of tools to spread disinformation: television, radio, websites, and propagandist bloggers who generate and promote content on Telegram, YouTube, and social networks.

AI has the potential to be used primarily for creating photo, audio, and video fakes, as well as for bot farms. AI can replace a significant share of the personnel in Russian “troll factories”: the Internet warriors who provoke conflicts on social networks and create the illusion of mass user support for Kremlin narratives.

Instead of “trolls” writing comments according to prepared guidelines, AI can do this work from keywords and the vocabulary fed to it. It is the influencers mentioned above (politicians, propagandists, bloggers, conspiracy theorists, etc.), not nameless bots and Internet trolls, who have the decisive influence on a loyal audience. With AI, however, the weight of the latter can be increased through sheer quantity and through “fine-tuning” for different target audiences.

In 2020, the Ukrainian government approved the Concept for the Development of Artificial Intelligence. This framework document defines AI as a computer program, so the use of AI is legally regulated in the same way as any other software product. In other words, there is as yet no AI-specific legal regulation to speak of.

The development of AI is outpacing both the creation of safeguards against its unfair and malicious use and the formulation of policies to regulate it.

Therefore, the cooperation of Ukrainian government agencies with Big Tech companies in countering the spread of disinformation and in identifying and eliminating bot farms should only deepen. Both our state and the world's technology giants have a stake in this.

Centre for Strategic Communication and Information Security
