Category: Civic Participation

How Does AI Exacerbate Disinformation and Pose a Challenge for Individuals With Limited Digital Literacy Skills?

19/10/2025 

Main author: William Cherry

Contributing researcher and editor: Dr Mikael Leidenhag

 

Background

In 2025, there were an estimated 5.41 billion social media users worldwide (Datareportal, 2025). It would not be controversial to suggest that social media has become ingrained in the lives and routines of billions of users around the world, and its role has shifted drastically over the past ten to fifteen years. Where it was once a platform for sharing experiences, photos, music and messages, it has now become a source of news and information for many.

 

This transformation of social media into a source of news and information has coincided with the rise of two phenomena that together pose a grave threat to society and democracy: disinformation and Artificial Intelligence (AI). This report discusses the ways in which AI exacerbates the problem of disinformation through algorithmic bias, deepfakes, bot accounts, and generative AI-created news and websites. It then discusses how these threats pose a problem for digital users with limited media and digital literacy skills, and finally how these challenges can potentially be overcome.

 

What is Disinformation?

Disinformation is not an umbrella term, and it cannot be used interchangeably with misinformation or ‘fake news’; it is therefore important to define these terms and briefly outline how they are used. Disinformation is the creation and dissemination of posts, websites, messages or news articles (among other things) with the specific intention to deceive readers or to distort information in strategic ways (Spitzberg, 2025). Misinformation differs in that the sender shares the information without any intention to deceive (Spitzberg, 2025). Misinformation is often the byproduct of disinformation, disseminated by those who have fallen for it: those posting disinformation rely on individuals believing the false or misleading information and sharing it amongst their social networks, giving the disinformation still wider exposure.

With the number of social media users increasing every year, disinformation networks can reach larger audiences than ever before (Datareportal, 2025). In recent years, disinformation actors have capitalised on this reach and monetised it, most commonly through advertisements placed within or underneath their posts, or through the sale of products and merchandise based on the disinformation they promote. One of the most notable examples is Alex Jones and his website InfoWars, which between 2015 and 2018 made $165 million through the sale of health products and merchandise based on misinformation about vaccines and fluoride in US water systems (Moran et al., 2024).

Moreover, disinformation and misinformation shared across social media can lead people to carry out criminal acts. Two recent examples demonstrate this. The first is the January 6 riot in Washington, D.C., built on months of disinformation from then-outgoing President Donald Trump and his supporters claiming that the 2020 presidential election had been ‘stolen’; it led to the arrest of 1,583 people and the conviction of 1,270 (Parloff, 2025). The second comes more recently from the United Kingdom and the 2024 Southport riots. Following the murders of three children, thousands of disinformation posts appeared online about the immigration status and background of the suspect, leading to violent demonstrations in 27 towns and cities across the UK. During these riots, mosques and hotels housing asylum seekers were attacked, and 1,280 people were arrested (Downs, 2024).

 

These examples highlight the financial and social impact that disinformation can have on societies across the world. The seriousness of the problem has been exacerbated by the development of, and increased access to, AI in recent years.

 

Rise of AI and Generative AI

AI stands for artificial intelligence and is used for a wide range of purposes across many industries and sectors. A full survey of AI's uses falls beyond the scope of this report, which focuses instead on the increased use of generative AI (GenAI) and large language models (LLMs) within the sphere of online disinformation and misinformation. GenAI is software capable of creating computer-generated multimedia content that simulates the characteristics of human-generated content purely from a set of prompts (Jaidka et al., 2025). When analysing the effects of GenAI on disinformation and misinformation, three factors make GenAI critical. The first is its capacity to create convincing fake content: false information can be disguised as a legitimate news source through the use of AI, despite the content being misleading or entirely false (Jaidka et al., 2025).

Secondly, GenAI software has become vastly more accessible to online users, and the amount of disinformation that can be generated through AI has therefore increased rapidly in recent years. Prior to tools such as ChatGPT or Grok, AI was limited to those who could program software or understood the languages and code needed to use it. Today, anyone can generate content on sites such as ChatGPT, and there are many free-to-use services that can generate multimedia content. The ability to distinguish between human and AI-generated content is only going to become harder in the coming years (Jaidka et al., 2025).
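To illustrate how low this barrier to entry has become, the sketch below uses the OpenAI Python client to produce news-style text from a single prompt. It is a minimal, hedged example: the model name and prompt are illustrative assumptions rather than a prescription, and any comparable GenAI service would behave similarly.

```python
# pip install openai
# Minimal sketch: generating news-style text from a single prompt.
# Assumes an API key in the OPENAI_API_KEY environment variable;
# the model name below is illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Write a 100-word local news item about a road closure.",
    }],
)
print(response.choices[0].message.content)
```

One prompt, no programming knowledge beyond running a script, and the output already reads like a plausible news item; at scale, the same loop can produce thousands of variations.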

 

Finally, GenAI and LLMs are built, or ‘trained’, on billions of data points harvested from across the internet, and AI developers often present this scale as assurance that the data used is true and factual. Despite this, there are numerous cases of training data being flawed or deliberately flooded with biased material by malicious actors, such as the case of a Moscow-based group flooding the training data of an AI model in order to make pro-Kremlin propaganda more prominent within the model's outputs (Sadeghi and Blachez, 2025). Flawed or biased examples within the training data will have an impact on the output of LLMs and GenAI software (Saendia et al., 2025).
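The mechanism behind such data poisoning can be shown with a deliberately simplified toy model. The sketch below is a hedged illustration, not how any production LLM works: it counts word frequencies over a clean corpus and over the same corpus flooded with a repeated false claim, showing how the flood shifts what a frequency-driven model is most likely to reproduce.

```python
from collections import Counter

def train_unigram(corpus: list[str]) -> Counter:
    """Toy 'model': bare word frequencies. Real LLMs are far more
    sophisticated, but the principle (flooded text skews learned
    associations) is the same."""
    counts: Counter = Counter()
    for doc in corpus:
        counts.update(doc.lower().split())
    return counts

# Hypothetical corpora, for illustration only.
clean = ["the election was audited and certified"] * 100
flood = ["the election was stolen"] * 300  # repeated, planted claim

honest = train_unigram(clean)
poisoned = train_unigram(clean + flood)

print(honest["audited"], honest["stolen"])      # 100 0
print(poisoned["audited"], poisoned["stolen"])  # 100 300
# After flooding, the planted claim outweighs the accurate one, so a
# frequency-driven model becomes more likely to reproduce it.
```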

All of this combines to create what Jaidka et al. (2025, p.2) refer to as the ‘digital information paradox’: the ability to access and share information has never been greater, yet trust in the news, media and information being consumed has never been lower. The expansion of GenAI has contributed greatly to both sides of this paradox.

Having outlined the concepts of misinformation and disinformation and examined the rise of AI, the report now turns to how AI and GenAI exacerbate both phenomena within the digital sphere, before considering the impact on those with limited digital literacy skills.

 

How Does AI Exacerbate Disinformation and Misinformation?

Before examining this exacerbation, it is important to understand how online users judge the validity and truthfulness of new information when they encounter it online. Sindermann et al. (2020) propose five key criteria that individuals deploy when judging the accuracy of new information: the acceptance of the information by others, the amount of supporting evidence, the compatibility of the information with their own beliefs, the general coherence of the information, and the credibility of the source.

As previously mentioned, GenAI can be used to replicate, manipulate and create pages, news articles and multimedia content that appears genuine and is therefore initially accepted as legitimate by the reader. It often takes a trained eye to spot the textual or visual cues that reveal AI-generated material, cues that those without such skills may fail to pick up. AI can also be used to create deepfakes: digitally altered or manipulated media (video or soundbites) that show people saying or doing things that never happened in the real world (Mustak et al., 2023). One of the most notable examples was a video shared online that appeared to show London Mayor Sadiq Khan stating that pro-Palestinian marches would take precedence over Armistice Day celebrations in the city. Khan suggested the clip, which was entirely AI-generated, was almost the cause of “serious disorder” in the city (Spring, 2024).

 

Furthermore, ‘bot’ accounts are increasingly used on sites such as X (formerly Twitter) to boost engagement on disinformation posts and make it appear as though thousands of other accounts agree with the information being posted. During the 2016 EU referendum in the UK, it was estimated that up to a third of all traffic on Twitter (now X) came from bot accounts (Susskind, 2020). As noted at the start of this section, users tend to accept posts more readily when they see that others agree. Using bot accounts to boost engagement gives posts the appearance of widespread support, despite that support coming from fake, AI-powered accounts.
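The arithmetic of this amplification is worth making explicit. The numbers in the toy calculation below are hypothetical, chosen only to show how a modest bot network can dominate the ‘acceptance by others’ signal readers rely on.

```python
# Hedged toy illustration of bot-driven amplification; all figures hypothetical.
organic_likes = 300            # endorsements from real users
bots, likes_per_bot = 50, 40   # 50 automated accounts, 40 likes each
bot_likes = bots * likes_per_bot

total = organic_likes + bot_likes
print(f"displayed support: {total}")                 # 2300
print(f"share from bots:  {bot_likes / total:.0%}")  # 87%
# A reader sees 2,300 endorsements; almost nine in ten are synthetic.
```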

 

AI is also used to shape the algorithms that curate the personalised feed of content shown to each social media user. These algorithms are trained to surface posts with high levels of engagement and interaction (as a proxy for popularity) and posts related to content the user has interacted with previously (Surawy Stepney and Lally, 2024). This can lead to the creation of echo chambers within social media networks and on sites such as X, Instagram, Facebook and TikTok. Jamieson and Cappella (2008, p.76) describe an echo chamber as “a bounded, enclosed media space that has the potential to both magnify the message delivered within it and insulate them from rebuttal […] this creates a common frame of reference and a positive feedback loop for those listening to, reading and watching the media”. When social media users interact with disinformation posts, see that they have thousands of interactions (largely from bot accounts) and share them with like-minded users, they create echo chambers that reinforce the disinformation they have consumed and prevent them from critically analysing and evaluating the information. This also leads them to reject genuine information, especially when it comes from a source they no longer trust as a result of consuming disinformation, such as the police, government or traditional media (Surawy Stepney and Lally, 2024).
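This feedback loop can be made concrete with a small sketch. The snippet below is a hedged toy model, not any platform's actual ranking system: it scores posts by a weighted mix of engagement (which bots can inflate) and similarity to the user's interaction history, the two signals discussed above, and shows how a heavily ‘engaged’ post on a familiar topic rises to the top of a feed.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagements: int  # likes, shares, replies; bot accounts inflate this
    topic: str

def feed_score(post: Post, topic_history: dict[str, int],
               w_pop: float = 0.7, w_sim: float = 0.3) -> float:
    """Toy ranking: weighted mix of popularity and similarity to the
    user's past interactions (weights are illustrative assumptions)."""
    popularity = post.engagements / (post.engagements + 100)  # saturates at 1
    total = sum(topic_history.values()) or 1
    similarity = topic_history.get(post.topic, 0) / total
    return w_pop * popularity + w_sim * similarity

# A user whose history is dominated by one topic (hypothetical data):
history = {"immigration": 40, "sport": 5}
posts = [
    Post("viral rumour", engagements=5000, topic="immigration"),  # bot-boosted
    Post("fact check", engagements=150, topic="politics"),
]
for p in sorted(posts, key=lambda p: feed_score(p, history), reverse=True):
    print(round(feed_score(p, history), 2), p.text)
# The bot-boosted, on-topic rumour (~0.95) outranks the fact check
# (~0.42), so the feed keeps reinforcing what the user already engages with.
```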

 

Disinformation actors armed with GenAI can thus exploit each of the five criteria used to assess information when consuming news on social media, as is commonly done today. They can fabricate news entirely, make it appear widely accepted, and reinforce the disinformation through echo chambers and personalised algorithms. It becomes clear, then, that more needs to be done to tackle the issue and prevent people from committing illegal or dangerous acts as a result of disinformation.

 

The Impact of Disinformation on Users with Limited Digital Literacy Skills

The biggest challenge that AI-driven disinformation poses for online users who lack digital literacy skills is discerning what is AI-generated and what is genuine, authentic information. With recent advances in AI technology, distinguishing a real video from a deepfake, or a generated news article from a genuine piece, comes down to minute details that are rarely obvious at first glance. As previously stated, disinformation can lead to serious real-world consequences, such as January 6, vaccine scepticism and the Southport riots. It is therefore essential to understand how and why more and more people are becoming susceptible to disinformation, and how this can be slowed and eventually prevented.

 

Traditional digital literacy frameworks only partially address AI-era complexities such as deepfakes, algorithmic bias and echo chambers, and they contain little in the way of solutions to these problems (Baskar, 2025). Users who lack digital literacy skills often lack the critical-analysis skills needed to evaluate the provenance, origin, evidence and validity of information they read and share online. Disinformation networks rely on this gap: readers share information without first properly evaluating and critically judging it for themselves, often spreading partially or completely fabricated claims amongst their followers and social networks. Through such sharing, users gradually build echo chambers. The positive feedback they receive from others about the information they share slowly erodes their trust in traditional and vital institutions that do not repeat the disinformation (the BBC, government, police and electoral institutions), which can stoke societal divisions or incite violence and rioting once those institutions are no longer trusted (DCMS, 2021). In turn, these users seek out alternative sources of information, sources that may be deliberately trying to deceive them for financial, political or malicious ends.

 

Policy Recommendations

  • From a user and digital literacy perspective, there must be more focus on educating users to analyse the sources of the information they consume online: the evidence and sources used within the information, who the author is, the website or company posting it, and the style and coherence of the writing. This education can be delivered in schools (for example, in IT lessons) or through training courses for professionals across a wide range of industries and sectors. It is essential that such training involves hands-on, practical use of AI itself, assessing outputs and spotting potential biases and flaws in what AI and GenAI software produces (Baskar, 2025).


  • Within this education and training, it is vital to cover the implications of AI from both a misinformation perspective and a broader societal one, noting that AI can be used to stoke tensions and also has a profound impact on global warming and resource consumption.


There is clearly more that social media platforms can do to prevent misinformation from having real-world consequences. Increasingly, AI itself is being used to reverse-engineer posts and distinguish human-generated from AI-generated content (Papageorgiou et al., 2024). These detection systems are vital for maintaining the integrity of traditional news sources and preventing disinformation from spreading unchecked across social media. There have been attempts to watermark AI-generated text and multimedia, but users often find ways to remove such watermarks from images and videos before posting them (Jaidka et al., 2025). Governments could therefore introduce more punitive measures against platforms that host misinformation without taking action against those posting it; fining these companies may push them to take more decisive action against those spreading misinformation, though many argue this would spark free-speech debates.
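As a concrete illustration of such detection, the sketch below runs a publicly available AI-text classifier through the Hugging Face transformers library. It is a hedged example, not any platform's production system: the checkpoint named is one older, openly released detector (assumed to remain available), and its verdicts are probabilistic signals rather than proof, which is precisely why detection alone cannot solve the problem.

```python
# pip install transformers torch
# Hedged sketch of AI-text detection. The checkpoint below is an openly
# released detector; its scores are indicative signals, not conclusive
# proof of a text's provenance.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

sample = "Officials confirmed today that the annual festival will proceed."
print(detector(sample))
# e.g. [{'label': 'Fake', 'score': 0.91}] -- for this checkpoint,
# 'Fake' means machine-generated and 'Real' means human-written.
```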

 

Conclusion

As AI becomes more accessible, the number of AI-generated posts, images, videos and articles will only increase. The problem of AI-generated misinformation needs urgent attention before the technology becomes too advanced to control. New legislation, education and awareness are needed to equip online users with the skills and information to identify, evaluate and report disinformation created through AI, thereby preventing misinformation from leading to dangerous consequences for individuals, groups and society as a whole.

 

References

Baskar, F.R. (2025). Conceptualising Digital Literacy for the AI Era: A framework for preparing students in an AI world. Data and Metadata, 4(530). [https://repository.usd.ac.id/52623/1/12254_DM_2025_530.pdf]. (Accessed: 28/07/25).

 

Department for Digital, Culture, Media and Sport (DCMS) (2021). Online Media Literacy Strategy. London: DCMS. [https://www.gov.uk/government/publications/online-media-literacy-strategy]. (Accessed: 05/08/25).

 

Datareportal (2025). Global Social Media Statistics. [https://datareportal.com/social-media-users]. (Accessed: 11/09/25).

 

Downs, W. (2024). Policing Response to the 2024 Summer Riots. House of Commons Library. [https://commonslibrary.parliament.uk/policing-response-to-the-2024-summer-riots/]. (Accessed: 10/09/25).

 

Jaidka, K., et al. (2025). Misinformation, Disinformation and Generative AI: Implications for Perception and Policy. Digital Government: Research and Practice, 6(1), pp.1-15. [https://dl.acm.org/doi/full/10.1145/3689372]. (Accessed: 18/09/25).

 

Jamieson, K.H. and Cappella, J.N. (2008). Echo Chamber: Rush Limbaugh and the Conservative Media Establishment. Oxford: Oxford University Press.

 

Moran, R.E., Swan, A.L. and Agajanian, T. (2024). Vaccine Misinformation for Profit: Conspiratorial wellness influencers and the monetisation of alternative health. International Journal of Communication, 18(2024), pp.1202-1224. [https://ijoc.org/index.php/ijoc/article/view/21128/4494]. (Accessed: 08/08/25).

 

Mustak, M., Salminen, J., Mäntymäki, M., Rahman, A. and Dwivedi, Y.K. (2023). Deepfakes: Deceptions, mitigations, and opportunities. Journal of Business Research, 154, 113368. [https://www.sciencedirect.com/science/article/abs/pii/S0148296322008335]. (Accessed: 10/09/25).

 

Papageorgiou, E., et al. (2024). A Survey on the Use of Large Language Models (LLMs) in Fake News. Future Internet, 16(8), 298. [https://www.mdpi.com/1999-5903/16/8/298]. (Accessed: 30/07/25).

 

Parloff, R. (2025). ‘The High-Water Mark of the Jan. 6 Prosecutions’. Lawfare, 6 January. [https://www.lawfaremedia.org/article/the-high-water-mark-of-the-jan.-6-prosecutions]. (Accessed: 05/08/25).

 

Sadeghi, M. and Blachez, I. (2025). A Well-Funded Moscow-Based Global ‘News’ Network Has Infected Western AI Tools Worldwide with Russian Propaganda. NewsGuard. [https://www.newsguardtech.com/special-reports/moscow-based-global-news-network-infected-western-artificial-intelligence-russian-propaganda/]. (Accessed: 28/07/25).

 

Saendia, H.R., Hosseini, E., Lund, B., et al. (2025). Artificial Intelligence in the Battle Against Disinformation and Misinformation: A systematic review of challenges and approaches. Knowledge and Information Systems, 67, pp.3139-3158. [https://link.springer.com/article/10.1007/s10115-024-02337-7]. (Accessed: 18/08/25).

 

Sindermann, C., Cooper, A. and Montag, C. (2020). A short review on susceptibility to falling for fake political news. Current Opinion in Psychology, 36, pp.44-48. [https://www.sciencedirect.com/science/article/abs/pii/S2352250X20300439]. (Accessed: 18/09/25).

 

Spitzberg, B.H. (2025). The Four Horsemen: Disinformation, Misinformation, Fake News and Pseudoscience. Prologi – Journal of Communication and Social Interaction, 21(1), pp.40-52. [https://journal.fi/prologi/article/view/155840]. (Accessed: 18/09/25).

 

Spring, M. (2024). ‘Sadiq Khan says fake AI audio of him nearly led to serious disorder’. BBC News, 13 February. [https://www.bbc.co.uk/news/uk-68146053]. (Accessed: 11/09/25).

 

Surawy Stepney, E. and Lally, C. (2024). Disinformation: Sources, Spread and Impact. POSTnote 719. [https://researchbriefings.files.parliament.uk/documents/POST-PN-0719/POST-PN-0719.pdf]. (Accessed: 28/07/25).

 

Susskind, J. (2020). Future Politics: Living Together in a World Transformed by Tech. Oxford: Oxford University Press.