UoG senior lecturer warns of AI-based knowledge ‘pollution’

A University of Gloucestershire senior lecturer has undertaken research into the growing risk Artificial Intelligence (AI) poses to the integrity of human knowledge, and how developing AI literacy can protect people from manipulation.

When engaging with online information, it is becoming increasingly difficult to tell fact from fiction, truth from lie and human from machine – and this has serious implications for society, according to Dr Richard Cook, a researcher and senior lecturer in Cyber Security at UoG.

Dr Cook’s paper, ‘The Social Construction of Nonsense’, presented at the “Minds and Machines: Artificial intelligence, Algorithms, Ethics and Order in Global Society” conference, outlines how anonymous AI systems are generating misinformation, disinformation and mal-information. It warns of the ramifications for social order and democracy when AI takes on the role of creator and author of information.

Dr Cook, from the School of Business, Computing and Social Sciences, said: “Our information ecosystem is being polluted by ‘nonsense’ – meaning that has been knowingly messed with – which is often mistaken for knowledge.

“Nonsense can be thought of in the same way as plastics in the seas – it is a form of pollution of human knowledge. AI is playing an active role in creating nonsense and it is sometimes intentional.”

Rather than focusing on apocalyptic visions of AI takeovers, Dr Cook has been researching the harm AI already presents. He warned that a human “truth bias” – a natural tendency to believe what we see or hear – is increasingly being exploited by AI. Sources ranging from virtual influencers spreading unverified claims to unregulated podcasts that amplify biases or misleading fringe perspectives all threaten the veracity of knowledge.

Dr Cook commented: “The more human-like AI appears, the more convincing it becomes – often irrespective of factual accuracy. Alongside this, natural human tendencies towards ‘parasocial intimacy’ and ‘social proof’ intersect to create a powerful illusion of credibility in AI that opens us up to manipulation, deception and interference. Over time, this could weaken our shared understanding of reality, as people unknowingly accept untruths and act upon AI-generated nonsense.”

Dr Cook is calling for stronger tools to assess the “health” of AI systems, similar to warning labels on food. He also pointed to the potential geopolitical risks of nonsense when it is weaponised by nation states.

As AI moves rapidly towards being able to think and act on its own without human intervention, Dr Cook believes it is increasingly important for people to develop AI literacy that enables them to scrutinise the information they consume and avoid being ‘influenced’, persuaded or exploited by AI.

By assuming that some information online might be intentional ‘nonsense’ rather than human-authored, and by seeking out verified human perspectives, people can take the first step towards a healthy scepticism about the information they consume on the internet.

“AI development moves faster than regulation,” Dr Cook added. “There are currently no enforceable guidelines requiring AI developers to be accountable. Without scrutinising what is happening now and taking individual action, the line between truth and falsehood may soon disappear altogether, with serious implications for society and for the canon of knowledge we leave behind in the longer term.”