AI lies – cyber expert warning on growing misinformation threat

While Artificial Intelligence (AI) is increasingly doing many things better than humans, a cyber expert at University of Gloucestershire is warning that this includes its ability to ‘lie’ and spread misinformation.

Sepideh Mollajafari (pictured below), a lecturer in cyber security at the University, says that although ChatGPT, Google Bard and other large language models (LLMs), commonly known as ‘chatbots’, offer human-like conversation to help with tasks like writing emails, essays and computer code, the content itself can be wildly inaccurate.

Sepideh’s ongoing analysis is echoed by new findings from Deloitte, which indicate that 26 per cent of UK adults, around 13 million people aged 16 to 75, have already used generative AI, with one in 10 also using it for work.

(Pictured: Sepideh Mollajafari)

More than four in 10 people believe AI chatbots always produce factually accurate answers, even though these systems are prone to errors and raise other concerns.

Warning the public to be cautious about placing their full trust in this new and rapidly evolving technology, Sepideh added: “Chatbots are often promoted as tools that will transform our personal and working lives.

“While there’s some truth to this, the hype around these powerful AI products can be misplaced if chatbots still produce inaccurate information, while worryingly looking like they’re telling the truth.

“These systems take in huge amounts of human-created data and then look for statistical similarities to link words together, ultimately predicting what comes next in a sentence. The result is a machine that persuasively mimics human language, but doesn’t think like a human.
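The statistical prediction Sepideh describes can be illustrated with a deliberately simplified sketch. The toy corpus and function below are illustrative assumptions, not how a real LLM is built (real models use neural networks over vast datasets), but they show the core idea: the next word is chosen from statistical patterns in the training text, with no understanding of truth.

```python
from collections import Counter, defaultdict

# Toy illustration only, not a real LLM: tally which word follows which
# in a tiny corpus, then "predict" the statistically likeliest next word.
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

# Build bigram counts: for each word, record the words observed after it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" - the only word ever seen after "sat"
```

The sketch makes the key point concrete: the program produces fluent-looking continuations purely from word statistics, so it will confidently emit whatever pattern is most common, accurate or not.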

“Our students are now taking on these challenges by learning how AI chatbots work, and how to use them effectively and ethically.

“Our own University of Gloucestershire policy on AI notes that students should always ‘act with academic integrity’ and also acknowledges that ‘while text generative AI services can be useful aids to study and can be used in classes by tutors, it is an offence to misrepresent AI-generated content as your own work.’

“AI chatbots are continuing to raise questions about how we all relate to and work with machines which can be highly effective at spreading misinformation.

“The public and students need to be on their guard and know how to use these tools critically, rather than assuming they will always give the right answers.”