18/03/2026

Don’t be fooled: eight ways to spot AI-generated images

If an image aligns with a user’s existing beliefs or emotional experiences, they are more likely to accept it as authentic. However, images generated by artificial intelligence can be identified even without technical tools, simply by interpreting tell-tale visual cues, according to a new study by Corvinus University of Budapest.

Márk Miskolczi, a researcher at Corvinus, set out to explore why social media users believe images that are in fact created by AI. The study was published in the March issue of Computers in Human Behavior. 

The findings show that AI-generated images are particularly effective when they appeal to emotions and simultaneously activate multiple cognitive biases, such as confirmation bias, anchoring, familiarity bias or groupthink. This creates a stronger and more difficult-to-resist persuasive effect than simple misinformation. Users often rely on mental shortcuts, known as cognitive heuristics, when quickly interpreting online images. However, this increases the risk of manipulation, as emotional impact can override critical thinking and lower users’ vigilance. 

The researcher first identified 146 images that were highly likely to have been generated by AI and analysed more than 9,000 related Facebook comments. The images were filtered based on visual warning signs, such as an unusual number of fingers, unreadable text, floating objects or unrealistically smooth textures, and then verified using an online AI detection tool.

How to recognise AI-generated images 

The following eight visual criteria can indicate that an image is AI-generated, even without algorithmic assistance.

“AI-generated images on social media are masters of emotional manipulation. The spread of images that appear real but are artificially created can, over time, undermine trust in both online platforms and artificial intelligence itself. This makes regulation urgent. For example, it would be essential to require clear labelling of AI-generated content on social media. At the same time, digital literacy needs to be strengthened across all generations, including learning how to apply the eight criteria for identifying AI-generated images,” said Márk Miskolczi, author of the study. 

Misleading nostalgia and comforting familiarity 

The most common themes of AI-generated images were nostalgia and emotionally engaging stories. Scenes often depicted elderly people, rural life or family relationships. Images featuring children in difficult situations, forgotten birthdays or unusual hobbies were also popular. Religious and spiritual motifs frequently appeared as well. 

For example, nostalgic themes (such as anniversaries or rural lifestyles) tend to exploit confirmation bias by triggering positive associations, while anchoring amplifies the initial emotional response. Most users accepted these images as authentic. Reactions typically expressed empathy, nostalgia or admiration, with many users sharing personal stories or offering comfort to those depicted. 

The study also found that automated accounts often appear around artificial content. These accounts reinforce the perceived authenticity of posts with inspirational messages, religious greetings or birthday wishes, which can further increase user engagement. Even among users who are sceptical about the authenticity of the images, many still interpret these bot-generated comments as human. 

Comments are likely to be AI-generated if they show patterns such as overly frequent or rapid posting, activity at unusual times, identical behaviour across multiple pages or groups, repetitive or generic phrasing that ignores context, lack of personalised interaction, vague or suspicious profile information, mismatched profile pictures, and a low number of friends or followers. 
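As a rough illustration (not part of the study), the warning patterns above could be combined into a simple heuristic checklist. The field names and thresholds below are invented for this sketch; the study itself does not specify a scoring method:

```python
# Illustrative sketch only: count how many bot-like warning signs an
# account exhibits, following the patterns listed above. All field
# names and thresholds are assumptions made for this example.

def bot_likelihood_score(account: dict) -> int:
    """Return the number of bot-like warning signs present (0-8)."""
    flags = [
        account.get("posts_per_hour", 0) > 10,         # overly frequent or rapid posting
        account.get("posts_at_odd_hours", False),      # activity at unusual times
        account.get("identical_across_groups", False), # same behaviour on many pages
        account.get("generic_phrasing", False),        # repetitive wording that ignores context
        not account.get("personalised_replies", True), # no personalised interaction
        account.get("vague_profile", False),           # vague or suspicious profile information
        account.get("mismatched_photo", False),        # mismatched profile picture
        account.get("followers", 0) < 10,              # very few friends or followers
    ]
    return sum(flags)

suspect = {"posts_per_hour": 25, "generic_phrasing": True,
           "personalised_replies": False, "followers": 3}
print(bot_likelihood_score(suspect))  # prints 4: four warning signs present
```

A score like this is only a screening aid; a human reading the comments in context remains the final check.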

The illustration is taken from the Corvinus research and shows examples of images generated by artificial intelligence.
