Social media and AI images: the study that shows how vulnerable we are to manipulated emotions

By @davideownzall · 1/19/2026 · hive-122315


Results of Selective coding. Credits: Miskolczi, M.

By chance, I came across a study recently published in Computers in Human Behavior that analyzed how AI-generated images (AIGIs) on Facebook influence users' reactions. It caught my interest because, as a Facebook user myself, I now come across this kind of content all the time: images clearly designed to manipulate human reactions. The researchers observed hundreds of posts and thousands of comments, identifying five main categories: nostalgia and emotion, empathy and compassion, extraordinary feats and hobbies, birthdays and social injustices, and religious or spiritual themes.

The most effective images, as one might imagine, were those that stimulated empathy and affectionate memories: elderly couples, neglected children, rural scenes of everyday life... all things that appear very often in the feed. I too often see the classic shot of parents with small children followed by the same parents now elderly, or the farmer who lives a simple rural life and gets by well with little.

The study also showed how this content exploits cognitive biases (confirmation bias, anchoring, familiarity) that reduce critical thinking and amplify emotional persuasion. If we see an image that makes us feel the past was better, or that a child deserves attention, we do not stop to question its truthfulness; the emotional reaction fires immediately.

The question I asked myself was not "why did people fall for it?" but rather "why did it work?". The study gives an answer: the problem is not the technology itself, but the way we react. The images do not try to convince us through technical perfection (the researchers even analyzed posts with blatant errors, such as malformed fingers or overly smoothed faces) but through immediate and universal emotions: nostalgia, fragility, the need for connection. These are emotional shortcuts that the brain recognizes before it even starts asking questions.

When something touches us on that level, critical thinking takes a back seat; we react viscerally, as human beings, without checking the details or whether the story even makes sense. To all this is added reinforcement from others through comments, likes, and shares. If we see hundreds of people reacting emotionally to a story, we tend to trust it without asking too many questions. Even a nonexistent narrative can acquire a sort of "collective truth".

And indeed, more than once I have heard colleagues recount how they lingered on such an image while scrolling through Facebook: a tender, melancholic, sometimes even moving scene... They leave a like, maybe a comment, and then move on, which only further fuels the cycle of manipulation.

Instinctively, one is led to believe that it is mainly older people, with their naivety and limited knowledge of technology, who fall for this trick, but in fact the mechanism does not depend on age or background: it works on anyone. The greatest risk, if we continue down this path, is not being fooled once or even repeatedly, but becoming accustomed to the idea that nothing is reliable, that every image may be false and every story constructed. In the long run, this erodes trust.

In the end, this study teaches us that we must learn to be more aware of what we see. In an ideal world, manipulation would not exist, or at least would not be so blatant, but we live in the real world, and in order not to become emotionally numb we must learn to pause, reflect, and ask whether we are faced with a true story or not. Only in this way can we interrupt this manipulation.

References
Miskolczi, M. (2026). The illusion of reality: How AI-generated images (AIGIs) are fooling social media users. Computers in Human Behavior, 176, 108876. https://doi.org/10.1016/j.chb.2025.108876
