Amnesty International, a global human rights advocacy group, recently retracted artificial intelligence (AI) generated images used in its campaign to raise awareness about police brutality in Colombia during the national protests in 2021.
The organization drew criticism for using AI to create the images for its social media accounts. The Guardian highlighted one image in particular, which depicted a woman being dragged away by police during the protests against deep-seated economic and social inequalities in Colombia.
The images were found to contain telltale discrepancies, such as uncanny-looking faces, outdated police uniforms, and a protester wrapped in a flag that did not match Colombia's actual flag.
The bottom of each image carried a disclaimer indicating that they were produced using AI. Media scholar Roland Meyer criticized the use of the images, stating that “image synthesis reproduces and reinforces visual stereotypes almost by default,” and that they were “ultimately nothing more than propaganda.”
AI-generated visual media is becoming increasingly common in political contexts as well: in late April, Dave Craige, founder of HustleGPT, posted a video showing the United States Republican Party using AI imagery in a political campaign.
Although AI has enormous potential, there are growing concerns about its use in generating images and videos that can spread misinformation and propaganda.
This case shows that even a human rights advocacy group like Amnesty International must be careful in its use of AI-generated content.