
WHAT ARE THE ETHICS OF USING AI-GENERATED IMAGES FOR THE MARKETING AND COMMUNICATION OF GLOBAL HEALTH?


Generative AI is reshaping communication, marketing and journalism. Instead of hiring a photographer, organisations can now create photorealistic images from text-to-image prompts in a matter of minutes. At this crossroads, global health — a field that relies heavily on the constant production and circulation of images — faces a difficult question: to AI-generate or not? With no definitive answer on the horizon, AI-generated images are already being used for global health marketing and communication, including by the World Health Organization. Compared to hiring a photographer, generative AI saves time and money and completely anonymises represented populations.

While this technology is rapidly developing, one thing is clear: generative AI doesn't exist in a vacuum. Practically speaking, AI endangers jobs. In the aftermath of COVID-19 travel restrictions, global health organisations departed from the 'Westerners on parachute photo assignments' model and began actively working with local photographers. AI risks undermining this grassroots movement to localise global health photography.

However, there are deeper and more insidious issues when it comes to AI image generation. The global health visual culture entrenched by Western organisations and photographers has been marked by coloniality, biases and exaggerations. Because it is trained on images produced by thousands of photographers, AI absorbs many of those abusive — and sometimes carefully staged — images and their stereotypes with regard to race, class, gender and location, and reproduces something similar on demand. This raises the question: are AI-generated global health images really fake if they are replicated from real images and are meant, in practice, to replace them? These are some of the questions that must be answered in the service of the decolonisation of AI and global health, and the future of visual communication.


Arguments in favour:

  • Generative AI saves significant time and money, and reduces the carbon footprint, since no photographer has to travel locally or internationally.
  • AI-generated images guarantee the anonymity of represented populations, which is especially important for vulnerable groups. Unlike real photography, no consent from subjects is needed.
  • AI can be trained and developed to reduce visual biases, for example by working with custom datasets.
  • AI images can be visually striking and may serve as effective and powerful tools for contemporary communication.


Arguments against:

  • Generative AI risks replacing the work of local photographers, whose employment has been recognised as a tangible step forward in the decolonisation of global health visual culture.
  • Generative AI departs from the tenets of photojournalism towards marketing, relies on incomplete and biased datasets, and blurs the boundary between real and fake.
  • Generative AI feeds on tens of thousands, if not millions, of copyrighted online images, many of which were unethical both in their content and in the consent obtained from subjects.
  • Generative AI delegates all ethical responsibility to the AI and its algorithms, and there is no guarantee that AI can be trained to avoid reproducing the biases, stereotypes and exaggerations 'inherited' from real images.

What do you think?


Arsenii Alenichev, Patricia Kingori, Koen Peeters Grietens (2023). Reflections before the storm: the AI reproduction of biased imagery in global health visuals. The Lancet Global Health.