
Generative AI for Humanitarians (2023)

Posted on: November 4, 2023

Methodology, Scope & Audience

This paper compiles findings from a literature review and key informant interviews with AI experts and humanitarians advancing the agenda for digital innovation in humanitarian action.

This paper is presented as a think-brief, intended to start a conversation and offer a concrete stepping stone for those interested in Generative AI. It should not be interpreted or treated as an academic or peer-reviewed paper; rather, it is a compilation of introductory research intended for a broad audience.

The paper is aimed at humanitarian practitioners and leaders who want to gain a general understanding of Generative AI or insight into current strategies for mainstreaming Generative AI tools within their organizations. By setting out the main topics of concern and recommendations, we lay out the landscape of capabilities and potential pathways for safe and responsible adoption of Generative AI. Organizations can select key takeaways and focus further investigation on each topic.

Key messages

  • AI adoption is not just a tech challenge, but an accountability and data governance issue. Humanitarians should integrate technological innovation with a commitment to transparency, accountability, and robust data governance across expert and generic AI tools.
  • Generative AI tools offer a multitude of opportunities and use cases, from large-scale projects to generic organizational workflows.
  • Effective adoption requires critical attention to task-specific applications as well as to the technical shortcomings and risks of Generative AI tools, especially those related to inaccuracies, disinformation, bias, and the privacy of sensitive information.
  • We recommend 10 Rules of Thumb as guardrails for the safe and responsible use of Generative AI tools. Following these rules helps safeguard organizational integrity and accountability.
  • Work with AI rather than treating AI as a humanitarian aid worker: human-centric AI should be at the forefront of organizational approaches, enhancing and augmenting human capabilities rather than replacing them in humanitarian operations.
  • Humanitarian organizations should cultivate an AI ecosystem conducive to the responsible exploration of AI technologies. This includes strategies for sandboxing and adhering to rigorous standards for data handling, AI tool usage, and the analysis of resultant outcomes.
  • To ensure good practices around Generative AI, humanitarian organizations should promote AI literacy within their ranks, providing opportunities for upskilling and continuous professional development in AI-related fields.
  • Humanitarian organizations must uphold high standards of data governance to enhance the quality and diversity of data feeding Generative AI applications. International organizations should support local actors and country offices in enhancing data quality and analytics.
  • Verifying AI-generated outputs is a critical step in adoption strategies. Humanitarian organizations should develop strategies for authenticating AI outputs and verifying their accuracy. Automating these processes for the responsible deployment of Generative AI across organizations should also be explored.
  • Specific operational guidelines and governance strategies are needed to safeguard humanitarian data. Humanitarian organizations must develop and operationalize data protection and privacy principles.
