Use and perceptions of AI in humanitarian organisations
In a recent survey of humanitarian organisations conducted by Sphere:
- 51% of respondents said that their organisation was using – or exploring the use of – Artificial Intelligence (AI);
- 57% had received no training or guidance on AI, or weren’t aware of any organisational policies on its use; and
- just 6% claimed to be highly familiar with the risks of AI.

To AI or not to AI?
AI has the potential for good in the humanitarian sector, but also has the potential to cause harm to vulnerable people. How can humanitarian organisations – or people within those organisations responsible for procurement – make good decisions on when, where and how to use AI? How can these organisations demonstrate to potential donors that they are using AI safely and responsibly? Or looking from the other side, how can donors ensure their potential grantees will use AI responsibly? How can we support humanitarians who want to benefit from AI to do so safely?
AI Safety Label
There are several possible approaches to answering these questions, including capacity building, standards and/or guidance, and certification. Sphere is currently working with Nesta (the UK’s innovation agency for social good), Data Friendly Space, and CDAC Network – supported by FCDO and the UKHIH – on scoping an AI Safety Label. What if you could demonstrate that your organisation’s use of a particular AI platform in a particular context exceeds a reasonable safety threshold?
A three-stage assessment process
Our AI Safety Label concept is built on three key components:
- Technical assessment: Test the AI platform against metrics like performance, accuracy, usability, bias, resource utilisation, transparency, explainability, latency, speed, etc.
- Organisational capacity assessment: Check that the humanitarian organisation intending to use the AI platform has the capacity to do so. For example, if the AI model requires personal or sensitive data, does the organisation take sufficient care over cybersecurity? Do staff have the correct training to use the AI platform?
- Social acceptability and risk assessment: A system that performs well in one context may be harmful in another, and one part of determining appropriateness in context is acceptance by the community that will be affected by decisions based on the outputs of the AI platform.
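To make the three-stage structure concrete, here is a minimal sketch of how stage results might combine into a single label decision. The stage names come from the components above; the 0–1 scoring scale, the threshold value, and the "every stage must pass" rule are illustrative assumptions, not Sphere's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class LabelAssessment:
    """Hypothetical scores (0-1) for the three assessment stages."""
    technical: float        # performance, accuracy, bias, transparency, ...
    organisational: float   # training, cybersecurity, data-handling capacity
    social: float           # community acceptance and contextual risk

    def qualifies(self, threshold: float = 0.7) -> bool:
        # A label is only awarded if *every* stage clears the threshold:
        # a strong technical score cannot offset weak organisational
        # capacity or low community acceptance.
        return min(self.technical, self.organisational, self.social) >= threshold

# A technically strong platform deployed by an organisation lacking
# the capacity to handle sensitive data would not qualify:
print(LabelAssessment(technical=0.9, organisational=0.5, social=0.8).qualifies())  # → False
```

The key design point this illustrates is that the stages are not averaged: because a system that performs well in one context may be harmful in another, failure on any single dimension should block the label.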
Sphere network consultation
We asked participants of our Global Focal Point Forum (GFPF) in Antalya to design what the label could look like, and about the feasibility of an organisational assessment of the capacities required to use AI safely. This session confirmed that there is great interest in AI in the sector but that relatively few organisations currently have the capacity to navigate the technical complexities, legal requirements and ethical considerations to make good decisions about AI.
Community testing in Hatay
To test stage three, we organised two workshops, in Turkish, with people from communities affected by the serious earthquake in February 2023. As part of this process, we asked participants how they felt about an AI platform using satellite imagery to estimate earthquake damage to their homes.
“I believe that AI can evaluate building damages without being influenced by emotions. In large-scale disasters like the Kahramanmaraş earthquake, humans are deeply affected, which could make their assessments less accurate.”

We tend to set a very high bar for AI in terms of accuracy, lack of bias, etc., while simultaneously tolerating bias, emotional affectations, and political agendas in humans. But this is with good reason: if a human makes a mistake, they can be held accountable. If an AI platform makes a mistake, it may be difficult to determine who is responsible. For a more detailed account of the community engagement stage, read How to manage AI risks: a community-based approach to AI assurance (Nesta, Feb 2025).
Next steps and call to action
Our research has shown that the safety label concept resonates with humanitarians, and we’re now iterating the design to ensure it meets the varied needs across the sector. If your organisation is exploring the intersection of AI and community engagement and wondering how to use AI safely and responsibly, please get in touch.

For media enquiries or follow-up on the AI Safety Label, please contact communications@spherestandards.org