Op-ed: The Use of Modern AI Tools in Humanitarian Response
Tents are scattered among the widespread destruction in Jabalia, in the northern Gaza Strip, Sunday, Dec. 7, 2025. (AP Photo/Jehad Alshrafi)
In April 2017, Deputy Defense Secretary Bob Work established an Algorithmic Warfare Cross-Functional Team that launched Project Maven, an initiative designed to integrate artificial intelligence (AI) and machine learning into military intelligence and combat operations. The year 2017 was a crucial inflection point for AI technologies, marking the transition from early development toward large-scale integration. Project Maven pioneered AI solutions in the warfare landscape; it now sits alongside some 800 AI projects implemented by the Pentagon, 300 machine learning tools developed by the CIA, and numerous initiatives by international organizations, from United Nations (UN) bodies to various non-governmental organizations (NGOs).
Propelled by a growing international spotlight and groundbreaking use cases, AI technologies have been adopted by organizations such as the World Food Programme (WFP), the International Rescue Committee (IRC), and the United Nations High Commissioner for Refugees (UNHCR), signaling their expanding role in the humanitarian landscape. Humanitarian organizations have been particularly drawn to the efficiency gains that AI solutions provide in crisis management contexts; a major concern, however, is whether these tools can keep pace with the ever-changing nature of humanitarian ethical standards.
On October 9, 2023, the Israeli military imposed a “complete siege” on the Gaza Strip, leading to extensive destruction and displacement. From cutting off civilian access to food, electricity, and fuel to the collapse of hospitals, communication systems, and transportation infrastructure, the humanitarian impact of this operation was catastrophic. The conflict killed 9,000 people, primarily through airstrikes, hunger, or disease; left 25,000 injured; and displaced 70% of residents. These statistics underscored the need for improved disaster mapping, an area where AI capabilities have offered an unprecedented edge in accurately attending to crisis zones.
One of the most renowned and heavily utilized capabilities of AI is its precise mapping of disaster sites. This precision comes from machine-learning models that process satellite images taken before and after an event and are trained to highlight geospatial differences between the two. Humanitarian organizations such as the United Nations Satellite Centre (UNOSAT) then use this analysis to identify areas in need of reconstruction and to inform actionable strategies. In Gaza, such reports tracked the precise locations of destroyed infrastructure and mapped civilian displacement patterns, proving essential for accurately directing aid and improving the overall efficiency of damage mitigation.
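At its core, this change-detection step compares pixel values between co-registered before-and-after images and flags areas that changed sharply. The following is a minimal illustrative sketch, not UNOSAT's actual pipeline: real systems use trained neural networks rather than a fixed brightness threshold, and the tiny arrays here are synthetic stand-ins for satellite imagery.

```python
import numpy as np

def flag_changes(before, after, threshold=0.3):
    """Flag pixels whose brightness changed sharply between two
    co-registered images (pixel values scaled to [0, 1])."""
    diff = np.abs(after.astype(float) - before.astype(float))
    return diff > threshold  # boolean mask of likely change/damage

# Toy 3x3 "images": one corner darkens sharply after the event.
before = np.full((3, 3), 0.8)
after = before.copy()
after[0, 0] = 0.1  # simulated destroyed structure

mask = flag_changes(before, after)
print(mask.sum())  # number of flagged pixels -> 1
```

In practice the flagged mask would be overlaid on map coordinates so responders can see which buildings or blocks changed; the thresholding idea is the same, only the model producing the "difference" is far more sophisticated.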
However, the advantages of AI satellite image tracking come with parallel ethical concerns, chiefly the violation of civilian privacy rights. In recent years, the United Nations General Assembly (UNGA) has expressed concern about the rapid pace of AI development, warning that it will inherently enhance the capacity of governments, companies, and individuals to undertake surveillance, interception, and data collection. The UNGA has characterized “unlawful or arbitrary surveillance” as any “highly intrusive act” that violates privacy rights in non-consensual situations.
In the case of AI satellite imagery, current high-resolution satellites can resolve features as small as 31 centimeters (about one foot), meaning they can monitor the precise movements of individuals or groups and generate detailed images of private property. An important line must therefore be drawn between the credible use and the exploitation of such sensitive data. Adopting AI technology presents a nuanced risk for humanitarian organizations, which must carefully balance advantageous opportunity against moral caution; failure to uphold ethical standards in practice may ironically threaten civilian welfare.
In addition to post-disaster relief, AI technology has proven useful in providing pre-disaster aid to endangered communities, using machine learning models in tandem with cloud-based data processing tools. For example, developers have designed software such as Google’s Flood Forecasting System, which analyzes weather patterns, and California’s Earthquake Warning System, which monitors seismic activity, to predict future natural disasters. Such technology’s accuracy in forecasting notoriously erratic events has proven an invaluable asset for humanitarian organizations’ proactive resource allocation initiatives.
Over the past few years, the UNHCR has been using AI to build forecasting models that anticipate refugee movements, inform planning, and guide resource allocation. Its 2022 model, Project Jetson, built on climate, remittance, and market price data, predicted levels of forced displacement in Somalia so the agency could respond preemptively to anticipated violence and conflict escalation. Similarly, the WFP has developed a model that projects food insecurity levels in international conflict zones, aiming to understand the trajectory of, and respond to, anticipated undernourishment.
Palestinians grab sacks of flour from a moving truck carrying World Food Programme (WFP) aid as it drives through Deir al-Balah in central Gaza, Saturday, Nov. 15, 2025. (AP Photo/Abdel Kareem Hana)
AI technology has catalyzed a paradigm shift in humanitarian action: from reactive to anticipatory approaches. A major point of contention within this adoption, however, has been the use of outdated data to construct forecasting models. Many AI models harness historical data, including climate records, conflict reporting, market prices, and satellite imagery, to inform aid distribution. A forecasting model built on obsolete data risks producing inaccurate or irrelevant outputs.
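The obsolete-data risk is easy to demonstrate in miniature. The sketch below fits a simple linear trend to a short, stale series of displacement counts and extrapolates one step ahead; all the numbers are synthetic and hypothetical, and real forecasting systems like Project Jetson use far richer models, but the failure mode is the same: a model trained before a sudden shock cannot see the shock coming.

```python
import numpy as np

# Synthetic monthly displacement counts (hypothetical values, for
# illustration only): a stable upward trend in the training window.
history = np.array([100, 105, 110, 115, 120, 125])  # stale data
months = np.arange(len(history))

# Fit a simple linear trend on the old data.
slope, intercept = np.polyfit(months, history, 1)

# Forecast month 6 from the stale trend...
forecast = slope * 6 + intercept
# ...but suppose a sudden conflict shock drove the real figure far higher.
actual = 400

print(round(forecast), actual)  # the stale model badly underestimates
```

The gap between the trend-line forecast and the shock-driven actual figure is exactly the kind of error that manual review and frequent data refreshes are meant to catch.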
Indeed, a devastating result of a failure to accurately update mapping models was witnessed in the US’s “accidental” strike on a primary school in Minab, Iran. Following the incident, the US drew intense international backlash for its blatant violation of humanitarian law and sloppy targeting. The targeting data was later revealed to have relied heavily on AI forecasting models built on satellite imagery, demonstrating how damaging sole reliance on automated technology can be for civilian populations.
Another threat posed by forecasting models built on obsolete data is their inability to account for changes in human behavior, cultural norms, and local environmental dynamics. Outdated models may carry racial, ethnic, or gender biases from earlier eras into regions that have since redefined their societal norms. The fluidity of political landscapes, social standards, and perceptions of reality within the humanitarian landscape raises ethical concerns, necessitating careful treading in the face of obsolete data. Without some degree of manual review, misinformation in AI solutions may slip past humanitarian organizations and perversely harm the communities they work to protect.