AI could help scale up humanitarian responses. But it could also have big drawbacks

NEW YORK (AP) — As the International Rescue Committee grapples with dramatic increases in the number of displaced people in recent years, the refugee aid organization has sought efficiencies wherever it can, including through the use of artificial intelligence.

Since 2015, the IRC has invested in Signpost — a portfolio of mobile apps and social media channels that answer questions in different languages for people in dangerous situations. The Signpost project, which includes many other organizations, has so far reached 18 million people, but the IRC wants to significantly increase its reach using AI tools.

Conflicts, climate emergencies and economic hardship have led to increased demand for humanitarian assistance, with more than 117 million people forcibly displaced in 2024, according to the United Nations refugee agency. As humanitarian organizations face more people in need, they also face enormous funding shortfalls. The turn to AI technologies is partly driven by this massive gap between needs and resources.

To meet its goal of reaching half of displaced people within three years, the IRC is building a network of AI chatbots that can increase the capacity of its humanitarian officers and of the local organizations that directly serve people through Signpost. For now, the project operates in El Salvador, Kenya, Greece and Italy and responds in 11 languages. It is based on a combination of large language models from some of the biggest technology companies, including OpenAI, Anthropic and Google.

The chatbot response system also uses customer service software from Zendesk and receives other support from Google and Cisco Systems.

Beyond the development of these tools, the IRC wants to extend this infrastructure to other humanitarian nonprofits at no cost. The aim is to create shared technology resources that less technically focused organizations could use without having to negotiate directly with technology companies or manage the risks of deployment.

“We’re trying to be really clear about where the legitimate concerns are, but we’re leaning into the optimism of the opportunities, and we’re also not allowing the populations we serve to be left behind in solutions that have the potential to scale in a way that human-to-human or other technology cannot,” said Jeannie Annan, the International Rescue Committee’s chief research and innovation officer.

The answers and information Signpost chatbots provide are vetted by local organizations to be up-to-date and sensitive to the precarious circumstances people may be in. One example query the IRC shared is from a woman traveling from El Salvador through Mexico to the United States with her son, looking for shelter and services for her child. The bot provides a list of providers in the area where she is located.

More complex or sensitive queries are escalated for people to answer.

The biggest potential downside of these tools would be that they don’t work. For example, what if the situation on the ground changes and the chatbot doesn’t know? It could provide information that is not only wrong, but also dangerous.

A second problem is that these tools can amass a valuable trove of data about vulnerable people that hostile actors could target. What if a hacker manages to access data containing personal information, or if that data is accidentally shared with an oppressive government?

The IRC said it has agreed with the technology providers that none of their AI models will be trained on the data that the IRC, the local organizations or the people they serve generate. It has also worked to anonymize the data, including by removing personal information and location details.

As part of the Signpost.AI project, the IRC is also testing tools such as an automated digital tutor and maps that can integrate many different types of data to help prepare for and respond to crises.

Cathy Petrozzino, who works for the nonprofit research and development company MITRE, said AI tools have high potential, but also high risks. To use these tools responsibly, she said, organizations should ask themselves: Does the technology work? Is it fair? Are data and privacy protected?

She also emphasized that organizations need to bring together a range of people to help govern and design the initiative – not just technical experts, but people with deep knowledge of the context, legal experts and representatives of the groups that will use the tools.

“There are a lot of good models in the AI graveyard,” she said, “because they weren’t developed in cooperation and collaboration with the user community.”

For any system that has potentially life-changing impact, Petrozzino said, groups should bring in outside experts to independently evaluate their methodologies. Designers of AI tools must consider the other systems they will interact with, she said, and plan to monitor the model over time.

Consulting with displaced people or others served by humanitarian organizations can increase the time it takes to design these tools, but not having their input raises many safety and ethical issues, said Helen McElhinney, executive director of the CDAC Network. It can also unlock local knowledge.

People who receive services from humanitarian organizations should be told if an AI model will analyze any information they pass on, she said, even if the intention is to help the organization respond better. This requires meaningful and informed consent, she said. They should also know if an AI model is making life-changing decisions about resource allocation and where the responsibility for those decisions lies, she said.

Degan Ali, the CEO of Adeso, a nonprofit organization in Somalia and Kenya, has long been an advocate of changing the power dynamics in international development to give more money and control to local organizations. She asked how the IRC and others pursuing these technologies would overcome access issues, pointing to the weeklong power outages caused by Hurricane Helene in the U.S. Chatbots won’t help when there is no device, internet or electricity, she said.

Ali also warned that few local organizations have the capacity to participate in major humanitarian conferences where the ethics of AI are debated. Few have the senior staff and knowledge to really engage in these discussions, she said, although they understand the potential power and impact these technologies can have.

“We have to be extremely careful not to replicate power imbalances and biases through technology,” Ali said. “The most complex questions will always require local, contextual and lived experience to answer in a meaningful way.”

___

The Associated Press and OpenAI have a license and technology agreement which allows OpenAI access to part of the AP text archives.

___

Associated Press coverage of philanthropy and nonprofits gets support through AP’s collaboration with The Conversation US, with funding from the Lilly Endowment Inc. AP is solely responsible for this content. For all of AP’s philanthropy coverage, visit https://apnews.com/hub/philanthropy.