Building an Equitable Future for AI in the Nonprofit Sector
This guest post was written by an external author and contributes to the broader conversation about responsible AI in the social impact space, a topic that’s foundational to our Intelligence for Good® commitment.
Artificial intelligence (AI) is no longer a distant concept. From chatbots that answer donor questions to analytics that guide fundraising, AI is reshaping how nonprofits work. For many organizations, this shift brings both excitement and unease.
In 2024, when the first AI Equity Project surveyed the sector, most nonprofits described themselves as curious but unprepared. By 2025, over 850 organizations across the U.S. and Canada participated in the project—showing a leap in awareness but revealing that readiness hasn’t caught up with enthusiasm.
That is where the conversation around AI equity becomes vital for you and me. AI’s promise means little if it reinforces existing inequities or excludes the very communities our sector aims to serve.
The question is no longer "Should we use AI?" but "How do we use it responsibly, equitably, and for everyone?"
What AI Equity Means—and Why It Matters
AI equity is the ethical development, deployment, and use of AI systems that prioritize fairness, inclusivity, and justice—especially for historically marginalized communities. It is achieved when technology is transparent, accountable, and participatory, with the goal of minimizing harm while maximizing shared benefits.
For nonprofits, AI equity is not just about what technology can do, but who it empowers. It asks questions such as:
- Who is represented in the data?
- Who benefits from AI decisions?
- Who bears the risk when systems fail?
Equity in AI requires more than diverse datasets or new policies. It calls for a reimagining of how nonprofits engage with technology: slowly, thoughtfully, and in partnership with the people they serve.
A community artist reflected in the report, “We must reprogram the DNA.” Inclusion in AI is not a surface-level fix; it is about rewriting systems with care and community at the center.
The AI Equity Project: Why and How It Began
I launched the AI Equity Project to explore these questions through a sector-wide lens. The 2025 edition builds on the 2024 baseline to understand how nonprofits are learning, adopting, and governing AI tools. The study collected insights from 850 nonprofit professionals, roughly two-thirds in the U.S. and one-third in Canada, representing diverse missions, identities, and sizes. The data shows a sector in motion: cautious but experimenting, hopeful yet aware of risk.
From the findings, we identified four themes that offer a roadmap for how our sector can approach AI with responsibility and heart.
Theme 1: Navigating Readiness in a Complex Landscape
AI readiness in the nonprofit world isn’t just about coding skills or data warehouses. It is about leadership confidence, ethical awareness, and equitable capacity.
While 65% of nonprofits expressed interest in AI, only 9% felt ready to adopt it responsibly. One leader put it plainly in the report: "Readiness isn't about racing ahead with technology—it's about building the leadership, policies, and practices that allow nonprofits to adopt AI with confidence and care."
The research revealed that many organizations are “learning while adopting.” That means they are using AI tools—for writing, fundraising, or analysis—before formal governance or training exists. This adaptive learning mindset is encouraging but risky. Without clear guardrails, nonprofits could unintentionally perpetuate bias or harm communities they aim to protect.
The opportunity now lies in building shared readiness frameworks: spaces where nonprofits can learn collectively, test safely, and grow their comfort with both the tools and the ethics.
Theme 2: The Disconnect Between AI Purpose and Funding Language
The second major finding highlights a gap between why nonprofits want to use AI and how they are funded to do so.
Most organizations see AI’s potential to improve communication and reduce administrative load. Yet fewer than 7% have internal policies, and just 3.8% have dedicated training budgets. Even as funders express interest in innovation, few grants explicitly support ethical AI capacity-building or governance.
This mismatch leaves nonprofits struggling to articulate AI projects in the “language of funders.” As one leader reflected, “Maybe the real question isn’t whether nonprofits are ready for AI—it is whether they are resourced for it.”
Until AI readiness is tied to funding, nonprofits will continue to innovate without structure—a path that risks widening inequities. The solution, as the report notes, is to tie grants and reporting requirements to AI readiness assessments and equity commitments, not just outcomes.
Theme 3: Data Stewardship Expected Without Infrastructure
Data is the backbone of responsible AI. Yet many nonprofits—particularly smaller organizations—are still managing data manually or inconsistently.
More than half of respondents store data on personal devices or local spreadsheets. Others have minimal clarity on where their data lives or how it is secured. Without solid data infrastructure, equity practices remain aspirational.
This gap isn’t just technical; it is cultural. Human services and equity-focused organizations, for instance, often report low confidence in their own data equity practices—even when serving marginalized communities.
That is why the project emphasizes data stewardship as an equity practice. Responsible AI begins with responsible data. Clean, secure, and inclusive data systems build trust and reduce bias before any algorithm enters the picture.
Theme 4: Fear, Curiosity, and Dreaming Continue Together
Perhaps the most human theme in this research is the coexistence of fear and hope in the sector.
Nonprofits are deeply aware of AI's risks: bias, misinformation, data exploitation, and environmental impact. Yet this fear is matched by curiosity: a desire to understand how AI might make work more humane and effective.
Respondents who serve BIPOC communities and people with disabilities were especially concerned with data protection and representation. Canadian participants voiced aspirations to “center community” in their use of AI. Across all demographics, there is an undercurrent of dreaming: a willingness to imagine better systems, even when the path forward is unclear.
As one reflection noted, “Fear and doubt thrive in silence, while clarity and confidence grow through shared learning.” The antidote to fear isn’t speed; it’s conversation.
A Collective Call to Action
AI is often framed as a race. But the AI Equity Project invites us to see it as a relationship, one that requires curiosity, courage, and care.
For funders, this means investing in capacity, not just outcomes. For nonprofit leaders, it means publishing AI commitments and ensuring staff understand basic data equity. For technologists, it means co-designing tools that reflect nonprofit realities—small teams, limited budgets, and deep ethical stakes.
And for all of us, it means remembering that data is not abstract. It represents real lives and their stories. When handled with dignity, it can become a tool for justice.
The nonprofit sector has always led with values. The next chapter is about ensuring our values lead our technology too.
As the report reminds us: “We don’t have to know everything to begin—we just have to begin together.”
For more AI insights from Meena, download our expert essay collection eBook, The Forward-Thinking Nonprofit: Leading Through Change.
