The Future of Advancement: Human Connection and AI in Education

Blackbaud’s recent eBook, The Forward-Thinking Nonprofit: Leading Through Change, offers insights from 19 sector experts. From navigating economic instability to embracing AI, this timely resource helps nonprofits and educational institutions adapt, grow, and thrive in times of uncertainty and change. 

Among these expert voices, Ali Mikaeili and Sue Cunningham focus on the education sector. They share their experiences and thoughts on how educators can balance innovation with human connection—especially as AI-powered tools become increasingly present in education.

In this blog post, we explore AI in education, drawing on insights from the eBook's essay collection.

Advancement: Where Human Relationships Meet Innovation

Both education experts—Ali Mikaeili, a Prospect Research and Pipeline Specialist at Bow Valley College, and Sue Cunningham, the President and CEO of the Council for Advancement and Support of Education (CASE)—write about AI and its role in the education landscape, highlighting how the recent popularity of AI tools also extends to education.

The focus on AI also leads both Ali and Sue to discuss the role of human connection in advancement. As Ali puts it, “Working in advancement has taught me one thing above all else: people are at the heart of everything we do. Whether it’s a quiet conversation with a long-time donor, a late-night push to finish a campaign, or the joy of telling a story that moves someone to give, the most important parts of our work are deeply human.” Both experts urge their peers not to lose sight of this as AI advances further and use cases in the sector expand. The challenge is to welcome the efficiencies and insights AI can unlock without sacrificing the trust, empathy, and lived understanding that define successful fundraising and alumni relations.

AI: Tool or Threat?

AI has been a debated—possibly even feared—addition to the education sector, but it’s evident that it can also bring great efficiencies to advancement teams. In his essay, Ali discusses this divide in opinion: “The arrival of generative AI has sparked so many mixed emotions in our field. On one hand, it’s exciting. On the other hand, it can feel unsettling and even threatening. We worry about what it means for our jobs, for our relationships, and for the personal touch that defines advancement.”

Advancement professionals are the stewards of institutional reputation and donor trust, and some worry that these values will be difficult to maintain if tasks and engagements are automated. As Ali cautions, “Advancement isn’t something that can be automated into spreadsheets and scripts. A transformational gift does not happen because an algorithm crunched the right numbers. It happens because of trust built over time, because of listening carefully, and because of finding shared values and common purpose.”

Despite these mixed emotions, ignoring the rise of AI is not an option. Both experts advocate for an intentional, ethical, and value-driven approach to adopting new innovations, including AI-powered tools. Ultimately, the promise of AI is to “clear away some of the repetitive and behind-the-scenes work so that we can focus on the part that matters most: building relationships.”

Charting a Strategic Path

As the President and CEO of CASE, Sue brings a global perspective to AI in advancement. In her essay, Sue explores how educational institutions can adopt innovations even when the idea of using AI may garner mixed emotions. “Artificial intelligence is no longer emerging. It has arrived and continues to evolve, at a rapid pace. And yet, many working to advance their institutions are still cautiously peering around the corner, wondering if it’s safe to step into the future,” she writes.

Sue sees that the stakes feel especially high for many advancement professionals—with concerns around authenticity, job security, and trust. While these concerns must be considered, Sue also argues that pausing in the face of these pressures comes with its own risks. Teams might miss out on opportunities to improve outcomes, free staff for more meaningful human work, or better connect with supporters, students, or alumni. “If we allow fear to halt progress, the greater loss will be the opportunities that pass us by,” she writes.

At the heart of both essays is a call to treat AI not as a threat or simply as a technical upgrade, but as “a strategic capability, one that spans communications, marketing, fundraising, alumni engagement, analytics, IT, and more.” To do so, Sue describes four essential pathways for AI adoption: exploring, experimenting, operationalizing, and scaling. She also underscores the importance of shared frameworks, ethical guardrails, and human oversight at every stage of AI adoption. Her advice to leaders and frontline professionals is practical and hopeful: start small, learn together, and “bring together those experimenting day to day and the leaders shaping strategy, so your institution isn’t merely adapting to new realities but helping to define them.”

Balancing Innovation and Connection

Both essays serve as a roadmap and inspiration for educational institutions seeking to adopt or expand their use of AI without losing sight of the human side of advancement. Ali’s and Sue’s expertise and insights help us identify some key considerations that advancement leaders need to focus on when adopting and using AI.

  1. AI is a tool, not a replacement for your team. Advancement will always be grounded in human relationships, trust, and empathy. AI can be a useful tool for insight, helping achieve your goals and mission, but over-reliance on it can be costly.
  2. AI-ready leadership and culture matter. As Sue describes, “Readiness to adopt AI is as much cultural as it’s technical. Leaders and practitioners must create space for learning, encourage responsible risk-taking, and celebrate small wins to maintain momentum.”
  3. Ethics, policy, and oversight are non-negotiable. Both authors highlight the need for clear policy and ethical guardrails to ensure AI adoption is responsible and sustainable. This will help you maintain human oversight and safeguard trust. Your policy should also be reviewed as AI advances or you adopt new AI-powered tools.
  4. Start small and scale thoughtfully. Pilot projects, feedback loops, and intentional review processes allow your team to learn and adapt as you progress with AI use in your institution. This ensures that technology adoption is aligned with your long-term strategic goals—and not only driven by the latest sector trends.
  5. The end goal is a deeper connection. “AI will not replace these relationships. But it can, and must, free us to focus more deeply upon them. It will empower us to act with greater precision and relevance in a fast-moving world, offering more time for the work that matters most: direct connection,” Sue writes.

For more practical strategies and inspiration from Ali, Sue, and our other experts, get your free copy of The Forward-Thinking Nonprofit: Leading Through Change eBook today, tailored to your region.

We surveyed education professionals twice in 2025 to gain insight into their fundraising performance, enrollment trends, risk readiness, and attitudes towards technology. For a summary of those findings, explore Fundraising for Education in the AI Era.