I sat in a sales call last week with a prospect—a mid-size nonprofit in youth services. The vendor's pitch started exactly where every AI vendor's pitch starts: "Our system deflects 87% of inbound inquiries without human touch."
The nonprofit's director nodded politely. I felt my stomach turn.
Because here's what that number actually means: out of 100 donors, volunteers, or program participants who reach out, 87 get an automated response and never talk to a human. The vendor was bragging about how few people ever reach a person. That's the opposite of what nonprofits need.
The Metric That Sounds Like Success
This is the gap I keep seeing between how the AI industry measures success and how mission-driven organizations actually define it.
When Efficiency Becomes a Failure Mode
Deflection rate is borrowed from corporate customer service. It works great for a bank: if your mortgage question can be answered by an FAQ, deflecting you is efficient and you're satisfied. But that math breaks down instantly in sectors where relationships matter more than transactions.
A nonprofit's donor calls with a question. The AI redirects her to the FAQ. She's been deflected. But what actually happened? She came with intent to engage, and we handed her a document. We didn't fail our metric. We failed her.
A hospital patient has a pre-op anxiety question at 10 PM. The AI gives a scripted response. We deflected. The patient feels less certain, not more. The metric worked. The experience didn't.
A university student is deciding whether to enroll. She texts in with a specific question about scholarships. The chatbot deflects her to a form. The student's intent—to feel seen, to know someone cares about her decision—goes unmet. We won the deflection game and lost the enrollment.
The Numbers That Actually Matter
I think about this differently now. The metrics that matter in sectors like nonprofits, healthcare, and education aren't about deflection. They're about trust signals.
Did the person come back?
Did they ask a second question or stay engaged in the conversation?
Did they complete the action you hoped they would—donate, schedule, enroll?
Did they escalate less over time, which would suggest the communication is getting better, not worse?
Did they refer others or recommend the organization?
Did they feel respected enough to try again the next time they had a question?
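As a rough sketch of how signals like these could be tracked (the record fields, names, and math here are purely illustrative, not any vendor's actual schema), here is one minimal way to summarize an interaction log:

```python
from dataclasses import dataclass

# Hypothetical interaction record; field names are illustrative only.
@dataclass
class Interaction:
    person_id: str
    escalated: bool         # needed a human after the automated reply
    completed_action: bool  # donated, scheduled, enrolled, etc.

def trust_signals(log: list[Interaction]) -> dict:
    """Summarize trust signals instead of a deflection rate."""
    # Return rate: did the person come back for another conversation?
    counts: dict[str, int] = {}
    for i in log:
        counts[i.person_id] = counts.get(i.person_id, 0) + 1
    returned = sum(1 for c in counts.values() if c > 1)
    completed = sum(1 for i in log if i.completed_action)
    escalations = sum(1 for i in log if i.escalated)
    return {
        "return_rate": returned / len(counts),       # share of people who came back
        "completion_rate": completed / len(log),     # share of contacts ending in action
        "escalation_rate": escalations / len(log),   # share needing a human anyway
    }
```

The point of the sketch is the shift in denominator and direction: deflection counts conversations avoided, while these numbers count people who returned, completed, or stopped needing rescue.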
These are the numbers that predict actual outcomes: retention, lifetime value, referrals, completion rates, and—in mission contexts—donor loyalty, patient confidence, and enrollment.
The difference is subtle but total. Deflection is about pushing people away efficiently. Trust signals are about keeping people close and making sure they come back.
What It Looks Like When You Get It Right
I watched a healthcare system redesign its patient communication around trust signals instead of deflection. The volume of inbound messages actually went up, because patients felt safe reaching out. At the same time, response time went down, first-contact resolution on substantive issues went up, and unnecessary escalations dropped, because the AI learned which questions needed a human and which just needed clarity. They optimized for confidence, not volume.
That system never saw its deflection metric improve. But NPS went up, patient churn went down, and staff filed fewer complaints about repeat questions because the communication loop was actually working.
The Question Worth Asking Your Vendor
The AI industry has it backward. They measure what's easy to count. But mission-driven sectors should measure what's easy to feel.
So here's my point of view: If your vendor is selling you deflection rate as the win, they're selling you a metric that works for a different business than yours. Ask instead: What signals prove your people trust us more than they did yesterday?
That's the only number worth tracking.