
AI: The Equalizer
Something odd is happening in hiring, education, and grantmaking: everyone looks great. Cover letters read like TED talks. Student essays feel polished and confident. Project proposals land with perfect structure, perfect tone, perfect “theory of change.”
It’s tempting to call this progress. But it also creates a new problem: when every submission clears the old bar, evaluation stops working.
The Equalizer
In The Equalizer, the story follows a character who intervenes when systems fail—when power is unevenly distributed and those with legitimate claims have no effective way to defend themselves. The narrative centers on restoring balance in situations where formal mechanisms no longer work.
The appeal of the character lies in this act of neutralization. By removing hidden asymmetries, the Equalizer allows just causes to be heard on more equal footing, without positioning himself as the ultimate arbiter of right and wrong.
The metaphor becomes clearer if we think about a real equalizer in audio engineering. An equalizer does not make every frequency identical; it adjusts levels up or down so that no single band overwhelms the others. Frequencies that are too low can be boosted, dominant ones can be reduced, and the result is not uniformity, but a sound that works better as a whole.
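To make the analogy concrete, here is a minimal sketch of that idea in code. It is purely illustrative: the band levels and gain values are made up, and the point is only that per-band gains lift what is too quiet and trim what dominates, aiming for balance rather than sameness.

```python
import numpy as np

def equalize(band_levels, gains_db):
    """Apply per-band gains (in dB) to measured band levels."""
    gains = 10 ** (np.asarray(gains_db) / 20.0)   # dB -> linear amplitude
    return np.asarray(band_levels) * gains

# Hypothetical low / mid / high levels: boost the weak low band,
# cut the overpowering midrange, leave the highs alone.
levels = [0.2, 1.5, 0.8]
adjusted = equalize(levels, [6.0, -4.0, 0.0])
print(adjusted)   # bands move toward balance, not toward identical values
```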
A real-world symptom: “all the projects seem excellent”
Recently I spoke with someone responsible for evaluating initiatives at a private non-profit focused on social inclusion, education, scientific research, health, and access to culture—especially for vulnerable groups. His issue was simple and painful: every proposal reads as strong. The traditional screening process (documents, narratives, objectives, KPIs) is no longer separating signal from noise.
This is not an isolated case. As generative AI becomes a normal part of grant applications and project submissions, proposals are easier to draft, easier to polish, and easier to align with evaluation rubrics—at scale.
What changed: we used to evaluate writing; now we must evaluate reality
For years, many selection systems quietly relied on proxies:
- Clarity as a proxy for competence
- Polish as a proxy for professionalism
- Structure as a proxy for strategic thinking
- Confidence as a proxy for leadership
AI doesn’t just help people express ideas. It helps them manufacture the proxy. That’s the equalization: not of outcomes, but of presentation.
Output is cheap; accountability is not
If you’re screening people or projects today, the key is to stop asking “Is this well written?” and start asking “Is this owned?”
AI can generate a convincing plan. But it cannot supply:
- Skin in the game (who pays the price if it fails?)
- Constraints (what do you do when time, money, or partners break?)
- Trade-offs (who loses, what is sacrificed, what is postponed?)
- Track record (what did you actually do, under what conditions?)
- Responsibility (who will answer when reality disagrees with the narrative?)

So what should evaluators do?
Here are practical ways to restore signal—without turning the process into a witch hunt for AI usage.
1) Move from artifacts to interaction
Documents are now the least trustworthy part of the process—not because they are false, but because they are too easy to optimize. Add steps that require thinking in motion, ideally through synchronous interaction such as in-person conversations or live video calls.
- Live “problem framing” session: give a messy scenario and ask applicants to define the problem, not solve it.
- Trade-off interrogation: “If you had to cut 30% of the budget tomorrow, what breaks first and what do you protect?”
- Assumption audit: “Name the top 3 assumptions your project depends on. How would you test each one quickly?”
These are not “gotcha” questions. They’re reality questions. AI can help prepare, but it can’t fully replace a person’s relationship with constraints.
2) Make people defend what they removed (not what they wrote)
AI is great at adding. Great evaluators learn by asking about subtraction.
- “What did you decide not to do, and why?”
- “Which stakeholder will be disappointed by your approach?”
- “What would you stop doing after month one if the data contradicts your plan?”
Fluent writing is easy. Defensible omission is hard.
3) Use staged screening instead of one-shot perfection
One perfect proposal is cheap. Consistent thinking over time is not.
- Stage 1: short application (force brevity; limit space)
- Stage 2: request a revision after introducing a new constraint
- Stage 3: short conversation to reconstruct decisions
Over multiple steps, you learn who adapts, who owns the work, and who can stay coherent when reality shifts.
4) Accept AI use—but require disclosure and “process evidence”
Trying to ban AI usually produces two outcomes: honest people comply and end up at a disadvantage, while everyone else simply hides their use. A healthier approach is to normalize AI use and ask for transparency:
- What tools were used?
- What prompts or inputs shaped the output?
- What human decisions were made because of (or despite) AI suggestions?
This aligns incentives: you’re not punishing tools; you’re rewarding ownership.
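If you want to operationalize that transparency, a lightweight disclosure record can travel with each submission. The sketch below is a hypothetical schema, not a standard; the field names and example values are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseDisclosure:
    """Process evidence attached to a submission (illustrative schema)."""
    tools_used: list[str] = field(default_factory=list)        # which assistants, if any
    prompts_or_inputs: list[str] = field(default_factory=list)  # what shaped the output
    human_decisions: list[str] = field(default_factory=list)    # choices made because of, or despite, AI suggestions

# Hypothetical example record:
example = AIUseDisclosure(
    tools_used=["general-purpose LLM for first draft"],
    prompts_or_inputs=["internal needs assessment", "last year's impact report"],
    human_decisions=["kept the original budget despite the AI suggesting a larger ask"],
)
```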
A note on fairness: the equalizer cuts both ways
There’s a beautiful side to all this. AI can reduce barriers for people who historically lost out because they didn’t have perfect grammar, elite coaching, or the “right” style of professional narrative. That’s genuine inclusion.
But inclusion requires new evaluation muscles. If we keep using old rubrics, we’ll end up funding the best-performing text generator rather than the most capable team—or the most grounded plan. And many grantmakers are already debating how AI will shape screening and decision-making.
In an equalized world, judgment becomes the scarce resource
When everyone can submit something polished, polish stops being evidence. The differentiator becomes what it always should have been: judgment under constraints, clarity about trade-offs, and responsibility for outcomes.
In other words: the new job of evaluators isn’t to detect AI. It’s to design selection processes that measure what AI can’t cheaply manufacture—accountability.





