AI Hiring Compliance Radar

ATS evidence

If an ATS ranks candidates, ask for evidence before rollout.

Applicant tracking systems increasingly include AI-assisted screening, ranking, matching, and summarization. A short evidence request can turn a vague feature description into an operating record.

Six questions

Start with the facts a buyer can verify.

Inputs

What influences the result?

Ask which candidate, role, recruiter, or historical data fields influence scores, matches, rankings, summaries, or recommendations.

Decision role

What action can the output trigger?

Document whether the output advances, rejects, prioritizes, queues, flags, or merely organizes candidates for review.

Review

Where does human review happen?

Capture the reviewer role, what the reviewer sees, whether the AI output can be overridden, and how overrides are logged.

Evidence

What testing or audit exists?

Request audit summaries, validation notes, limitation statements, subgroup-performance notes, or other current evidence the vendor can share.

Change

What changed since renewal?

Ask whether AI features, model providers, ranking logic, data sources, or default settings changed since the last contract review.

Notice

What candidate-facing support exists?

Collect notice language, public documentation links, accommodation paths, and contact-routing guidance before launch.

Buyer workflow

Turn answers into a usable record.
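One way to turn the six answers into a record is a small structured template with a completeness check, so unanswered questions are visible before rollout. This is a minimal sketch, not a prescribed format; the class and field names (AtsEvidenceRecord, missing_answers, and the rest) are illustrative and should be adapted to your own procurement or contract-review templates.

```python
from dataclasses import dataclass, fields

@dataclass
class AtsEvidenceRecord:
    """One vendor's answers to the six evidence questions.

    All names here are illustrative, not a standard schema.
    """
    vendor: str
    inputs: str = ""         # data fields that influence scores or rankings
    decision_role: str = ""  # actions the output can trigger
    review: str = ""         # where human review and overrides happen
    evidence: str = ""       # testing, audits, subgroup-performance notes
    change: str = ""         # AI-feature changes since the last renewal
    notice: str = ""         # candidate-facing notice and accommodation paths

    def missing_answers(self) -> list[str]:
        """Return the question fields still left blank."""
        return [f.name for f in fields(self)
                if f.name != "vendor" and not getattr(self, f.name).strip()]

record = AtsEvidenceRecord(
    vendor="ExampleATS",
    inputs="Resume text, role requirements, recruiter search terms",
    review="Recruiter sees score plus rationale; overrides are logged",
)
print(record.missing_answers())
# the unanswered questions: decision_role, evidence, change, notice
```

A record like this makes the gap list explicit: anything returned by missing_answers() is a question to put back to the vendor before signing or renewing.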