AI Hiring Compliance Radar

Human review

"Human in the loop" is a claim, not an evidence file.

Vendor diligence should establish what the human reviewer actually sees, what they can change, and how the workflow records that review before an AI-assisted output shapes a hiring decision.

Review proof

Evidence to ask for before accepting the claim.

Placement

Where is the human step?

Map whether review happens before rejection, before advancement, after ranking, during appeals, or only during periodic quality checks.

Context

What does the reviewer see?

Capture whether the reviewer sees scores, rankings, summaries, explanations, source fields, limitation notices, or confidence indicators.

Authority

Can the reviewer override?

Ask whether reviewers can override AI outputs, what approvals are required, and whether the system nudges or constrains override behavior.

Logging

What gets recorded?

Document logs for review completion, overrides, reasons, escalation, candidate status changes, and vendor-side troubleshooting.
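To make the logging ask concrete, here is a minimal sketch of the kind of review record the list above describes. Every field name is illustrative, not a vendor standard; the one design point worth copying is that an override is derived by comparing the AI output to the reviewer's decision, so it cannot be silently omitted.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ReviewLogEntry:
    # Field names are illustrative, not drawn from any real vendor schema.
    candidate_id: str
    reviewer_id: str
    reviewed_at: str                      # ISO 8601 UTC timestamp
    ai_output: str                        # e.g. "reject", "advance", "rank=12"
    reviewer_decision: str                # final decision after human review
    override: bool                        # derived: did the human change the output?
    override_reason: Optional[str] = None
    escalated: bool = False
    status_change: Optional[str] = None   # e.g. "screen -> interview"

def log_review(ai_output: str, reviewer_decision: str, **fields) -> dict:
    """Record one completed human review, flagging overrides explicitly."""
    entry = ReviewLogEntry(
        reviewed_at=datetime.now(timezone.utc).isoformat(),
        ai_output=ai_output,
        reviewer_decision=reviewer_decision,
        override=(ai_output != reviewer_decision),
        **fields,
    )
    return asdict(entry)
```

A workflow that emits entries like this answers the diligence question directly: review completion, overrides, reasons, and status changes all appear as queryable fields rather than claims.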

Training

What guidance exists?

Collect reviewer instructions, misuse warnings, role-based training materials, and support paths for uncertain or edge-case decisions.

Limits

Where should the tool not be used?

Ask for limitation statements covering unsupported job families, languages, geographies, data quality issues, and assessment contexts.