Ontario, Canada now requires employers to disclose whether artificial intelligence is used in the hiring process — right in the job posting. The EU AI Act imposes transparency requirements on high-risk AI systems used in employment. As algorithmic hiring tools proliferate, the intersection of AI and pay transparency is creating a new layer of compliance that most HR teams aren't tracking yet.
The AI disclosure requirement you may already be missing
Ontario's Working for Workers Four Act (2024) introduced a specific requirement: employers with 25 or more employees must disclose in job postings whether artificial intelligence is used to screen, assess, or select applicants. This applies to any AI-powered ATS feature, resume screening tool, or candidate ranking algorithm. If you use Greenhouse, Lever, Workday, or any major ATS with built-in AI features — and most now have them — this disclosure may be required for any posting open to Ontario candidates.
This is currently Ontario-specific, but it represents the leading edge of a trend. Several US states have introduced similar legislation, and the EU AI Act's requirements for high-risk AI systems — which include AI used in employment decisions — are now in force.
The EU AI Act and employment screening
The EU AI Act, which became law in 2024 and is phasing in requirements through 2026, classifies AI systems used for recruitment, candidate selection, and employee management as "high-risk." This triggers specific obligations:
- Transparency to affected individuals: Candidates must be informed when AI is being used to make or assist in decisions that significantly affect them
- Human oversight: High-risk AI systems require meaningful human review of automated decisions
- Documentation: Employers must maintain records of how AI systems are used and their performance
- Non-discrimination testing: AI systems must be tested for bias before deployment and monitored for discriminatory outcomes
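One widely used bias check for screening outcomes — borrowed from US adverse-impact analysis rather than prescribed by the EU AI Act itself — is the "four-fifths rule": the selection rate for any demographic group should be at least 80% of the rate for the most-selected group. A minimal sketch (the group labels and counts below are illustrative):

```python
def adverse_impact_ratios(outcomes):
    """outcomes: {group: (selected, total)} -> {group: impact ratio}.

    Each group's impact ratio is its selection rate divided by the
    highest group's selection rate; 1.0 means parity."""
    rates = {g: selected / total for g, (selected, total) in outcomes.items()}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# Illustrative screening results per demographic group: (selected, applied)
results = {"group_a": (45, 100), "group_b": (30, 100)}
ratios = adverse_impact_ratios(results)

# Four-fifths rule: flag any group whose ratio falls below 0.8
flagged = [g for g, r in ratios.items() if r < 0.8]
# group_b's rate (0.30) is 0.67x group_a's (0.45), so it is flagged
```

A ratio below 0.8 is a signal to investigate, not proof of discrimination — but running this kind of check on every active AI screening feature, on a schedule, is the sort of monitoring the documentation and testing obligations point toward.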
Why this connects to pay transparency
The connection is foundational: both pay transparency and AI transparency aim to prevent information asymmetry from perpetuating inequity. AI hiring tools trained on historical data can encode historical pay inequities — if the training data reflects that women were paid less for equivalent roles, an AI system optimising for "successful" hires may learn to disadvantage female candidates. Pay transparency requirements that mandate salary ranges create a paper trail that makes AI-driven pay discrimination harder to hide.
Pay equity audits — which several pay transparency laws require — are increasingly intersecting with AI audit requirements. Organisations that are already running pay equity analyses are better positioned to comply with AI bias testing requirements, because both use similar methodologies: looking for unexplained variance in outcomes across demographic groups.
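The shared methodology can be sketched concretely. A typical pay equity regression controls for legitimate pay factors (tenure, level) and then checks whether a demographic indicator still explains a residual gap — the "unexplained variance" both kinds of audit look for. A minimal sketch using ordinary least squares; the factor names and synthetic data are illustrative, and a real audit would use more controls and significance testing:

```python
import numpy as np

def unexplained_gap(pay, tenure, level, group):
    """Regress pay on legitimate factors plus a demographic indicator.

    Returns the coefficient on the indicator: the pay difference that
    tenure and level cannot explain."""
    X = np.column_stack([np.ones_like(pay), tenure, level, group])
    coefs, *_ = np.linalg.lstsq(X, pay, rcond=None)
    return coefs[-1]

# Synthetic data: pay driven by tenure and level, plus a built-in
# -3,000 gap for one group, so we know what the audit should find
rng = np.random.default_rng(0)
n = 200
tenure = rng.uniform(0, 10, n)
level = rng.integers(1, 5, n).astype(float)
group = rng.integers(0, 2, n).astype(float)  # 1 = member of audited group
pay = (50_000 + 2_000 * tenure + 8_000 * level
       - 3_000 * group + rng.normal(0, 1_000, n))

gap = unexplained_gap(pay, tenure, level, group)  # recovers roughly -3,000
```

The same regression structure works for AI bias testing: swap pay for a model's candidate score, and a significant coefficient on the group indicator is the bias signal regulators expect you to catch before deployment.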
What HR teams need to do now
- Audit your ATS for AI features: Most modern ATS platforms have AI-powered resume screening, candidate ranking, or "fit score" features. Know which features are active and which use AI.
- Update Ontario job posting templates: Add a disclosure sentence if you post roles accessible to Ontario candidates and use any AI hiring features.
- Review your EU hiring workflows: Any AI-assisted hiring for EU-based roles may now be subject to EU AI Act transparency and oversight requirements.
- Talk to your ATS vendor: Ask directly whether their AI features are covered by any relevant certifications or have been tested for bias. This will become standard vendor due diligence.
The trajectory
AI transparency in hiring is following the same trajectory as pay transparency: it starts in progressive jurisdictions (Ontario, the EU), creates cross-border compliance pressure for multinationals, and gradually builds toward a federal US standard. The companies building robust AI governance frameworks now will be ahead of the curve when the regulations arrive. The companies still treating "AI in hiring" as an IT issue rather than a compliance issue will be scrambling.