“We shape our tools and thereafter our tools shape us.” – Marshall McLuhan
For job seekers today, AI has become both gatekeeper and judge: screening CVs, matching candidates, and shortlisting profiles.
The promise? Faster, fairer hiring.
But something else happened: Algorithms started repeating the past, including its biases. Talented candidates were rejected, often silently. Recruiters missed out on great fits they never saw. And companies risked hiring based on flawed data, not potential.
With good intentions, we built the system. Now the EU has stepped in to ask: who’s watching the algorithm?
When AI Starts Reinventing Bias
AI is trained on existing data. That’s the problem.
If past hiring overlooked certain genders or ethnicities, or penalized CVs with job gaps, the algorithm learns to repeat exactly that.
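To make that concrete, here’s a minimal sketch (synthetic data and hypothetical feature names, assuming NumPy and scikit-learn) of how a model trained on biased historical decisions learns the bias itself:

```python
# A toy demonstration: two groups with identical skill distributions,
# but historical hiring favored group 0. A model trained on those
# decisions learns the group preference, not merit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

group = rng.integers(0, 2, n)        # stand-in for any protected attribute
skill = rng.normal(0, 1, n)          # true qualification, same for both groups

# Historical labels: a built-in bonus for group 0, regardless of skill.
hired = (skill + 1.5 * (group == 0) + rng.normal(0, 0.5, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
print(model.coef_)  # the group coefficient comes out strongly negative:
                    # the model has absorbed the past preference
```

And dropping the group column doesn’t fix it: names, clubs, and keywords act as proxies, which is exactly what the Amazon case below illustrates.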
In one documented case, Amazon’s experimental AI hiring tool downgraded CVs containing the word “women’s” (as in “women’s chess club captain”), because past hires skewed male.
And it isn’t an isolated case:
- Research from the University of Washington found that AI resume screeners favored names associated with white candidates 85% of the time, while names associated with Black candidates were preferred just 9% of the time.
- Bloomberg tested ChatGPT with fictitious CVs: Asian women were ranked top 17.2% of the time, versus 7.6% for Black men, whereas parity across the eight demographic groups tested would be 12.5% each.
- A Harvard Business School study reported that 88% of employers acknowledged that their AI systems wrongly reject qualified candidates.
What else is happening?
- Non-European names trigger rejections.
- Career breaks (parenting, illness) are flagged as “inconsistent”.
- Passive candidates are missed because they didn’t use the “right” buzzwords, as the sketch below shows.
That’s not a flaw. That’s a pattern.
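The buzzword problem is the easiest to reproduce. Here’s a minimal sketch (hypothetical job keywords and CV lines) of a naive keyword screener silently filtering out equivalent experience described in different words:

```python
# A toy keyword screener: counts exact buzzword matches, so the same
# experience phrased differently scores zero and is never seen.
REQUIRED_KEYWORDS = {"kubernetes", "ci/cd", "microservices"}

def keyword_score(cv_text: str) -> int:
    return len(REQUIRED_KEYWORDS & set(cv_text.lower().split()))

cv_a = "Built microservices with Kubernetes and CI/CD pipelines"
cv_b = "Ran container orchestration and automated build-test-deploy at scale"

print(keyword_score(cv_a))  # 3 -> shortlisted
print(keyword_score(cv_b))  # 0 -> silently filtered out
```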
The EU AI Act: Accountability Over Automation
The EU AI Act (in force since 1 August 2024) wasn’t just about ChatGPT.
It was about protecting people from silent bias in critical life decisions, like hiring, healthcare, or credit scoring.
AI tools used for hiring now face new rules:
✓ They must be explainable: candidates deserve to know why they were rejected.
✓ Companies must keep human oversight: no silent, fully automated rejections.
✓ Systems must avoid reinforcing bias based on age, gender, ethnicity, or gaps in CVs.
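What might that look like in code? A minimal sketch (hypothetical names, not a reference implementation of the Act) of a screening step that records its reasons and routes every negative outcome to a recruiter instead of auto-rejecting:

```python
# The model may rank and recommend, but it never rejects on its own:
# low scores are routed to a human, and every decision carries reasons.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    candidate_id: str
    score: float
    reasons: list[str]        # explainability: why this score
    needs_human_review: bool  # oversight: no silent automated rejection

def screen(candidate_id: str, score: float, reasons: list[str]) -> ScreeningResult:
    return ScreeningResult(
        candidate_id=candidate_id,
        score=score,
        reasons=reasons,
        needs_human_review=(score < 0.5),  # below threshold -> a human decides
    )

print(screen("c-123", 0.31, ["no keyword match for 'Kubernetes'"]))
```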
Build Smart. Build Human.
The future of hiring isn’t anti-AI; it’s pro-accountability.
It’s about transparent tools, intentional design, and ethical oversight.
At Linkrs, we believe technology should empower human judgment, not override it.
Curious how to build fair, effective hiring strategies in the AI era?
Read more on our blog, or reach out; we’re here to help you navigate the shift.