Technology has transformed the way we apply for jobs, and not always for the better. A new Australian study from the University of Melbourne has raised red flags about the growing use of AI in recruitment, particularly when it comes to fairness, bias, and accessibility.
As AI-driven interviews become more common in corporate hiring, this research highlights real and growing risks for job seekers, especially candidates with accents or speech-related disabilities and those from minority backgrounds.
The findings are clear: the systems behind AI recruitment tools are trained on data that doesn’t reflect Australia’s full diversity. One AI vendor cited in the research revealed that less than 6% of its training data came from Australia or New Zealand.
This data gap has real-world consequences.
The error rate for transcribing speech from U.S.-based English speakers sits at under 10%. But for non-native English speakers with international accents, particularly speakers from China, the error rate jumps to as high as 22%, roughly one word in five transcribed incorrectly. That's a significant discrepancy, and one that directly affects hiring outcomes.
HR professionals interviewed in the study echoed these concerns. Many observed that AI tools often struggled to interpret speech from candidates with accents or disabilities. Despite vendor reassurances, no supporting evidence was provided to show these systems were truly “accent inclusive.”
Job candidates are being judged by systems that can mishear, misinterpret, and misjudge their responses—without explanation or recourse. Even recruiters are often in the dark, unable to explain why an AI tool rated one candidate higher than another. The process is opaque, and that lack of transparency raises serious ethical and legal concerns.
And while no cases of AI discrimination have yet reached the courts in Australia, that may only be a matter of time. AI-driven recruitment decisions could expose employers to breaches of discrimination laws—especially if no human oversight is in place.
For job seekers, it’s worth asking early in the recruitment process: Is AI part of the screening or interview? If you’re rejected and suspect an error or bias, request a human review and document your experience. This is especially important if you experience technical issues or inconsistencies during the interview.
For employers, this is a timely reminder to audit your recruitment tools. Vetting AI systems for local relevance, inclusivity, and accessibility should be standard practice—not a footnote. Employers should also ensure human review is part of any automated decision process, and track outcomes to identify patterns of exclusion or bias.
The promise of AI should be about expanding opportunity, not reinforcing systemic barriers. Fairness, transparency, and cultural awareness must remain at the heart of how we assess talent—especially in a landscape shaped by automation.
As the job market evolves, so must our approach to inclusion. Let’s ensure that progress doesn’t come at the expense of equity.