In: Psychology
More and more companies are turning to computerized systems to screen job applications and make hiring decisions, especially for lower-wage, service-sector jobs. The algorithms these systems use to evaluate job candidates may be preventing qualified applicants from obtaining these jobs. For example, some of these algorithms have determined that, statistically, people with shorter commutes are more likely to stay in a job longer than those with longer commutes, those with less reliable transportation, or those who haven’t been at their address for very long. If asked, “How long is your commute?” applicants with long commuting times will be scored lower for the job. Although such considerations may be statistically accurate, is it fair to screen job applicants this way? Why?
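The screening rule described above can be sketched as a simple scoring function. Everything here is a hypothetical illustration (the field names, thresholds, and penalty values are invented, since actual vendor algorithms are proprietary), but it shows how a statistically grounded rule can translate directly into lower scores for applicants with long commutes or recent moves:

```python
# Hypothetical sketch of a commute-based retention score.
# All thresholds and penalties are invented for illustration only;
# real applicant-screening systems are proprietary and more complex.

def commute_score(commute_minutes, years_at_address):
    """Return a retention score for an applicant (higher = ranked better)."""
    score = 100
    if commute_minutes > 45:        # long commute: statistically higher turnover
        score -= 30
    elif commute_minutes > 25:      # moderate commute: smaller penalty
        score -= 10
    if years_at_address < 1:        # recent move treated as a sign of instability
        score -= 15
    return score

applicants = [
    {"name": "A", "commute_minutes": 15, "years_at_address": 5},
    {"name": "B", "commute_minutes": 60, "years_at_address": 0.5},
]
# Rank applicants purely by the rule, regardless of skill or experience.
ranked = sorted(
    applicants,
    key=lambda a: commute_score(a["commute_minutes"], a["years_at_address"]),
    reverse=True,
)
```

Note that nothing in this sketch measures talent or qualifications: applicant B is ranked lower solely because of commute time and address history, which is exactly the fairness concern the essay raises.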
Screening candidates with algorithms may be statistically accurate, but it is not ethical: deserving job seekers can lose out on positions because of preset algorithms, despite being talented and educated.
Repeated rejections of this kind often cause job seekers to lose motivation, and some give up on applying altogether. There must be ethical standards that artificial intelligence and algorithms are required to follow. Moreover, the engineers behind such algorithms may make errors or oversights when gathering their input data, leading to unintentional bias. This continual screening out can leave candidates feeling disempowered.
Conversely, algorithms might also screen out candidates who are overqualified but genuinely interested in the role. There should therefore be more discussion of algorithmic transparency and accountability to address discrimination in recruitment.
A balance must be found between AI and human judgment in recruitment in order to encourage better candidate fits, fairer screening, and greater overall efficiency.