
Helen Ainsley

March 26th, 2021

Recruiting in a digital age: pitfalls and strategies for automated hiring


Written by: Stephanie Sheir

The proliferation of AI has streamlined decision-making in many industries, but it has also incurred unique costs, creating new opportunities and new risks for businesses. The use of large datasets and algorithms to filter job applicants is an exemplar of this phenomenon. These algorithms help recruiters sort through candidate pools and purport to remove human bias from the selection process. Hiring algorithms can certainly improve recruitment, but their emphasis on statistical correlations may also entrench biases both new and old. The rational, quantitative quality of algorithms can thus produce new efficiencies alongside new risks of insidious bias and discrimination in the labour market. Some of the problems examined here are technologised reiterations of familiar human biases, while others are fundamentally new ills created by algorithmic means. A few regulatory measures to mitigate these risks, namely data protection and transparency requirements, will also be examined.

Automated hiring processes come with clear benefits. Algorithms designed by third parties make the hiring process more efficient, typically on the demand side. Tools include recommender engines that take into account both candidate and recruiter preferences, as well as the filtering of candidates through automated testing and the algorithmic evaluation of CVs, cover letters and video interviews (Sánchez-Monedero et al., 2020). The evaluation metrics are typically not new and are based on traditional assessments of would-be employees, but they are applied automatically, without human input until later stages of the recruiting process. Automation thus allows recruiters to scan large pools of candidates far more quickly (Bîgu and Cernea, 2019).

Algorithms also contribute much more than scale. In a process often referred to as Big Data analytics, the most important function of algorithms is not simply one of scale but one of prediction. Algorithms can be trained on datasets of existing employees to discover statistical correlations that predict success; crucially, these correlations emerge from the analysis of datasets far too large for any human to review. The discovered correlations can then be applied to the data of potential candidates to predict which individuals are most likely to succeed, based on the characteristics of top employees (Costa et al., 2020). This process allows recruiters to uncover previously unknown, or unknowable, determinants of success and to operationalise these patterns as requirements for new hires. Such recruiting algorithms can make the hiring process not only faster and less laborious, but also more effective in its search for the best candidates, grounded in the records of proven employees.
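As a rough illustration of this prediction step, the sketch below shows what such a pipeline might look like. It is a hypothetical example, not any vendor's actual system: the file names, feature columns and the "top performer" label are all assumptions made for the sake of the example.

```python
# Hypothetical predictive-hiring sketch: learn correlations from current
# employees, then score new applicants with the same model.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Historical employee records with an (assumed) binary "top_performer" label.
employees = pd.read_csv("employees.csv")
features = ["years_experience", "test_score", "referral"]  # illustrative columns
X, y = employees[features], employees["top_performer"]

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation accuracy:", model.score(X_val, y_val))

# Apply the learned correlations to new applicants and rank them.
applicants = pd.read_csv("applicants.csv")
applicants["predicted_success"] = model.predict_proba(applicants[features])[:, 1]
shortlist = applicants.sort_values("predicted_success", ascending=False).head(50)
print(shortlist)
```

Whatever the particular model, the logic is the same: whatever correlated with success among past employees becomes, implicitly, a requirement for future ones.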

Despite these potential benefits, hiring algorithms can promote biased recruiting even as their quantification of human success purports to be objective and debiasing. The first problem is proxy discrimination. Direct discrimination based on employer prejudice is a longstanding human phenomenon that is outlawed under a human-rights framework. Algorithmic discrimination is different because it does not stem from employer prejudice towards a particular group; human interference, at least in the early stages of the recruitment process, is nonexistent. Proxy discrimination occurs when protected characteristics such as race and gender, which cannot lawfully be used in selection decisions, are inferred from other pieces of data collected about the candidates (Williams et al., 2018). Even if protected data is never collected, proxy data such as postcodes, which can predict race given ethnic geographic clustering, implicitly encodes protected variables into the dataset. When an algorithm recommends candidates on the basis of proxy variables, for example by favouring those who live closer to the workplace, biased outcomes can result even if bias was never intended or explicitly built into the algorithm. Proxy discrimination is a form of statistical discrimination, in which decisions rest on incidental correlations between candidate attributes and predicted success. Of course, employers understandably wish to maximise success and will search for such correlations to make better hiring decisions.

However, the danger of using algorithms to automate this process is that discrimination can be perpetuated unknowingly, if the training datasets harbour hidden correlations that lead the algorithm to conclude, erroneously, that proxies for protected characteristics should influence the hiring process (Costa et al., 2020). For instance, a CV-screening algorithm developed by Amazon rated non-white and female candidates as less likely to succeed, not because the designers programmed prejudice into its code, but because the existing employee pool the algorithm analysed to define success in Amazon's workplace consisted largely of men of European descent (Goodman, 2018). The algorithm concluded that these characteristics correlated with success and made recommendations on that basis, resulting in a biased scoring system that discriminated against minorities.
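The mechanism at work in the Amazon case can be reproduced in miniature. The toy simulation below uses entirely synthetic data and made-up variable names: the protected attribute is never shown to the model, yet because past hiring decisions were biased and a proxy variable (here, a postcode indicator) correlates with group membership, the trained model still selects the two groups at very different rates.

```python
# Toy simulation of proxy discrimination: the protected attribute is hidden
# from the model, but a correlated proxy (postcode) leaks it anyway.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                                    # protected attribute (never a feature)
postcode = ((group + rng.normal(0, 0.6, n)) > 0.5).astype(int)   # proxy correlated with group
skill = rng.normal(0, 1, n)                                      # genuinely job-relevant feature

# Historical hiring decisions were biased against group 1.
hired = ((skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0).astype(int)

X = pd.DataFrame({"postcode": postcode, "skill": skill})         # protected attribute excluded
model = LogisticRegression().fit(X, hired)
selected = model.predict(X)

for g in (0, 1):
    print(f"selection rate for group {g}: {selected[group == g].mean():.1%}")
```

Dropping the protected attribute is therefore not enough: the bias baked into the historical labels resurfaces through the proxy.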

A second problem of algorithmic discrimination is the lack of transparency (Costa et al., 2020). Hiring discrimination is already notoriously difficult to prove, with the burden of proof resting on the claimant to demonstrate differential treatment on the basis of a protected characteristic. The black-boxed, proprietary nature of algorithms adds further obscurity, both through trade secrecy and, at times, through ignorance of the algorithm's exact logic even among its developers. It is thus doubly difficult for individuals to challenge algorithmic hiring decisions. And while workers in other sectors can defend their labour rights through unionisation, litigation and regulation, workers in more precarious sectors have less bargaining power, less capacity for collective action and less knowledge of their labour rights.

It should be understood that this discrimination is fundamentally unintentional (Bîgu and Cernea, 2019). Algorithms, at least when they are not deliberately used to exclude groups, should not be thought of as machines that simply carry forward or exacerbate existing human prejudices. Rather, their reliance on statistical correlations can produce discriminatory outcomes against groups we have historically sought to protect from majority privilege. The presence of unfair outcomes is the relevant factor, not active discrimination or prejudice. With this in mind, algorithmic discrimination is not pernicious in the same way that human discrimination is. A return to purely human evaluation of candidates would likely worsen the problem of bias, as direct discrimination and implicit biases, triggered by anything from a candidate's race to their tone of voice, would once again be the sole mechanism of selection.

It should be accepted that some level of hiring bias may always exist, but the use of algorithms can, as far as possible, rationalise the process, removing human biases and mitigating computational ones (Raghavan et al., 2019). It is better for biases to be rendered in a quantitative form, where auditing and oversight can more easily take place, than to remain within the even darker black box of the human brain. Subjective candidate selection, unrepresentative samples of past employees and ill-defined notions of employee success are problems at least equally present in human hiring processes, but unlike algorithms, humans cannot easily surrender their internal programming for external oversight and correction.
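This is what such oversight can look like in practice. The minimal audit sketch below assumes the auditor has access to the decisions and to self-reported demographics; the column names are illustrative. It compares group-level selection rates and flags a disparate-impact ratio below the commonly used "four-fifths" rule of thumb.

```python
# Minimal bias-audit sketch: compare selection rates across groups and flag
# a disparate-impact ratio below the common four-fifths threshold.
import pandas as pd

def disparate_impact(decisions: pd.DataFrame, group_col: str, selected_col: str) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    rates = decisions.groupby(group_col)[selected_col].mean()
    return rates.min() / rates.max()

# Assumed columns: "group" (demographic label) and "selected" (0/1 outcome).
audit = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [ 1,   1,   1,   0,   1,   0,   0,   0 ],
})
ratio = disparate_impact(audit, "group", "selected")
print(f"disparate-impact ratio: {ratio:.2f}"
      + (" - below 0.8, warrants investigation" if ratio < 0.8 else ""))
```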

This possibility for algorithmic correction underpins many regulatory solutions, two of the most prominent being data protection legislation and corporate transparency measures. The GDPR provides robust protections for those subject to automated data processing, with great potential for redress (Sánchez-Monedero et al., 2020). For instance, under Article 22, candidates could request meaningful information about the logic involved in algorithmic decisions, which could form the basis for litigation and legal challenge. The need to provide such information would also incentivise companies to develop explainable algorithms whose decision-making logic can be justified in court. Under Article 35, which mandates impact assessments for high-risk data processing, companies could also be compelled to analyse datasets and outputs to root out the causes of biased outcomes, be it biased training data or proxy discrimination based on statistical correlations (Hacker, 2018). Bias detection mechanisms could be mandated as a way of safeguarding data subjects' rights under Article 22. These data protection policies would not only facilitate redress, but would also create the transparency needed to oversee algorithms, discover patterns of discrimination and address them at multiple stages of development, whether by altering training data or by building fairness constraints into the processing stage.
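To give a sense of what "meaningful information about the logic involved" could amount to in practice, the sketch below is a hypothetical example, assuming an interpretable linear model such as the one in the earlier sketch. It reports each feature's signed contribution to a single candidate's score, the kind of per-decision explanation a company could attach to a rejection.

```python
# Sketch of a per-decision explanation for an interpretable linear model:
# each feature's signed contribution (coefficient x value) to one score.
import numpy as np
import pandas as pd

def explain_decision(model, feature_names, candidate_row):
    """Return each feature's contribution to the candidate's score, sorted."""
    contributions = model.coef_[0] * np.asarray(candidate_row, dtype=float)
    return pd.Series(contributions, index=feature_names).sort_values()

# Hypothetical usage with the model and feature list from the earlier sketch:
# print(explain_decision(model, features, applicants.loc[0, features]))
```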

It is clear that the challenges of improving the fairness of algorithms are great. The problems wrought by algorithms, however, are in some sense preferable to the problems of bias that preceded them and that persist in human-only hiring procedures. Discriminatory outcomes produced by algorithms can be scrutinised and corrected in a transparent, logical way, which is more than can be said of discriminatory outcomes produced by human decision-makers. While data protection and transparency are two of the most prominent corrective mechanisms, many other policies, not discussed here, could refine the process further and bring more fairness to recruiting than has existed in years past.

References 

Bîgu, D., Cernea, M.-V., 2019. Algorithmic bias in current hiring practices: an ethical examination.

Costa, A., Cheung, C., Langenkamp, M., 2020. Hiring Fairly in the Age of Algorithms.

Goodman, R., 2018. Why Amazon's Automated Hiring Tool Discriminated Against Women. ACLU.

Hacker, P., 2018. Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Review, 1143–1185.

Raghavan, M., Barocas, S., Kleinberg, J., Levy, K., 2019. Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices. arXiv:1906.09208 [cs]. https://doi.org/10.1145/3351095.3372828 

Sánchez-Monedero, J., Dencik, L., Edwards, L., 2020. What does it mean to “solve” the problem of discrimination in hiring?: social, technical and legal perspectives from the UK on automated hiring systems, in: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* ’20). ACM, Barcelona, Spain, pp. 458–468. https://doi.org/10.1145/3351095.3372849

Williams, B.A., Brooks, C.F., Shmargad, Y., 2018. How Algorithms Discriminate Based on Data They Lack: Challenges, Solutions, and Policy Implications. Journal of Information Policy 8, 78–115. https://doi.org/10.5325/jinfopoli.8.2018.0078

Image from: https://resources.workable.com/tutorial/linkedin-job-posting

 
