Built-In Biases: Automated Hiring Processes Are Not Yet Gender-Neutral

By Victoria Trent

As the demand for talented employees increases, so does the pressure on recruiting teams. Talent acquisition professionals are continuously searching for better hiring techniques, and new automation technologies offer more efficient ways to modernize and streamline recruiting efforts, freeing recruiters to spend more time on creative and strategic work. Recent trends among employers show that the most common use of data analytics and artificial intelligence (“AI”) is in hiring and recruiting. These programs can be applied at every stage of the hiring process, from candidate sourcing and engagement, through scheduling and interviewing, to final employee selection.

Talent acquisition leaders say that the most difficult part of recruitment is identifying the right candidates within a large applicant pool. Today, machine-learning algorithms can prescreen hundreds of resumes to sort out the best candidates, which is exactly what Amazon attempted to do. The company’s goal was to develop AI that could rapidly survey the web and spot candidates worth recruiting; given a pool of 100 resumes, for example, the tool was meant to surface the top five candidates for hiring. This experimental hiring tool used AI to score potential employees on a scale of one to five stars.

Though AI can help streamline processes, it falls short as a substitute for certain human capabilities. Automation works best when used to enhance the work of people, not to replace it. Problems can therefore arise when an organization relies on advanced data techniques to grow its workforce.

After experimental use, Amazon quickly realized its new system was not rating candidates for software developer jobs and other technical roles in a gender-neutral way. The computer models, developed by a team in Edinburgh, were trained to vet applicants by observing patterns in resumes submitted to the company over a ten-year period. Because most of those resumes came from men, a reflection of the historically male-dominated tech industry, the system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” Discrimination was not the only issue: the program also allegedly selected applicants who were unqualified for the job.

This behavior is consistent with the kind of response experts anticipate from such algorithms. The chief research analyst for the U.S. Equal Employment Opportunity Commission explains that “[i]f past decisions were discriminatory or otherwise biased, or even just limited to particular types of workers, then the algorithm will recommend replicating that discriminatory or biased behavior.”
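To illustrate the point, consider a minimal, purely hypothetical sketch (synthetic data and a generic scikit-learn classifier, not Amazon’s actual system): a model fit to past hiring decisions that disfavored resumes containing a gender-associated term simply learns to penalize that term.

```python
# Hypothetical sketch: a classifier trained on historically skewed hiring
# outcomes learns to penalize a feature that acts as a proxy for gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 0: candidate skill (what we would *want* the model to use).
skill = rng.normal(size=n)
# Feature 1: 1 if the resume contains a term like "women's", else 0.
womens_term = rng.integers(0, 2, size=n)

# Historical labels: past decisions tracked skill, but also (unfairly)
# passed over many candidates whose resumes contained the proxy term.
hired = ((skill - 1.5 * womens_term + rng.normal(scale=0.5, size=n)) > 0).astype(int)

X = np.column_stack([skill, womens_term])
model = LogisticRegression().fit(X, hired)

print("weight on skill:      %+.2f" % model.coef_[0][0])
print("weight on 'women's':  %+.2f" % model.coef_[0][1])
# The second weight comes out strongly negative: the model reproduces the
# historical bias rather than correcting it.
```

The particular model is beside the point; any learner fit to biased outcomes will treat the proxy term as a negative signal unless the bias is explicitly identified and addressed.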

Organizations such as the American Civil Liberties Union have turned their attention to the issue and are pushing for algorithmic fairness. Critics of AI acknowledge that it could be exceedingly challenging to sue an employer for discrimination over automated hiring, given the nature of the technology; candidates might not even know that an automated hiring program was being used.

Though this tool was never actually used by Amazon recruiters to evaluate candidates, these findings illustrate the challenges of using machine learning to automate the hiring process. This “case study” serves as a cautionary example for other large companies, such as Hilton Worldwide Holdings and Goldman Sachs Group, that are looking to implement automated hiring programs. While AI has come a long way since its inception, there is still more work to be done to ensure the fairness of such algorithms and to prevent employment discrimination moving forward.

Student Bio: Victoria Trent is a second-year student at Suffolk University Law School and is a staff member of the Journal of High Technology Law. She holds a Bachelor of Arts in

Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.

 
