In the June 22, 2015 cover story for Time magazine, "Questions to Answer in the Age of Optimized Hiring," author Eliza Gray asks, “Are we truly comfortable with turning hiring–potentially one of the most life-changing experiences that a person can go through–over to the algorithms?” The answer should be no.
When algorithms weigh hundreds of factors across a huge data set, you cannot really know why they reach a particular decision or whether that decision makes sense. As Geoff Nunberg, who teaches at the School of Information at the University of California, Berkeley, put it in an NPR interview, “big data is no more exact a notion than big hair.”
Decisions driven by correlation alone are inherently suspect. Correlation does not equal causation, as Tyler Vigen demonstrates on his website Spurious Correlations. For example:
- There is a greater than 99% correlation (0.992558) between the divorce rate in Maine and the per capita consumption of butter in the U.S. over the years 2000-2009;
- There is a greater than 78% correlation (0.78915) between the number of worldwide non-commercial space launches and the number of sociology doctorates awarded in the U.S. over the years 1997-2009; and,
- There is a greater than 66% correlation (0.666004) between the number of films Nicolas Cage appeared in and the number of people who drowned by falling into a swimming pool over the years 1999-2009.
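Vigen's coefficients are ordinary Pearson correlations, and the reason such absurd pairings score so highly is simple: any two series that happen to share a trend will correlate strongly. The sketch below makes the point with made-up numbers (these are illustrative values only, not Vigen's actual data): two invented series that both drift downward over a decade produce a correlation near 0.99.

```python
# Illustrative only: two invented series that merely share a downward
# trend, standing in for Vigen's divorce-rate/butter example.
# These are NOT the actual figures from Spurious Correlations.
divorce_rate = [5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.1, 4.2, 4.1]  # per 1,000 people
butter_lbs   = [8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7]  # lbs per capita

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(divorce_rate, butter_lbs)
print(f"correlation: {r:.3f}")  # well above 0.9, despite no causal link
```

Nothing connects the two quantities; the shared trend alone does all the work, which is exactly why a high correlation by itself justifies nothing.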
And what of the correlation between personality and job performance? In a 2007 article titled “Reconsidering the Use of Personality Tests in Employment Contexts,” Dr. Neal Schmitt, University Distinguished Professor at Michigan State University, wrote:
[A 1965 research paper found that] the average validity of personality tests was 0.09. Twenty-five years later, Barrick and Mount (1991) published a paper in which the best validity they could get for the Big Five [personality model] was 0.13. They looked at the same research. Why are we now suddenly looking at personality as a valid predictor of job performance when the validities still haven’t changed and are still close to zero?
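The validity figures Schmitt cites are correlation coefficients, and a standard statistical reading (not a figure from the article itself) is that squaring a correlation gives the share of variance in the outcome the predictor accounts for. A quick calculation shows just how little 0.09 and 0.13 amount to:

```python
# The share of variance in job performance "explained" by a predictor
# is r squared (the coefficient of determination).
for r in (0.09, 0.13):
    print(f"validity r = {r}: explains {r**2:.1%} of performance variance")
```

Even the better figure, 0.13, accounts for well under 2% of the variance in job performance, which is what Schmitt means by validities "still close to zero."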
If personality assessments are designed to find those employees with the best fit for the company culture, shouldn't the rising use of those assessments by employers over the past 10-15 years have resulted in a concomitant rise in employee engagement?
According to a 2014 Gallup poll, 51% of employees in the U.S. were "not engaged" in their jobs and 17.5% were "actively disengaged." These percentages have changed little over the fifteen years Gallup has been polling.
Gallup’s research shows that employee engagement is strongly connected to business outcomes essential to an organization’s financial success, including productivity, profitability, and customer satisfaction. Yet the purported benefits of personality assessments have failed to move the needle on employee engagement, suggesting companies have not received the promised productivity and profitability "bumps" from using personality assessments.
For the employer, the risks are at least two-fold. First, people who are “different” will be screened out, denying the employer the benefits that come from having a widely diverse group of employees. As Laszlo Bock, Google's senior vice president of People Operations, states in the article:
“I imagine someone who has Asperger’s or autism, they will test differently on these things. We want people like that at the company because we want people of all kinds, but they’ll get screened out by this kind of thing.”
The second risk for employers is the liability they face under laws like the Americans with Disabilities Act for using personality tests that screen out persons with disabilities, whether Asperger’s, autism, bipolar disorder, or other mental health challenges. The Equal Employment Opportunity Commission (EEOC) currently has two systemic investigations ongoing against employers that used personality tests in their pre-employment screening processes.
The 2014 White House report, “Big Data: Seizing Opportunities, Preserving Values," found that, "while big data can be used for great social good, it can also be used in ways that perpetrate social harms or render outcomes that have inequitable impacts, even when discrimination is not intended." An accompanying fact sheet warns:
As more decisions about our commercial and personal lives are determined by algorithms and automated processes, we must pay careful attention that big data does not systematically disadvantage certain groups, whether inadvertently or intentionally. We must prevent new modes of discrimination that some uses of big data may enable, particularly with regard to longstanding civil rights protections in housing, employment, and credit.
Just as neighborhoods can serve as a proxy for racial or ethnic identity, there are new worries that big data technologies (personality assessments and algorithmic decision-making) could be used to “digitally redline” unwanted groups, whether as customers, employees, tenants, or recipients of credit. That is why we should not be comfortable with turning hiring over to the algorithms.