If Knack's CEO admits that people make many decisions that are biased, what prevents the people at Knack from being biased in the creation, development and implementation of their games? Further, what prevents employers using Knack from being held liable for the biases of those games? The answer to both questions: Nothing.
While many companies foster an illusion that scoring/classification is an area of absolute algorithmic rule—that decisions are neutral, organic, and even automatically rendered without human intervention—reality is a far messier mix of technical and human curating. Both the datasets and the algorithms used to analyze them reflect human choices about, among other things, which connections to draw, which inferences to make, and how to interpret results.
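To make the point concrete, consider a deliberately simplified and entirely hypothetical scoring sketch (not Knack's actual system). The same applicant data yields different "hire" scores depending solely on which features the designer chooses to include and how they are weighted; the choice itself is a human judgment, and one of these choices quietly imports a demographic proxy.

```python
# Hypothetical illustration only: two designers, same applicant data,
# different feature choices, different outcomes.

applicant = {"reaction_ms": 420, "risk_taken": 0.7, "zip_code_income": 31000}

def score_v1(a):
    # Designer A scores only on in-game behaviour.
    return 0.6 * (500 - a["reaction_ms"]) / 500 + 0.4 * a["risk_taken"]

def score_v2(a):
    # Designer B also weights neighbourhood income -- a proxy variable
    # that can correlate with race and other protected characteristics.
    return (0.5 * (500 - a["reaction_ms"]) / 500
            + 0.2 * a["risk_taken"]
            + 0.3 * (a["zip_code_income"] / 100000))

print(round(score_v1(applicant), 3))  # 0.376
print(round(score_v2(applicant), 3))  # 0.313
```

Neither score is more "objective" than the other; each simply encodes a different set of human judgments about what matters.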
The fact sheet accompanying the White House report warns:
As more decisions about our commercial and personal lives are determined by algorithms and automated processes, we must pay careful attention that big data does not systematically disadvantage certain groups, whether inadvertently or intentionally. We must prevent new modes of discrimination that some uses of big data may enable, particularly with regard to longstanding civil rights protections in housing, employment, and credit.

Some of the most profound challenges revealed by the White House report concern how data analytics may lead to disparate, inequitable treatment, particularly of disadvantaged groups, or may create a decision-making environment so opaque that individual autonomy is lost in an impenetrable set of algorithms. Please see Knack Testing Illegal Under ADA?
Workforce assessment systems like Knack's games, designed in part to mitigate risks for employers, are becoming sources of material risk to both job applicants and employers. The systems create the perception of stability through probabilistic reasoning and the experience of accuracy, reliability, and comprehensiveness through automation and presentation. But in so doing, these systems draw attention away from their own uncertainty and partiality.
While Knack's approach may help reduce an employer's hiring costs and may reduce the impact of overtly biased or discriminatory behavior, the inclusion of one or more potentially "defective components" in the assessments creates a shared exposure: a finding that a Knack assessment used by one employer is biased or discriminatory puts every employer that uses the same assessment at risk. Please see When the First Domino Falls: Consequences to Employers of Embracing Workforce Assessment Solutions.
These "defective components" in assessments may be either design defects (e.g., the adoption and use of a flawed personality model) or manufacturing defects (e.g., coding errors in the assessment software). The latter is analogous to the coding error at 23andMe that caused notices to go out to some customers informing them, incorrectly, that they had a chronic and life-shortening condition. Please see On Not Dying Young: Fatal Illness or Flawed Algorithm?
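A manufacturing defect of this kind can be as small as a single character. The following is a purely hypothetical sketch (not Knack's or 23andMe's actual code) showing how one inverted comparison operator silently flips a screening decision, so that qualified applicants are screened out and unqualified ones pass:

```python
# Hypothetical sketch of a "manufacturing defect" in assessment software:
# a one-character bug (<= instead of >=) inverts every screening outcome.

THRESHOLD = 0.5

def passes_intended(score):
    return score >= THRESHOLD   # intended behaviour: high scorers pass

def passes_buggy(score):
    return score <= THRESHOLD   # defect: high scorers are now rejected

print(passes_intended(0.8))  # True
print(passes_buggy(0.8))     # False: a qualified applicant is screened out
```

The assessment still runs without error, produces plausible-looking output, and may operate for months before anyone notices, which is precisely what makes this class of defect difficult to detect and easy to litigate after the fact.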
Each day an employer continues to use the Knack assessment, there are more potential plaintiffs with claims against that employer. Labor and employment laws like Title VII and the ADA permit an employer to use a third party like Knack to undertake the assessment of job applicants. The use of a third party, however, does not insulate an employer from claims arising from use of the assessment. Under those laws, an employer is responsible (and liable) for any failure on the part of an assessment or assessment provider to comply with their provisions.
No Silver Bullet
Even as concerns about scoring systems grow, the human element in them is diminishing. Although software engineers initially identify the correlations and inferences programmed into algorithms, machine learning, predictive analytics, and big data promise to eliminate the human "middleman" at some point in the process.
As Hector J. Levesque, a professor at the University of Toronto and a founding member of the American Association for Artificial Intelligence, wrote:
…the tendency to believe in a silver bullet for AI, coupled with the belief that previous beliefs about silver bullets were hopelessly naïve. We see this in the fads and fashions of AI research over the years: first, automated theorem proving is going to solve it all; then, the methods appear too weak, and we favour expert systems; then the programs are not situated enough, and we move to behaviour-based robotics; then we come to believe that learning from big data is the answer; and on it goes.
Similarly, for the past fifteen years employment assessment companies like Knack have marketed the benefits of science, precision, and data under the guise of neural networks, artificial intelligence, big data, and deep learning. Yet what has changed? Employee engagement levels have hardly budged, and employee turnover remains a continuing and expensive challenge for employers. Please see Gut Check: How Intelligent is Artificial Intelligence?