Monday, August 25, 2014

Sound and Fury, Signifying Nothing

Incorporating elements of gamification, big data, machine learning, and predictive human analytics, Knack is a veritable buzzword oasis. According to Knack, their games are designed to test cognitive skills that employers might want, drawing on some of the latest scientific research. These range from pattern recognition and emotional intelligence to risk appetite and adaptability to changing situations.

John Funge, Knack's CTO, states that "we have used our games to infer cognitive ability, conscientiousness, leadership potential, creativity as well as predict how people would perform as surgeons, management consultants, and innovators." In an Economist article, Chris Chabris, a Knack executive, states that games have huge advantages over traditional recruitment tools, such as personality tests, which can easily be outwitted by an astute candidate. Many more things can be tested quickly, he says, and performance on Knack's games cannot be faked.

Gary Halfteck, Knack's founder and CEO, says playing a video game can be a better representation of who you are and your skill sets than an employer might get in a one-on-one conversation. "As people, we make many decisions that are biased, whether it's consciously or subconsciously, and we have no good tools to assess and evaluate, let alone predict, what one's potential is," he says.

If Knack's CEO admits that people make many decisions that are biased, what prevents the people at Knack from being biased in the creation, development and implementation of their games? Further, what prevents employers using Knack from being held liable for the biases of those games? The answer to both questions: Nothing.

Algorithmic Illusion

While many companies foster the illusion that scoring and classification are areas of absolute algorithmic rule, where decisions are neutral, organic, and even automatically rendered without human intervention, reality is a far messier mix of technical and human curating. Both the datasets and the algorithms used to analyze them reflect human choices about, among other things, which connections to draw, which inferences to make, and how to interpret the results.
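
To make that concrete, here is a deliberately simplified, hypothetical sketch of a game-based scoring pipeline. None of the signals, weights, or cutoffs below come from Knack; each one simply illustrates the kind of human judgment that hides inside an "automatic" score:

```python
# A hypothetical scoring pipeline (not Knack's actual code). Every
# constant below is a human judgment dressed up as computation.

def score_candidate(events):
    """Turn raw game telemetry into a single 'employability' score."""
    # Choice 1: which behaviors count as signal at all.
    avg_reaction_ms = sum(e["ms"] for e in events) / len(events)
    risky_moves = sum(1 for e in events if e["risky"])

    # Choice 2: how behavior is read as a trait (the inference step).
    adaptability = 1000.0 / avg_reaction_ms    # faster == more adaptable? a choice.
    risk_appetite = risky_moves / len(events)  # more risk == better? a choice.

    # Choice 3: how traits are weighted against one another.
    return 0.7 * adaptability + 0.3 * risk_appetite

# Choice 4: where the hiring line is drawn.
HIRE_THRESHOLD = 0.9

candidate = [{"ms": 450, "risky": True}, {"ms": 600, "risky": False}]
print("hire" if score_candidate(candidate) >= HIRE_THRESHOLD else "reject")
```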

The recent White House report, "Big Data: Seizing Opportunities, Preserving Values," found that "while big data can be used for great social good, it can also be used in ways that perpetrate social harms or render outcomes that have inequitable impacts, even when discrimination is not intended."

The fact sheet accompanying the White House report warns:
As more decisions about our commercial and personal lives are determined by algorithms and automated processes, we must pay careful attention that big data does not systematically disadvantage certain groups, whether inadvertently or intentionally. We must prevent new modes of discrimination that some uses of big data may enable, particularly with regard to longstanding civil rights protections in housing, employment, and credit.
Some of the most profound challenges revealed by the White House report concern how data analytics may lead to disparate, inequitable treatment, particularly of disadvantaged groups, or create such an opaque decision-making environment that individual autonomy is lost in an impenetrable set of algorithms. Please see Knack Testing Illegal Under ADA?
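
The disparate impact the report warns about can be made concrete. Under the EEOC's Uniform Guidelines, a common first screen is the "four-fifths rule": if one group's selection rate falls below 80% of the highest group's rate, the selection procedure is flagged for scrutiny. A minimal sketch with invented numbers:

```python
# The EEOC's four-fifths rule applied to hypothetical (invented)
# selection data from a game-based assessment.

applicants = {"group_a": 100, "group_b": 100}
selected   = {"group_a": 40,  "group_b": 24}

rates = {g: selected[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / highest
    status = "flag: possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selected {rate:.0%}, {ratio:.0%} of highest rate -> {status}")
```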

Systemic Risk

Workforce assessment systems like Knack's games, designed in part to mitigate risks for employers, are becoming sources of material risk, both to job applicants and employers. The systems create the perception of stability through probabilistic reasoning and the experience of accuracy, reliability, and comprehensiveness through automation and presentation. But in so doing, technology systems draw attention away from uncertainty and partiality.
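
The gap between presented precision and underlying uncertainty is easy to illustrate. In this hypothetical sketch (invented repeat-play scores), the dashboard's crisp number conceals a spread wide enough to swallow the difference between "hire" and "reject":

```python
import statistics

# Invented scores from one candidate playing the same assessment eight times.
scores = [68, 81, 59, 77, 72, 64, 83, 70]

mean = statistics.mean(scores)
spread = 2 * statistics.stdev(scores)  # rough ~95% band if scores were normal

print(f"What the dashboard shows: {mean:.1f}")
print(f"What the data supports:   {mean:.1f} +/- {spread:.1f}")
```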


While Knack's approach may help reduce an employer's hiring costs and may blunt the impact of overtly biased or discriminatory behavior, the inclusion of one or more potentially "defective components" in the assessments means that a finding of bias or discrimination in a Knack assessment used by one employer puts every employer using that assessment at risk. Please see When the First Domino Falls: Consequences to Employers of Embracing Workforce Assessment Solutions.

These "defective components" in assessments may be either design defects (e.g., the adoption and use of certain personality models) or manufacturing defects (e.g., coding errors in the assessment software). The latter is analogous to the coding error at 23andMe that resulted in notices going out to some customers informing them that they had a chronic, life-shortening condition when they did not. Please see On Not Dying Young: Fatal Illness or Flawed Algorithm?
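
A manufacturing defect of this kind need not be exotic. Here is a hypothetical sketch (invented function and cutoff) of a one-character coding error, the sort that could silently invert pass/fail decisions the way a coding error at 23andMe produced false health notices:

```python
PASS_CUTOFF = 70

def passes_assessment(score):
    # BUG: '<' should be '>='; every decision is silently inverted,
    # rejecting strong candidates and passing weak ones.
    return score < PASS_CUTOFF

def passes_assessment_fixed(score):
    return score >= PASS_CUTOFF

for s in (55, 70, 85):
    print(s, "buggy:", passes_assessment(s), "fixed:", passes_assessment_fixed(s))
```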

Each day an employer continues to use the Knack assessment, there are more potential plaintiffs with claims against that employer. Labor and employment laws like Title VII and the ADA permit an employer to use a third party like Knack to undertake the assessment of job applicants. The use of a third party, however, does not insulate an employer from claims arising from use of the assessment. Under those laws, an employer is responsible (and liable) for any failure on the part of an assessment or assessment provider to comply with their provisions.

No Silver Bullet

Even as concerns about scoring systems heighten, their human element is diminishing. Although software engineers initially identify the correlations and inferences programmed into algorithms, machine learning, predictive analytics, and big data promise to eliminate the human "middleman" at some point in the process.

As Hector J. Levesque, a professor at the University of Toronto and a founding member of the American Association for Artificial Intelligence, wrote:

"As a field, I believe that we tend to suffer from what might be called serial silver bulletism, defined as follows:
the tendency to believe in a silver bullet for AI, coupled with the belief that previous beliefs about silver bullets were hopelessly naïve.
We see this in the fads and fashions of AI research over the years: first, automated theorem proving is going to solve it all; then, the methods appear too weak, and we favour expert systems; then the programs are not situated enough, and we move to behaviour-based robotics; then we come to believe that learning from big data is the answer; and on it goes."

Similarly, for the past fifteen years, employment assessment companies like Knack have marketed the benefits of science, precision, and data under the successive banners of neural networks, artificial intelligence, big data, and deep learning. Yet what has changed? Employee engagement levels have hardly budged, and employee turnover remains a continuing and expensive challenge for employers. Please see Gut Check: How Intelligent is Artificial Intelligence?

