Wednesday, September 14, 2016

Need to Measure and Assess Artificial Intelligence and Decision-Support Systems

Set out below are extracts from Artificial Intelligence is Hard to See, a post by Kate Crawford and Meredith Whittaker. AI refers to artificial intelligence and boldface text is taken from the article.
AI and decision-support systems are reaching into everyday life: determining who will be on a predictive policing ‘heat list’, who will be hired or promoted, which students will be recruited to universities, or seeking to predict at birth who will become a criminal by the age of 18. So the stakes are high. 
[T]here are no agreed-upon methods to assess the human effects and longitudinal impacts of AI as it is applied across social systems. This knowledge gap is widening as the use of AI is proliferating, which heightens the risk of serious unintended consequences. 
The core issue here isn’t that AI is worse than the existing human-led processes that serve to make predictions and assign rankings. Indeed, there’s much hope that AI can be used to provide more objective assessments than humans, reducing bias and leading to better outcomes. The key concern is that AI systems are being integrated into key social institutions, even though their accuracy, and their social and economic effects, have not been rigorously studied or validated. 
There needs to be a strong research field that measures and assesses the social and economic effects of current AI systems, in order to strengthen AI’s positive impacts and mitigate its risks. By measuring the impacts of these technologies, we can strengthen the design and development of AI, assist public and private actors in ensuring their systems are reliable and accountable, and reduce the possibility of errors. By building an empirical understanding of how AI functions on the ground, we can establish evidence-led models for responsible and ethical deployment, and ensure the healthy growth of the AI field. 
If the social impacts of artificial intelligence are hard to see, it is critical to find rigorous ways to make them more visible and accountable. We need new tools to allow us to know how and when automated decisions are materially affecting our lives — and, if necessary, to contest them.

Friday, July 22, 2016

EEOC, Systemic Investigations, and Assessments

The Equal Employment Opportunity Commission (EEOC) issued a review of its systemic program titled "Advancing Opportunity" in July 2016. The review marks the 10th anniversary of EEOC's 2006 Systemic Task Force Report.

According to a press release accompanying release of the review:
"EEOC has transformed its systemic program in the past decade by investing in staff, training, and technology to build systemic expertise in every EEOC district," reflected EEOC Chair Jenny R. Yang. These investments have produced a 250 percent increase in systemic investigations in the past five years. 
Highlighting EEOC's significant achievements in resolving systemic cases, the review reports a 94% success rate in systemic lawsuits. In addition, EEOC tripled the amount of monetary relief recovered for victims in the past five fiscal years from 2011 through 2015, compared to the monetary relief recovered in the first five years after the Systemic Task Force Report of 2006.  EEOC also tripled the rate of successful voluntary conciliations of systemic investigations from 21% in fiscal year 2007 to 64% in fiscal year 2015.  
EEOC's Successes in Systemic Litigation
 Regarding pre-employment assessments, the press release states:
EEOC's systemic investigations have also led to changes in hiring assessment screens that discriminated based on race, sex and disability. In a public conciliation with Target Corporation, EEOC found that four hiring assessments formerly used by the retailer were not job-related and consistent with business necessity as required by Title VII and the ADA. Target agreed to pay $2.8 million to resolve a Commissioner's charge of discrimination alleging the assessments affected thousands of applicants and agreed to ensure that future hiring screens were validated to prevent discrimination against future applicants.
In "EEOC Burnishes Systemic Successes and Intentions," Jackson Lewis, a management side labor and employment law firm, writes:
The EEOC believes that employers too often ignore its pronouncements. Therefore, the EEOC considers the best way to obtain compliance is to leverage its resources by making an example of certain employers through systemic enforcement and lawsuits. 
The EEOC defines systemic discrimination as pattern or practice, policy, or class cases where the discrimination has a broad impact on an industry, profession, company, or geographic location. 
According to the Jackson Lewis article, "The [EEOC review] provides clues to the agency’s intentions in aspirational statements and disclosures about the EEOC’s investments and nationwide teams." These include:
Tests. Like the EEOC’s challenges to background checks, the EEOC’s concern with tests and assessments is that these selection criteria have an unlawful disparate impact. The [EEOC review] lists only one recent success challenging an employer’s use of a test as a selection device. However, it makes several references to the EEOC’s interest in scrutinizing tests and assessments.
While the EEOC review lists only the public conciliation with Target Corporation noted above, a September 2014 cover story in the Wall Street Journal reported at least two ongoing systemic investigations relating to the use of pre-employment assessments, with claims under the Americans with Disabilities Act that the assessments unlawfully screen out persons with mental disabilities and constitute illegal pre-employment medical examinations.
Cases By Statute

The EEOC review states:
Moving forward, EEOC will focus on three key areas in order to expand the agency's impact and better serve the public: 1) executing national strategies to address persistent and emerging systemic issues; 2) advancing solutions that promote lasting opportunity in the workplace; and 3) strengthening the agency's technology and infrastructure.
Persistent and emerging systemic issues include those listed as national priorities in the EEOC's Strategic Enforcement Plan (SEP). First on the list of national priorities in the SEP  is:
Eliminating Barriers in Recruitment and Hiring. The EEOC will target class-based intentional recruitment and hiring discrimination and facially neutral recruitment and hiring practices that adversely impact particular groups. Racial, ethnic, and religious groups, older workers, women, and people with disabilities continue to confront discriminatory policies and practices at the recruitment and hiring stages. These include exclusionary policies and practices, the channeling/steering of individuals into specific jobs due to their status in a particular group, restrictive application processes, and the use of screening tools (e.g., pre-employment tests, background checks, date-of-birth inquiries). Because of the EEOC's access to data, documents and potential evidence of discrimination in recruitment and hiring, the EEOC is better situated to address these issues than individuals or private attorneys, who have difficulties obtaining such information.
(Emphasis added) 

Wednesday, May 25, 2016

Online-Only Employment Application Processes Systematically Discriminate Against Poor


The use of online employment application processes as the sole means to apply for jobs systematically discriminates against persons of lower socioeconomic status, many of whom belong to classes protected under equal employment opportunity (EEO) laws. The discrimination arises from the limited Internet access available to many of those persons.

EEO laws prohibit discrimination against protected persons in regard to recruiting, the work environment, or any other term, condition, or privilege of employment.  Not only do the laws prohibit intentional discrimination, they also prohibit neutral policies that disproportionately affect protected persons and that are not related to the job and the needs of the business (e.g., requiring that job applicants submit their applications online through a web-based interface).

The National Digital Inclusion Alliance released its list of the Top 25 Worst Connected U.S. Cities for Poor Households (households with incomes below $35,000) in September 2015. More than 50% of poor households in Birmingham, Buffalo, Chicago, Cleveland, Dallas, Detroit, Greensboro, Louisville, Memphis, Miami, New Orleans, St. Louis, Toledo, and Washington, D.C. have no Internet access. 


Top 25 Worst-Connected Cities Poor Households ACS 2014 from Angela Siefer


Detroit

A New York Times article from May 2015 notes that "Detroit has the worst rate of Internet access of any big American city, with four in 10 of its 689,000 residents lacking broadband." Many of the persons without broadband access - whether at their residences or on smartphones - rely on public libraries for Internet access.

In Hope Village, a 100-block area of Detroit, half of the 5,700 residents live in poverty. Many are not getting basic digital literacy skills or access to educational resources for entry-level jobs, much less the growing number of jobs that require more tech skills and vocational certificates.
Julie Rice, a Hope Village resident for the last seven years, has found having limited web access a major obstacle in her search for full-time employment after losing her retailing management job more than two years ago. With a part-time job at a furniture store paying $10.88 an hour, Ms. Rice cannot afford a service to connect to the web, which can cost more than $70 a month. 
So Ms. Rice has made Hope Village’s public library, Parkman, her career center. She regularly comes on the five days the library is open to search retailing openings, arrange interviews and take employment tests. The library typically extends her time online over the one-hour session limit. Even so, during a recent online exam for a store manager job at Ann Taylor, she ran out of time and was locked out of the test.
Every day it becomes harder to find opportunities in Detroit without using the web. Sean Person, a Hope Village resident, has gone to stores more than a dozen times and asked to fill out paper applications, only to be told to apply online. Most listings on Michigan’s biggest private and public jobs site require email, uploads of resumes, and online assessments. 

Zappos

Zappos has created "Zappos Insiders." According to the company:
Zappos Insiders are simply people who might want to work for Zappos someday… now, tomorrow or sometime down the road. It’s like a special membership for people who want to stay in touch with us, learn more about our fun, zany culture, know what’s happening at our company, get special insider perspectives and receive team-specific updates from the areas you’re most interested in. There is no better way to stay in-the-know and for us to get to know each other than by becoming an Insider.
What are the benefits?  
  • Be the first to know about job opportunities in your desired job family
  • Stay in-the-know about the latest news and happenings here at Zappos
  • Chat with the Zappos recruiting team during our bi-weekly Tweetchats
  • Gain exclusive access to online & in person events with current Zappos employees
People who do not have broadband access at home or on their phones are unable to obtain the benefits of being a Zappos Insider - they are the last to know about job opportunities, they are unable to stay in-the-know about the latest Zappos news, they cannot chat with Zappos recruiting teams during their bi-weekly Tweetchats, and they are denied exclusive access to online events with current Zappos employees.

Zappos' website has the following line:

Poke. Like. Share. Join the Conversation @InsideZappos

The only way to poke, like, share, or join the conversation is with a broadband connection. Poor persons and others with limited broadband connectivity, it seems, need not apply.

As Aristotle wrote, “There is nothing so unequal as the equal treatment of unequals.” It is for this reason that employment laws like Title VII prohibit facially neutral policies (e.g., enrolling in Zappos Insiders, which necessitates broadband Internet access and the use of social media platforms with varying degrees of accessibility) that disproportionately affect protected persons and that are not related to the job and the needs of the business.

Zappos Insiders may be a club that is "open" to everyone, but persons of lower socioeconomic status, disproportionately Blacks, Hispanics, persons with mental illness, and women, will have a harder time being admitted and will be limited in their ability to use all the club has to offer.

See also Zappos Insider: The Death of Job Postings and the Rise of the Borg and Zappos: The Future of Hiring and Hiring Discrimination.

Knack

According to Knack, it is a game-based, science-driven, data-powered talent-matching platform.

Knack "games" like Wasabi Waiter, Dash Dash, Bomba Blitz and Meta Maze purportedly analyze every millisecond of player behavior, measuring conscientiousness, emotion recognition, and other attributes that, according to the company, academic studies show correlate with job performance. The games then score each player’s likelihood of becoming an outstanding employee.

Knack's assessments are based on games developed by the company that may be "played" on computers and mobile devices. According to the company:
Knack uses its breakthrough, scalable technology to provide disadvantaged and marginalized people around the world with an empowering gateway to social and economic mobility. Helping people tap into their true potential will increase their well-being, break persistent cycles of poverty, and make them hopeful about the future.
And yet Knack's fundamental structure - online games played on computers and mobile devices by persons with broadband Internet connections - further disadvantages those on the other side of the digital divide: those who do not have ready access to broadband and those who must rely on public libraries for limited Internet access.


Wednesday, June 17, 2015

A Fool With A Tool Is Still A Fool

In the June 22, 2015 cover story for Time magazine, "Questions to Answer in the Age of Optimized Hiring," author Eliza Gray asks, “Are we truly comfortable with turning hiring–potentially one of the most life-changing experiences that a person can go through–over to the algorithms?” The answer should be no.

When you have algorithms weighing hundreds of factors over a huge data set, you can't really know why they come to a particular decision or whether it really makes sense. As Geoff Nunberg, who teaches at the School of Information at the University of California, Berkeley, said in an NPR interview, “big data is no more exact a notion than big hair.”

Decisions made or affected by correlation alone are inherently flawed: correlation does not equal causation, as Tyler Vigen demonstrates on his website Spurious Correlations. For example (a short illustrative computation follows this list):
  • There is a greater than 99% correlation (0.992558) between the divorce rate in Maine and the per capita consumption of butter in the U.S. over the years 2000-2009;
  • There is a greater than 78% correlation (0.78915) between the number of worldwide non-commercial space launches and the number of sociology doctorates awarded in the U.S. over the years 1997-2009; and,
  • There is a greater than 66% correlation (0.666004) between the number of films Nicolas Cage appeared in and the number of people who drowned by falling into a swimming pool over the years 1999-2009.
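As a rough, self-contained sketch (the yearly figures below are invented for illustration, not Vigen's actual data), a Pearson correlation can be very high for two series that merely happen to trend together:

```python
# Minimal sketch: a high Pearson correlation between two unrelated series.
# The yearly figures are invented for illustration; they are not Vigen's data.

def pearson_r(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Two series that simply drift downward together over the same ten years.
divorce_rate = [5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.2, 4.1]   # hypothetical
butter_lbs   = [8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 4.1]   # hypothetical

print(round(pearson_r(divorce_rate, butter_lbs), 3))  # high r, no causal link
```

A high coefficient tells us the two series move together; it says nothing about why.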

And what of the correlation between personality and job performance? In a 2007 article titled, “Reconsidering the Use of Personality Tests in Employment Contexts,” Dr. Neil Schmitt, the University Distinguished Professor at Michigan State University, wrote:

 [A 1965 research paper found that] the average validity of personality tests was 0.09. Twenty-five years later, Barrick and Mount (1991) published a paper in which the best validity they could get for the Big Five [personality model] was 0.13. They looked at the same research. Why are we now suddenly looking at personality as a valid predictor of job performance when the validities still haven’t changed and are still close to zero?
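Because a validity coefficient is a correlation, squaring it gives the share of variance in job performance the test explains. The quick arithmetic below, using the coefficients quoted above, shows why they can fairly be called close to zero:

```python
# A validity coefficient is a correlation (r); r**2 is the share of variance
# in job performance that the predictor explains.
for r in (0.09, 0.13):
    print(f"validity r = {r:.2f} -> about {r**2:.1%} of performance variance explained")
# validity r = 0.09 -> about 0.8% of performance variance explained
# validity r = 0.13 -> about 1.7% of performance variance explained
```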

If personality assessments are designed to find those employees with the best fit for the company culture, shouldn't the rising use of those assessments by employers over the past 10-15 years have resulted in a concomitant rise in employee engagement?

Gallup has taken an employee engagement poll annually since 2000. Gallup defines engaged employees as those who are involved in, enthusiastic about and committed to their work and workplace. According to the 2014 Gallup poll, 51% of employees in the U.S. were "not engaged" in their jobs and 17.5% were "actively disengaged." These percentages have changed little over the fifteen years Gallup has been polling.

Gallup’s research shows that employee engagement is strongly connected to business outcomes essential to an organization’s financial success, including productivity, profitability, and customer satisfaction. Yet, the purported benefits of personality assessments have failed to move the needle on employee engagement, meaning companies have not received the promised productivity and profitability "bumps" from using personality assessments.

Laszlo Bock
There are significant risks associated with the use of personality assessments in hiring algorithms, both to the employer and to the job applicant. As Google’s Laszlo Bock said in the Time article, “if [an employer] makes a bad assessment based on an algorithm or a test, that has a major impact on a person’s life–a job they don’t get or a promotion they don’t get.”

For the employer, the risks are at least two-fold. First, people who are “different” will be screened out, denying the employer the benefits that come from having a widely diverse group of employees. As Bock states in the article:
“I imagine someone who has Asperger’s or autism, they will test differently on these things. We want people like that at the company because we want people of all kinds, but they’ll get screened out by this kind of thing.”
The second risk for employers is the liability they face under laws like the Americans with Disabilities Act for using personality tests that screen out persons with disabilities, whether it be Asperger’s, autism, bipolar disorder, or other mental health challenges. The Equal Employment Opportunity Commission (EEOC) currently has two systemic investigations ongoing against employers that used personality tests in their pre-employment screening processes.
The 2014 White House report, “Big Data: Seizing Opportunities, Preserving Values," found that, "while big data can be used for great social good, it can also be used in ways that perpetrate social harms or render outcomes that have inequitable impacts, even when discrimination is not intended." An accompanying fact sheet warns:

As more decisions about our commercial and personal lives are determined by algorithms and automated processes, we must pay careful attention that big data does not systematically disadvantage certain groups, whether inadvertently or intentionally. We must prevent new modes of discrimination that some uses of big data may enable, particularly with regard to longstanding civil rights protections in housing, employment, and credit.

Just as neighborhoods can serve as a proxy for racial or ethnic identity, there are new worries that big data technologies (personality assessments and algorithmic decisionmaking) could be used to “digitally redline” unwanted groups, either as customers, employees, tenants, or recipients of credit.  That is why we should not be comfortable with turning hiring over to the algorithms.

Sunday, June 7, 2015

The LAST-2 Not the Last One

On Friday, June 5, 2015, a federal judge found that an exam for New York teaching candidates was racially discriminatory because it did not measure skills necessary to do the job. The exam, the second incarnation of the Liberal Arts and Sciences Test, called the LAST-2, was administered from 2004 through 2012 and was designed to test an applicant’s knowledge of liberal arts and science.

Establishing a Prima Facie Case

Under Title VII of the Civil Rights Act of 1964, a plaintiff can make out a prima facie case of discrimination with respect to an employment exam by showing that the exam has a disparate impact on minority candidates. To do so, a party must (1) identify a policy or practice (in this case, the employment exam), (2) demonstrate that a disparity exists, and (3) establish a causal relationship between the two. A party can meet the second and third requirements by relying on the “80% rule.” As stated by the EEOC:
A selection rate for any race, sex, or ethnic group which is less than four-fifths (4/5) (or eighty percent) of the rate for the group with the highest rate will generally be regarded by Federal enforcement agencies as evidence of adverse impact, while a greater than four-fifths rate will generally not be regarded by Federal enforcement agencies as evidence of adverse impact.
In the LAST-2 case, Judge Kimba M. Wood found that the pass rate for African-American and Latino candidates was between 54 percent and 75 percent of the pass rate for white candidates.
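As a minimal sketch of the four-fifths calculation (the court stated only the resulting ratio range, so the pass counts below are hypothetical):

```python
# Four-fifths (80%) rule sketch. The court's finding was stated only as a
# ratio range (54% to 75% of the white pass rate), so these counts are hypothetical.

def impact_ratio(passed_focal, total_focal, passed_ref, total_ref):
    """Pass rate of the focal group divided by the pass rate of the reference group."""
    return (passed_focal / total_focal) / (passed_ref / total_ref)

ratio = impact_ratio(passed_focal=540, total_focal=1000,   # 54% pass rate
                     passed_ref=900, total_ref=1000)       # 90% pass rate
print(round(ratio, 2))                                     # 0.6
print("adverse impact indicated" if ratio < 0.8 else "no adverse impact indicated")
```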

At the signing of the Civil Rights Act of 1964

Rebutting the Prima Facie Case

The defendant can rebut that prima facie showing by demonstrating that the exam is job related. To do so, the defendant must prove that the exam has been validated properly. Validation requires showing, by professionally acceptable methods, that the exam is predictive of or significantly correlated with important elements of work behavior which comprise or are relevant to the job for which candidates are being evaluated.

In determining whether an employment exam has been properly validated and is thus job related for the purposes of Title VII, the following factors must be considered:

  1. the test-makers must have conducted a suitable job analysis;
  2. the test-makers must have used reasonable competence in constructing the test;
  3. the content of the test must be related to the content of the job;
  4. the content of the test must be representative of the content of the job; and
  5. there must be a scoring system that usefully selects those applicants who can better perform the job.

The LAST-2 decision found that the defendant New York City Board of Education (BOE) failed to rebut the prima facie showing of discrimination because it had not demonstrated that the LAST-2 was properly validated.  The court found that National Evaluation Systems (NES), the test developer owned by Pearson, did not comport with the five factors listed above, focusing primarily on the first factor: the sufficiency of NES’s job analysis.


Wholly Deficient Job Analysis

A job analysis is an assessment of the important work behavior(s) required for successful performance of the job in question and the relative importance of these behaviors. The purpose of a job analysis is to ensure that an exam adequately tests for the knowledge, skills, and abilities (KSAs) that are actually needed to perform the daily tasks of the job. The test developer must be able to explain the relationship between the subject matter being assessed by the exam and the job tasks identified.

To perform a suitable job analysis, a test developer must: (1) identify the tasks involved in performing the job; (2) include a thorough survey of the relative importance of the various skills involved in the job in question; and (3) define the degree of competency required in regard to each skill.

The LAST-2 court found that the core flaw in NES’s job analysis was that it failed to identify any job tasks whatsoever. Without identifying the tasks involved in performing the job (required by the first factor discussed above), it was not possible for NES to determine the relative importance of each job task (second factor), or to define the degree of competency required for each skill needed to accomplish those job tasks (third factor). Accordingly, the court found NES’s job analysis to be wholly deficient.

An Inherently Flawed Approach

Instead of beginning with ascertaining the job tasks of New York teachers, the LAST-2 examination began with the premise that all New York teachers should be required to demonstrate an understanding of the liberal arts.

NES began developing the LAST-2 by consulting documents describing liberal arts and general education undergraduate and graduate course requirements, syllabi, and course outlines. NES then defined the KSAs it believed a liberal arts exam should assess, based on the way the liberal arts were characterized in those documents. Thus, NES did not investigate the job tasks that a teacher must perform to do her job satisfactorily, but instead used liberal arts curricular documents to construct the entirety of the LAST-2.

In other words, NES started with the unproved assumption that specific facets of liberal arts and science knowledge were critically important to the role of teaching, and then attempted to determine how to test for that specific knowledge. This is an inherently flawed approach because at no point did NES ascertain, through an open ended investigation into the job tasks a successful teacher performs, whether its conception of the liberal arts and sciences was important to even some New York public school teachers, let alone to all of them.

Survey Says ... Unpersuasive

NES argued that it had surveyed several hundred teachers about the importance of the KSAs that NES identified, and those teachers affirmed their importance, but the court found the argument unpersuasive. 

The problem with NES’s approach is that it assumed, without investigation or proof, that specific KSAs are important to a teacher’s effectiveness at her job—namely, an understanding of some pre-determined subset of the liberal arts and sciences—and then asked teachers to rank only those KSAs in importance. The fact that survey respondents stated that certain surveyed KSAs were important to teaching says nothing about the relative importance of the surveyed KSAs compared to any KSA not included in NES’s survey.

The court found that NES cannot determine the KSAs most important to teaching by surveying only those KSAs NES already believed were important. NES should have determined which KSAs to survey based on an investigation of the job tasks performed by successful teachers. Only KSAs which NES has directly linked to those identified job tasks should be included in a survey attempting to determine “relative importance.”

As an example, the court wrote:
Assume that the KSA of reading comprehension has an importance value of 9, the KSA of logical reasoning has an importance value of 4, and the KSA of leadership has an importance value of 20. Assume that NES’s survey would have queried the value of both reading comprehension and logical reasoning, but not of leadership. Ranked relative to each other, reading comprehension would be very important, while logical reasoning might be somewhat important. But in this example, neither is nearly as important as leadership. In this way, NES’s survey would have greatly exaggerated the importance of both reading comprehension and logical reasoning.
Although the survey might be an appropriate way of confirming information gathered through a proper job task investigation, or as a way of determining the relative importance of already-ascertained job tasks, it is not an appropriate way of initially identifying KSAs.

What To Do Now?

Judge Wood stated that NES should begin by first identifying the necessary job tasks for a New York public school teacher. Necessary job tasks could be identified through some combination of (1) teacher interviews, (2) observations of teachers across the state performing their day-to-day duties, and (3) the survey responses of educators who have been given open-ended surveys requiring them to describe the job tasks they perform and to rank the importance of those tasks.

Job tasks must be ascertained from the source—in this case, from public school teachers. Using the data culled from such an investigation, NES could then analyze these job tasks, and from that analysis determine what KSAs a teacher must possess to adequately perform the tasks identified. NES should document precisely how those KSAs are necessary to the performance of the identified job tasks. It is those KSAs that should provide the foundation for the development of the test framework.

The importance of identifying these job tasks is amplified here because every teacher in New York must be licensed, whether she teaches kindergarten or advanced chemistry. NES therefore needs to determine exactly what job tasks are performed, and accordingly, what KSAs are required, to teach kindergarten through twelfth grade proficiently. This is likely a daunting task given how different the daily experience of a kindergarten teacher is from that of an advanced chemistry teacher.

Last, NES needs to make sure that the test assesses abilities not already tested for by related exams. In the LAST-2 case, applicants were also required to pass the Assessment of Teaching Skills – Written and a Content Specialty Test applicable to the teacher’s subject area before they could become licensed.


Thursday, October 9, 2014

Big Data's Disparate Impact - Excerpts and Annotations

This posting is based on, and excerpts are taken from, "Big Data's Disparate Impact" by Solon Barocas and Andrew D. Selbst. Their article addresses the potential for disparate impact in the data mining process and points to different places within the process where a disproportionately adverse impact on protected classes may result from innocent choices on the part of the data miner. Excerpts from the article are set out below in normal typeface; footnotes from the article are not included. Annotations that further illuminate issues raised in the article are indented and italicized. Readers are strongly encouraged to read the article by Messrs. Barocas and Selbst.

* * * * * * *

"Big Data's Disparate Impact" introduces the computer science literature on data mining and proceeds through the various steps of solving a problem this way:
  • defining the target variable,
  • labeling and collecting the training data,
  • feature selection, and 
  • making decisions on the basis of the resulting model. 
Each of these steps creates possibilities for a final result that has a disproportionately adverse impact on protected classes, whether by specifying the problem to be solved in ways that affect classes differently, failing to recognize or address statistical biases, reproducing past prejudice, or considering an insufficiently rich set of factors. Even in situations where data miners are extremely careful, they can still effect discriminatory results with models that, quite unintentionally, pick out proxy variables for protected classes.
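As a schematic illustration of those four steps (an annotation, not from the article; the variable names, toy records, and tenure cutoff below are assumptions for illustration):

```python
# Schematic sketch of the four data mining steps named above.
# All names, records, and thresholds are invented for illustration.

# 1. Define the target variable: "good employee" formalized as tenure >= 52 weeks.
def label(record):
    return 1 if record["tenure_weeks"] >= 52 else 0

# 2. Label and collect the training data: prior employees, as the employer recorded them.
training_data = [
    {"tenure_weeks": 80, "commute_min": 15, "moves_past_5yr": 1},
    {"tenure_weeks": 30, "commute_min": 50, "moves_past_5yr": 4},
]
labels = [label(r) for r in training_data]

# 3. Feature selection: the attributes the model is allowed to see.
features = ["commute_min", "moves_past_5yr"]
X = [[r[f] for f in features] for r in training_data]

# 4. Make decisions on the basis of the resulting model
#    (any classifier fit on X and labels, then applied to new applicants).
print(X, labels)
```

Each step is a place where disparate impact can enter: how "good" is formalized, whose histories are labeled and how, which proxies the chosen features encode, and how scores are converted into decisions.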

To be sure, data mining is a very useful construct. It even has the potential to be a boon to those who would not discriminate, by formalizing decision-making processes and thus limiting the influence of individual bias.
Data mining in such an instance addresses the issue of the "rogue recruiter," a recruiter who is biased, whether intentionally or not, against certain protected classes. Employers and testing companies argue that replacing the rogue recruiter with an algorithmic-based decision model will eliminate the biased hiring practices of that recruiter.
But where data mining does perpetuate discrimination, society does not have a ready answer for what to do about it.
The simple fact that hiring decisions are made "by computers" does not mean the decisions are not subject to bias. Human judgment is subject to an automation bias, which fosters a tendency to disregard or not search for contradictory information in light of a computer-generated solution that is accepted as correct. Such bias has been found to be most pronounced when computer technology fails to flag a problem.
 The use of technology systems to hardwire workforce analytics raises a number of fundamental issues regarding the translation of legal mandates, psychological models and business practices into computer code and the resulting distortions. These translation distortions arise from the organizational and social context in which translation occurs; choices embody biases that exist independently, and usually prior to the creation of the system. And they arise as well from the nature of the technology itself and the attempt to make human constructs amenable to computers. (Please see What Gets Lost? Risks of Translating Psychological Models and Legal Requirements to Computer Code.)
Defining the Target Variable and Class Labels

In contrast to those traditional forms of data analysis that simply return records or summary statistics in response to a specific query, data mining attempts to locate statistical relationships in a dataset. In particular, it automates the process of discovering useful patterns, revealing regularities upon which subsequent decision-making can rely. The accumulated set of discovered relationships is commonly called a “model,” and these models can be employed to automate the process of classifying entities or activities of interest, estimating the value of unobserved variables, or predicting future outcomes.

[B]y exposing so-called “machine learning” algorithms to examples of the cases of interest, the algorithm “learns” which related attributes or activities can serve as potential proxies for those qualities or outcomes of interest. In the machine learning and data mining literature, these states or outcomes of interest are known as “target variables.”

The proper specification of the target variable is frequently not obvious, and it is the data miner’s task to define it. In doing so, data miners must translate some amorphous problem into a question that can be  expressed in more formal terms that computers can parse. In particular, data miners must determine how to solve the problem at hand by translating it into a question about the value of some target variable. 

This initial step requires a data miner to “understand[] the project objectives and requirements from a business perspective [and] then convert[] this knowledge into a data mining problem definition.” Through this necessarily subjective process of translation, though, data miners may unintentionally parse the problem and define the target variable in such a way that protected classes happen to be subject to systematically less favorable determinations.
Kenexa, an employment assessment company purchased by IBM in December 2012, believes that a lengthy commute raises the risk of attrition in call-center and fast-food jobs. It asks applicants for call-center and fast-food jobs to describe their commute by picking options ranging from "less than 10 minutes" to "more than 45 minutes." 
The longer the commute, the lower their recommendation score for these jobs, says Jeff Weekley, who oversees the assessments. Applicants also can be asked how long they have been at their current address and how many times they have moved. People who move more frequently "have a higher likelihood of leaving," Mr. Weekley said.
Are there any groups of people who might live farther from the work site and may move more frequently than others? Yes: lower-income persons, who are disproportionately women, Black, Hispanic, and mentally ill (all protected classes). They can't afford to live where the jobs are, and they move more frequently because of an inability to afford housing or the loss of employment. Not only are these protected classes poorly paid, many are electronically redlined from hiring consideration. 
As a consequence of Kenexa's "insights," its clients will pass over qualified applicants solely because they live (or don't live) in certain areas. Not only does the employer do a disservice to itself and the applicant, it increases the risk of employment litigation, with its consequent costs. (Please see From What Distance is Discrimination Acceptable?)
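A small, hypothetical sketch of how such a facially neutral commute penalty can be audited for adverse impact (the penalty table, cutoff, and applicant pools below are invented; they are not Kenexa's actual model):

```python
# Sketch: auditing a facially neutral commute penalty for adverse impact.
# The penalty table, cutoff, and applicant pools are hypothetical.

PENALTY = {"<10 min": 0, "10-25 min": 1, "25-45 min": 2, ">45 min": 3}
CUTOFF = 2  # applicants with a penalty above this are screened out

def selection_rate(pool):
    passed = sum(1 for commute in pool if PENALTY[commute] <= CUTOFF)
    return passed / len(pool)

# Group B lives farther from the job site on average (e.g., because of housing costs).
group_a = ["<10 min"] * 40 + ["10-25 min"] * 40 + [">45 min"] * 20
group_b = ["10-25 min"] * 20 + ["25-45 min"] * 30 + [">45 min"] * 50

rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
print(rate_a, rate_b, round(rate_b / rate_a, 2))  # 0.8 0.5 0.62 -- below the 0.8 threshold
```

The rule never mentions any protected characteristic, yet the groups' selection rates diverge because of where their members can afford to live.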
[W]here employers turn to data mining to develop ways of improving and automating their search for good employees, they face a number of crucial choices. Like [the term] creditworthiness, the definition of a good employee is not a given. “Good” must be defined in ways that correspond to measurable outcomes: relatively higher sales, shorter production time, or longer tenure, for example.

When employers use data mining to find good employees, they are, in fact, looking for employees whose observable characteristics suggest, based on the evidence that an employer has assembled, that they would meet or exceed some monthly sales threshold, that they would perform some task in less than a certain amount of time, or that they would remain in their positions for more than a set number of weeks or months. Rather than drawing categorical distinctions along these lines, data mining could also estimate or predict the specific numerical value of sales, production time, or tenure period, enabling employers to rank rather than simply sort employees.

These may seem like eminently reasonable things for employers to want to predict, but they are, by necessity, only part of an array of possible ways of defining what “good” means. An employer may attempt to define the target variable in a more holistic way—by, for example, relying on the grades that prior employees have received in annual reviews, which are supposed to reflect an overall assessment of performance. These target variable definitions simply inherit the formalizations involved in preexisting assessment mechanisms, which in the case of human-graded performance reviews, may be far less consistent.
As previously noted, Kenexa defines a "good" employee as a function, in part, of job tenure. It then uses a number of proxies - distance from the job site, length of time at current address, and number of moves - to predict "job tenure."
Painting with the broad brush of distance from job site, commute time and moving frequency results in well-qualified applicants being excluded, applicants who might have ended up being among the longest tenured of employees. The Kenexa findings are generalized correlations (i.e., persons living closer to the job site tend to have longer tenure than persons living farther from the job site). The insights say nothing about any particular applicant.
The general lesson to draw from this discussion is that the definition of the target variable and its associated class labels will determine what data mining happens to find. While critics of data mining have tended to focus on inaccurate classifications (false positives and false negatives), as much—if not more—danger resides in the definition of the class label itself and the subsequent labeling of examples from which rules are inferred. While different choices for the target variable and class labels can seem more or less reasonable, valid concerns with discrimination enter at this stage because the different choices may have a greater or lesser adverse impact on protected classes. 

Training Data

As described above, data mining learns by example. Accordingly, what a model learns depends on the examples to which it has been exposed. The data that function as examples are known as training data: quite literally the data that train the model to behave in a certain way. The character of the training data can have meaningful consequences for the lessons that data mining happens to learn. 

Discriminatory training data leads to discriminatory models. This can mean two rather different things, though:
  1. If data mining treats cases in which prejudice has played some role as valid examples from which to learn a decision-making rule, that rule may simply reproduce the prejudice involved in these earlier cases; and 
  2. If data mining draws inferences from a biased sample of the populations to which the inferences are expected to generalize, any decision that rests on these inferences may systematically disadvantage those who are under- or over-represented in the dataset.
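As a toy illustration of the first problem, an annotation not drawn from the article (the records and the naive rule below are invented): a model that treats past, prejudiced hiring decisions as correct examples simply reproduces the disparity.

```python
# Toy illustration: past decisions treated as "ground truth" reproduce past prejudice.
# Records and rule are invented.

past = [  # equally qualified candidates; group "B" was hired less often
    {"group": "A", "score": 7, "hired": 1}, {"group": "A", "score": 7, "hired": 1},
    {"group": "B", "score": 7, "hired": 0}, {"group": "B", "score": 7, "hired": 1},
]

def learned_hire_rate(group):
    # Naive "model": hire at the rate observed for similar past candidates.
    rows = [r for r in past if r["group"] == group]
    return sum(r["hired"] for r in rows) / len(rows)

print(learned_hire_rate("A"), learned_hire_rate("B"))  # 1.0 0.5 -- the old bias, relearned
```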
Labeling Examples

The unavoidably subjective labeling of examples can skew the resulting findings in such a way that any decisions taken on the basis of those findings will characterize all future cases along the same lines, even if such characterizations would seem plainly erroneous to analysts who looked more closely at the individual cases. For all their potential problems, though, the labels applied to the training data must serve as ground truth. 

The kinds of subtle mischaracterizations that happened during training will be impossible to detect when evaluating the performance of a model, because the training is taken as a given at that point. Thus, decisions taken on the basis of discoveries that rest on haphazardly labeled data or data labeled in a systematically, though unintentionally, biased manner will seem valid. 

So long as prior decisions affected by some form of prejudice serve as examples of correctly rendered determinations, data mining will necessarily infer rules that exhibit the same prejudice. 
An employer currently subject to an EEOC investigation states it identified “a pool of existing employees” that Kronos, a third party assessment provider,  utilized to create a customized assessment for use by the employer. The employer's reliance on that employee sample is flawed because people with mental disabilities are severely underrepresented in the existing workforce:
  • According to a 2010 Kessler Foundation/NOD Survey of Employment of Americans with Disabilities, conducted by Harris Interactive, the employment gap between people with and without disabilities has remained significant over the past 25+ years.
  • According to a 2013 report of the Senate HELP Committee, Unfinished Business:  Making Employment of People with Disabilities A National Priority, only 32% of working age people with disabilities participate in the labor force, as compared with 77% of working age people without disabilities.  For people with mental illnesses, rates are even lower.  
  • The employment rate for people with serious mental illness is less than half the 33% rate for other disability groups (Anthony, Cohen, Farkas, & Gagne, 2002). 
  • Surveys have found that only 10% - 15% of people with serious mental illness receiving community treatment are competitively employed (Henry, 1990; Lindamer et al., 2003; Pandiani & Leno, 2011; Rosenheck et al., 2006; Salkever et al., 2007).
In Albemarle Paper Company v. Moody, 422 US 405 (1975), in which an employer implemented a test on the theory that a certain verbal intelligence was called for by the increasing sophistication of the plant's operations, the Supreme Court cited the Standards of the American Psychological Association and pointed out that a test should be validated on people as similar as possible to those to whom it will be administered. The Court further stated that differential studies should be conducted on minority groups/protected classes wherever feasible.  
The use of the employer's own workforce to develop and benchmark its assessment is flawed because people with mental disabilities are severely underrepresented in the employer's workforce and the overall U.S. workforce.
Not only can data mining inherit prior prejudice through the mislabeling of examples, it can also reflect current prejudice through the ongoing behavior of users taken as inputs to data mining. 
This is what Latanya Sweeney discovered in a study that found that Google queries for black-sounding names were more likely to return contextual (i.e., key-word triggered) advertisements for arrest records than those for white-sounding names. 

Sweeney confirmed that the companies paying for these ads had not set out to focus on black-sounding names; rather, the fact that black-sounding names were more likely to trigger such advertisements seemed to be an artifact of the algorithmic process that Google employs to determine which advertisements to deliver alongside the results for certain queries. Although the details of the process by which Google computes the so-called “quality score” according to which it ranks advertisers’ bids are not fully known, one important factor is the predicted likelihood, based on historical trends, that users will click on an advertisement. 

As Sweeney points out, the process “learns over time which ad text gets the most clicks from viewers of the ad” and promotes that advertisement in its rankings accordingly. Sweeney posits that this aspect of the process could result in the differential delivery of advertisements that reflect the kinds of prejudice held by those exposed to the advertisements. In attempting to cater to the preferences of users, Google will unintentionally reproduce the existing prejudices that inform users’ choices. 
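A toy sketch of that feedback loop (the ad labels and click counts are invented): whichever copy draws more clicks is weighted more heavily in the next round, so users' existing preferences, prejudiced or not, compound over time.

```python
# Toy feedback loop: click counts drive ranking weights, which drive exposure,
# which drives more clicks. All figures are invented.

clicks = {"arrest-record copy": 120, "neutral copy": 80}

def ranking_weights(click_counts):
    total = sum(click_counts.values())
    return {ad: n / total for ad, n in click_counts.items()}

print(ranking_weights(clicks))  # the more-clicked copy gets shown more often
```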

A similar situation could conceivably arise on websites that recommend potential employees to employers, as LinkedIn does through its Talent Match feature. If LinkedIn determines which candidates to recommend on the basis of the demonstrated interest of employers in certain types of candidates, Talent Match will offer recommendations that reflect whatever biases employers happen to exhibit. In particular, if LinkedIn’s algorithm observes that employers disfavor certain candidates that are members of a protected class, Talent Match may decrease the rate at which it recommends these types of candidates to employers. The recommendation engine would learn to cater to the prejudicial preferences of employers. 

Data Collection

Organizations that do not or cannot observe different populations in a consistent way and with equal coverage will amass evidence that fails to reflect the actual incidence and relative proportion of some attribute or activity in the under- or over-observed group. Consequently, decisions that depend on conclusions drawn from this data may discriminate against members of these groups. 

The data might suffer from a variety of problems: the individual records that a company maintains about a person might have serious mistakes, the records of the entire protected class of which this person is a member might also have similar mistakes at a higher rate than other groups, and the entire set of records may fail to reflect members of protected classes in accurate proportion to others. In other words, the quality and representativeness of records might vary in ways that correlate with class membership (e.g., institutions might maintain systematically less accurate, precise, timely, and complete records). Even a dataset with individual records of consistently high quality can suffer from statistical biases that fail to represent different groups in accurate proportions. Much attention has focused on the harms that might befall individuals whose records in various commercial databases are error-ridden, but far less consideration has been paid to the systematic disadvantage that members of protected classes may suffer from being miscounted and the resulting biases in their representation in the evidence base. 

Recent scholarship has begun to stress this point. Jonas Lerman, for example, worries about “the nonrandom, systemic omission of people who live on big data’s margins, whether due to poverty, geography, or lifestyle, and whose lives are less ‘datafied’ than the general population’s.” Kate Crawford has likewise warned, “because not all data is created or even collected equally, there are ‘signal problems’ in big-data sets—dark zones or shadows where some citizens and communities are ... underrepresented.” Errors of this sort may befall historically disadvantaged groups at higher rates because they are less involved in the formal economy and its data-generating activities.

Crawford points to Street Bump, an application for Boston residents that takes advantage of accelerometers built into smart phones to detect when drivers ride over potholes (sudden movement that suggests broken road automatically prompts the phone to report the location to the city). 
While Crawford praises the cleverness and cost-effectiveness of this passive approach to reporting road problems, she rightly warns that whatever information the city receives from this application will be biased by the uneven distribution of smartphones across populations in different parts of the city. In particular, systematic differences in smartphone ownership will very likely result in the underreporting of road problems in the poorer communities where protected groups disproportionately congregate. If the city were to rely on this data to determine where it should direct its resources, it would only further underserve these communities. Indeed, the city would discriminate against those who lack the capacity to report problems as effectively as wealthier residents with smartphones.

A similar dynamic could easily apply in an employment context if members of protected classes are unable to report their interest in and qualification for jobs listed online as easily or effectively as others due to systematic differences in Internet access. 
Zappos has launched a new careers site and removed all job postings. Instead of applying for jobs, persons interested in working at Zappos will need to enroll in a social network run by the company, called Zappos Insiders. The social network will allow them to network with current employees by digital Q&As, contests and other means in hopes that Zappos will tap them when jobs come open.
"Zappos Insiders will have unique access to content, Google Hangouts, and discussions with recruiters and hiring teams. Since the call-to-action is to become an Insider versus applying for a specific opening, we will capture more people with a variety of skill sets that we can pipeline for current or future openings," said Michael Bailen, Zappos’ head of talent acquisition.
In response to a question, “How can I stand out from the pack and stay front-and-center in the Zappos Recruiters’ minds?” on the Zappos' Insider site, the company lists six ways to stand out, including: using Twitter, Facebook, Instagram, Pinterest and Google Hangouts; participating in TweetChats; following Zappos’ employees on various social media platforms; and, reaching out to Zappos’  “team ambassadors.” 
For the most part, all of the foregoing activities require broadband internet access and devices (tablets, smartphones, etc.) that run on those access networks.  A number of protected classes will be challenged by both the broadband access and social media participation requirements:
  • As noted in a Pew Research Internet Project report, African Americans have long been less likely than whites to have high speed broadband access at home, and that continues to be the case. Today, African Americans trail whites by seven percentage points when it comes to overall internet use (87% of whites and 80% of blacks are internet users), and by twelve percentage points when it comes to home broadband adoption (74% of whites and 62% of blacks have some sort of broadband connection at home).
  • The gap between whites and blacks when it comes to traditional measures of internet and broadband adoption is pronounced. Specifically, older African Americans, as well as those who have not attended college, are significantly less likely to go online or to have broadband service at home compared to whites with a similar demographic profile.
  • According to the PewResearch Internet Project, even among those persons who have broadband access, the percentage of those using social media sites varies significantly by age.
Social media participation is not solely a function of age. "Social media is transforming how we engage with customers, employees, jobseekers and other stakeholders," said Kathy Martinez, Assistant Secretary of Labor for Disability Employment Policy. "But when social media is inaccessible to people with disabilities, it excludes a sizeable segment of our population." 
Persons with disabilities (e.g., sight or hearing loss, paralysis), whether physical, mental, or developmental, face challenges accessing social media. Each of the social media platforms promoted by Zappos - Twitter, Facebook, Instagram, Pinterest, and Google Hangouts - has differing levels of support for users with disabilities (e.g., closed captions or live captions on content that uses sound or voice). (Please see Zappos: The Future of Hiring and Hiring Discrimination?)
To ensure that data mining reveals patterns that obtain for more than the particular sample under analysis, the sample must share the same probability distribution as the data that would be gathered from all cases across both time and population. In other words, the sample must be proportionally representative of the entire population, even though the sample, by definition, does not include every case.

If a sample includes a disproportionate representation of a particular class (more or less than its actual incidence in the overall population), the results of an analysis of that sample may skew in favor of or against the over- or under-represented class. While the representativeness of the data is often simply assumed, this assumption is rarely justified, and is “perhaps more often incorrect than correct.”
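A minimal sketch of the representativeness check this passage implies (the population and sample shares are hypothetical):

```python
# Compare each group's share of the collected sample with its true population share.
# All shares are hypothetical.

population = {"A": 0.70, "B": 0.30}   # true shares
sample     = {"A": 0.85, "B": 0.15}   # shares in the data actually collected

for group in population:
    print(f"group {group}: represented at {sample[group] / population[group]:.2f}x its true share")
# group A: represented at 1.21x its true share
# group B: represented at 0.50x its true share
```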

Feature Selection

Organizations—and the data miners that work for them—also make choices about what attributes they observe and what they subsequently fold into their analyses. Data miners refer to the process of settling on the specific string of input variables as “feature selection.” Members of protected classes may find that they are subject to systematically less accurate classifications or predictions because the details necessary to achieve equally accurate determinations reside at a level of granularity and coverage that the features fail to achieve. 

This problem stems from the fact that data are by necessity reductive representations of an infinitely more specific real-world object or phenomenon. At issue, really, is the coarseness and comprehensiveness of the criteria that permit statistical discrimination and the uneven rates at which different groups happen to be subject to erroneous determinations. Crucially, these erroneous and potentially adverse outcomes are artifacts of statistical reasoning rather than prejudice on the part of decision-makers or bias in the composition of the dataset. As Frederick Schauer explains, decision-makers that rely on statistically sound but nonuniversal generalizations “are being simultaneously rational and unfair” because certain individuals are “actuarially saddled” by statistically sound inferences that are nevertheless inaccurate.

To take an obvious example, hiring decisions that consider credentials tend to assign enormous weight to the reputation of the college or university from which an applicant has graduated, despite the fact that such credentials may communicate very little about the applicant’s job-related skills and competencies. If equally competent members of protected classes happen to graduate from these colleges or universities at disproportionately low rates, decisions that turn on the credentials conferred by these schools, rather than some set of more specific qualities that more accurately sort individuals, will incorrectly and systematically discount these individuals.
Kenexa, an assessment company owned by IBM and used by hundreds of employers, believes that a lengthy commute raises the risk of attrition in call-center and fast-food jobs. It asks applicants for those jobs to describe their commute by picking options ranging from "less than 10 minutes" to "more than 45 minutes."  According to Kenexa’s Jeff Weekley, in a September 20, 2012 article in The Wall Street Journal, “The longer the commute, the lower their recommendation score for these jobs.” Applicants are also asked how long they have been at their current address and how many times they have moved. People who move more frequently "have a higher likelihood of leaving," Mr. Weekley said. 
A 2011 study by the Center for Public Housing found that poor and near-poor families tend to move more frequently than the general population. A wide range of often complex forces appears to drive this mobility: 
  • the formation and dissolution of households; 
  • an inability to afford one’s housing costs; 
  • the loss of employment; 
  • lack of quality housing; or
  • the search for a safer neighborhood.
 According to the U.S. Census, lower-income persons are disproportionately female, black, Hispanic, and mentally ill.
Painting with the broad brush of distance from work, commute time, and moving frequency results in well-qualified applicants being excluded from employment consideration. Importantly, the workforce insights of companies like Kenexa are based on data correlations; they say nothing about any particular person.
The application of these insights means that many low-income persons are electronically redlined. Employers do not even interview, let alone hire, qualified applicants because they live in certain areas or because they have moved. The reasons for moving do not matter, even if the move was to find a better school for their children, to escape domestic violence, or to cope with job loss from a plant shutdown.
When Clayton County, Georgia killed its bus system in 2010, it had nearly 9,000 daily riders, many of whom used the service to commute to their jobs. The transit shutdown increased commuting times (as riders found alternate ways to get to work) and led to more housing mobility (as riders relocated closer to their jobs to shorten the commute). Through no fault of their own, the former bus riders' longer commutes and new addresses made them less attractive job candidates to the many employers who use companies like Kenexa.
Making Decisions on the Basis of the Resulting Model

Cases of decision-making that do not artificially introduce discriminatory effects into the data mining process may nevertheless result in systematically less favorable determinations for members of protected classes. Situations of this sort are possible when the criteria that are genuinely relevant in making rational and well-informed decisions also happen to serve as reliable proxies for class membership. In other words, the very same criteria that correctly sort individuals according to their predicted likelihood of excelling at a job—as formalized in some fashion—may also sort individuals according to class membership.

For example, employers may find, in conferring greater attention and opportunities to employees that they predict will prove most competent at some task, that they subject members of protected groups to consistently disadvantageous treatment because the criteria that determine the attractiveness of employees happen to be held at systematically lower rates by members of these groups. Decision-makers do not necessarily intend this disparate impact because they hold prejudicial beliefs; rather, their reasonable priorities as profit-seekers unintentionally recapitulate the inequality that happens to exist in society. Furthermore, this may occur even if proscribed criteria have been removed from the dataset, the data are free from latent prejudice or bias, the data are especially granular and diverse, and the only goal is to maximize classificatory or predictive accuracy.

The problem stems from what researchers call “redundant encodings”: cases in which membership in a protected class happens to be encoded in other data. This occurs when a particular piece of data or certain values for that piece of data are highly correlated with membership in specific protected classes. The fact that these data may hold significant statistical relevance to the decision at hand explains why data mining can result in seemingly discriminatory models even when its only objective is to ensure the greatest possible accuracy for its determinations. If there is a disparate distribution of an attribute, a more precise form of data mining will be more likely to capture it as such. Better data and more features will simply expose the exact extent of inequality. 
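A short sketch, using synthetic data with invented correlations, shows why dropping the protected attribute does not help when a proxy redundantly encodes it:

```python
# A minimal sketch of redundant encoding: the model never sees the protected
# attribute, only a proxy (here a neighborhood code) that is highly correlated
# with it. All correlations and outcome rates are invented for illustration.
import random

random.seed(2)

rows = []
for _ in range(20_000):
    group = random.choice(["A", "B"])
    if group == "A":
        proxy = "north" if random.random() < 0.9 else "south"
    else:
        proxy = "south" if random.random() < 0.9 else "north"
    outcome = random.random() < (0.7 if group == "A" else 0.4)  # unequal historical outcomes
    rows.append((group, proxy, outcome))

# The simplest accuracy-seeking "model": the outcome rate conditional on the proxy alone.
def outcome_rate(proxy_value):
    subset = [outcome for _, proxy, outcome in rows if proxy == proxy_value]
    return sum(subset) / len(subset)

model = {proxy: outcome_rate(proxy) for proxy in ("north", "south")}
print("scores by proxy:", {k: round(v, 3) for k, v in model.items()})

# Apply the proxy-only model and compare predictions by the (hidden) group.
for group in ("A", "B"):
    scores = [model[proxy] for g, proxy, _ in rows if g == group]
    print(group, "average predicted score:", round(sum(scores) / len(scores), 3))
# The protected attribute never enters the model, yet the predictions differ
# sharply by group because the proxy redundantly encodes membership.
```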

Data mining could also breathe new life into traditional forms of intentional discrimination because decision-makers with prejudicial views can mask their intentions by exploiting each of the mechanisms enumerated above. Stated simply, any form of discrimination that happens unintentionally can be orchestrated intentionally as well.

For instance, decision-makers could knowingly and purposefully bias the collection of data to ensure that mining suggests rules that are less favorable to members of protected classes. They could likewise attempt to preserve the known effects of prejudice in prior decision-making by insisting that such decisions constitute a reliable and impartial set of examples from which to induce a decision-making rule. And decision-makers could intentionally rely on features that only permit coarse-grain distinction-making—distinctions that result in avoidable and higher rates of erroneous determinations for members of a protected class.

Because data mining holds the potential to infer otherwise unseen attributes, including those traditionally deemed sensitive, it can furnish methods by which to determine indirectly individuals’ membership in protected classes and to unduly discount, penalize, or exclude such people accordingly. In other words, data mining could grant decision-makers the ability to distinguish and disadvantage members of protected classes without access to explicit information about individuals’ class membership. It could instead help to pinpoint reliable proxies for such membership and thus place institutions in the position to automatically sort individuals into their respective classes without ever having to learn these facts directly.
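The same logic can be run in reverse. The sketch below, again on synthetic data with invented attributes and correlation strengths, shows how ostensibly neutral features can be used to infer protected-class membership that was never collected:

```python
# A minimal sketch of inferring protected-class membership from neutral-looking
# attributes. The attributes ("neighborhood", "long commute") and their
# correlations with group membership are invented for illustration.
import random
from collections import Counter

random.seed(3)

def person():
    group = random.choice(["A", "B"])
    neighborhood = "north" if random.random() < (0.8 if group == "A" else 0.2) else "south"
    long_commute = random.random() < (0.2 if group == "A" else 0.6)
    return group, neighborhood, long_commute

train = [person() for _ in range(10_000)]
test = [person() for _ in range(10_000)]

# "Classifier": for each attribute combination, predict the majority group seen in training.
counts = {}
for group, neighborhood, long_commute in train:
    counts.setdefault((neighborhood, long_commute), Counter())[group] += 1
predict = {key: counter.most_common(1)[0][0] for key, counter in counts.items()}

correct = sum(predict[(neighborhood, long_commute)] == group
              for group, neighborhood, long_commute in test)
print("inferred membership accuracy:", round(correct / len(test), 3))
# No protected attribute is ever collected, yet membership can be guessed well
# above chance and used to sort, discount, or exclude applicants.
```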

Tuesday, August 26, 2014

Knack Testing Illegal Under ADA?

Wasabi Waiter looks a lot like hundreds of other simple online games. Players acting as sushi servers track the moods of their customers, deliver them dishes that correspond to those emotions, and clear plates while tending to incoming patrons. Unlike most games, though, Wasabi Waiter purportedly analyzes every millisecond of player behavior, measuring conscientiousness, emotion recognition, and other attributes that academic studies show correlate with job performance. The game, designed by startup Knack.it, then scores each player’s likelihood of becoming an outstanding employee.

Knack's assessments are based on games developed by the company that may be "played" on computers and mobile devices. Interesting, but how do persons with disabilities play these games? How would a blind person play them? How would a person with limb paralysis play them? How would a person with diminished mental capacity play them? How well would a person who may not be computer literate, an older person for example, play them? What advantage, if any, does a gaming environment provide for one class of persons (the young male online gamer) versus another (the mature female non-gamer)?

Screening Out Applicants

Tests that screen out or tend to screen out an individual with a disability or a class of individuals with disabilities are illegal under the Americans with Disabilities Act (ADA) unless the tests are job-related and consistent with business necessity.

Knack testing relies on gamification. Applicants "play" Wasabi Waiter, Balloon Brigade, and other video games to generate the data used by Knack to identify promising applicants. As noted above, however, the reliance on video games screens out persons with disabilities, whether physical disabilities like blindness and limb paralysis or mental disabilities like diminished mental capacity.

Phrased differently, how would physicist Stephen Hawking, clearly an innovator and high performer, fare in taking Knack's Balloon Brigade? Hawking has a motor neurone disease related to amyotrophic lateral sclerosis, a condition that has progressed over the years. He is almost entirely paralysed and communicates through a speech generating device.

From a practical standpoint, legal claims that an individual with a disability has been screened out do not require a statistical showing of disparate impact or other comparative evidence showing that a group of disabled persons is adversely affected. The plain language of the law – “screen out or tend to screen out” and “an individual with a disability or a class of individuals with disabilities” – confirms that a claim may be supported by evidence that the challenged practice screens out an individual on the basis of their disability. “In the ADA context, a plaintiff may satisfy the second prong of his prima facie case [impact upon persons with a protected characteristic] by demonstrating an adverse impact on himself rather than on an entire group.” Gonzalez v. City of New Braunfels.

Illegal Medical Examination

The ADA prohibits employers, whether directly or via third parties like Knack, from administering pre-employment medical examinations. Guidance by the Equal Employment Opportunity Commission defines medical examination under the ADA by reference to seven factors, any one of which may be sufficient to determine that a test is a medical examination.

Physiological Responses

One of those factors is whether the test measures an applicant's physiological responses to performing a task. EEOC guidance on this issue states:
[I]f an employer measures an applicant's physiological or biological responses to performance, the test would be medical.
According to Knack, its test:
leverages cutting-edge behavioral and cognitive neuroscience, data science, and computer science to build games which produce thousands of data points describing how a player perceives, responds, plans, reacts, thinks, problem-solves, adapts, learns, persists, and performs in a multitude of situations.
A physiological response is a reaction: a bodily process occurring due to the effect of some antecedent stimulus or agent. As noted in the prior paragraph, Knack tests create data points that track how an applicant perceives, responds, reacts, adapts, learns, and persists. The Knack test, therefore, is an illegal medical examination under the ADA.

Five Factor Model of Personality

Justin Fox, executive editor of the Harvard Business Review Group, took two of the Knack assessments and received information in the following report:


As can be seen from the report, among the factors measured by Knack are conscientiousness, openness, and stability. These are elements found in the Five Factor Model of Personality, a model that is currently being challenged in at least seven charges filed with the EEOC. Please see ADA, FFM and DSM.

The ADA prohibits pre-employment medical exams but allows employers to “make pre-employment inquiries into the ability of an applicant to perform job-related functions.” The Knack gaming measurements do not seek job-related information and are not consistent with business necessity. The measurements, designed to reveal information about individuals’ openness, conscientiousness, stability (also referred to as neuroticism), and other factors, do not seek information about the ability of an applicant to perform the day-to-day functions of a job.

Knowledge of Disability Not Required

Neither the medical examination claim nor the "screen out" claim under the ADA requires that an employer have knowledge that an applicant has a disability, a consistent holding from a number of jurisdictions, including the 7th, 9th, 10th, and 11th Federal Circuit Courts of Appeals.

ADA guidance states, in relevant part:
A covered entity shall not require a medical examination and shall not make inquiries of an employee as to whether such employee is an individual with a disability or as to the nature and severity of the disability, unless such examination or inquiry is shown to be job-related and consistent with business necessity.
According to guidance issued by the EEOC, "This statutory language makes clear that the ADA’s restrictions on inquiries and examinations apply to all employees, not just those with disabilities.”