Friday, November 29, 2013

Employment Testing: Hot Button Issue for EEOC and OFCCP

On October 30, 2013, the U.S. Department of Labor announced that federal construction contractor M.C. Dean Inc. had settled allegations that it failed to provide equal employment opportunity to 381 African American, Hispanic and Asian American workers who applied for jobs at the company's Dulles headquarters. A review by the department's Office of Federal Contract Compliance Programs determined that the contractor used a set of selection procedures, including invalid tests, which unfairly kept qualified minority candidates from securing jobs as apprentices and electricians.
"Our nation was built on the principles of fair play and equal opportunity, and artificial barriers that keep workers from securing good jobs violate those principles," said OFCCP Director Patricia A. Shiu. "I am pleased that this settlement will provide remedies to the affected workers and that M.C. Dean has agreed to invest significant resources to improve its hiring practices so that this never happens again."
Under the terms of the agreement, M.C. Dean will pay $875,000 in back wages and interest to 272 African American, 98 Hispanic and 11 Asian American job applicants who were denied employment in 2010. The contractor will also extend 39 job offers to the class members as opportunities become available. Additionally, M.C. Dean has agreed to undertake extensive self-monitoring measures and personnel training to ensure that all of its employment practices fully comply with Executive Order 11246, which prohibits federal contractors and subcontractors from discriminating in employment on the bases of race, color and national origin.
This settlement offers (at least) two lessons for all federal contractors. First, the OFCCP is digging deeper than the overall applicant-to-hire adverse impact analysis. Where there is overall adverse impact in the hiring process, the Agency will analyze each stage (screen, test, interview, offer, etc.) of that process for adverse impact. Second, where there is adverse impact at the testing stage, employers must be prepared to defend the validity of their "tests." In these cases, OFCCP will request the validation materials and send them to its Industrial-Organizational Psychologist for review, so the validation must be able to withstand scrutiny, including whether the test has been (i) validated recently, (ii) validated for the employer's specific position, and (iii) shown to have no less discriminatory alternative that achieves the same predictive results for job performance.
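By way of illustration, the stage-by-stage screen described above is often approximated with the four-fifths (80 percent) rule from the Uniform Guidelines on Employee Selection Procedures: a selection rate for a protected group that is less than 80 percent of the rate for the most favored group is generally regarded as evidence of adverse impact. The sketch below uses entirely hypothetical applicant counts and is not the OFCCP's actual methodology, which also considers statistical significance.

# Minimal sketch (hypothetical numbers, not OFCCP's actual methodology):
# apply the four-fifths rule separately at each stage of a hiring process.

def impact_ratio(minority_selected, minority_total, comparison_selected, comparison_total):
    """Return the two selection rates and their ratio (minority rate / comparison rate)."""
    minority_rate = minority_selected / minority_total
    comparison_rate = comparison_selected / comparison_total
    return minority_rate, comparison_rate, minority_rate / comparison_rate

# Hypothetical applicant flow, by stage:
# (minority selected, minority total, comparison selected, comparison total)
stages = {
    "resume screen": (80, 100, 90, 100),
    "test":          (30,  80, 60,  90),
    "interview":     (20,  30, 40,  60),
}

for stage, counts in stages.items():
    m_rate, c_rate, ratio = impact_ratio(*counts)
    flag = "potential adverse impact" if ratio < 0.8 else "passes four-fifths rule"
    print(f"{stage}: {m_rate:.2f} vs {c_rate:.2f} -> ratio {ratio:.2f} ({flag})")

In this invented example, the overall process might look acceptable, yet the testing stage alone would fail the four-fifths rule, which is exactly the kind of stage-level finding that triggers a request for validation evidence.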
In particular, employers may not realize they are "at risk" in these audits if they use employment tests that have never been validated, that have not been validated for the specific position for which they are used, that have not been validated for their specific company, that have not been reviewed by anyone other than the testing vendor that created them, or that have not been revalidated as the position changed over time.
In short, own each step of your hiring process – even if a third-party testing vendor created and/or administers your test, the employer will be held accountable if the test causes adverse impact and is not properly validated.  Employers need to get in front of these testing issues by analyzing the test’s potential adverse impact and existing validation to minimize exposure during audits.  Notably, this has also become a “hot button” for EEOC, so taking a close look at your tests can help minimize exposure to both OFCCP and EEOC claims.

Wednesday, November 27, 2013

On Not Dying Young: Fatal Illness or Flawed Algorithm?

Lukas F. Hartman, in a November 26, 2013 posting titled "Why 23andMe has the FDA worried: It wrongly told me I might die young," demonstrates the need for skepticism and oversight of many algorithmic-based decision models.


23andMe is one of many companies to offer at-home genetic testing; in September it reported that its database had reached 400,000 people. Scientists have raised questions about the accuracy of the tests, and in May 2011 a Dutch study claimed the tests were inaccurate and offered little to no benefit to consumers. 23andMe's $99 Saliva Collection Kit and Personal Genome Service (PGS) claims to test saliva to provide data showing users how their genetics may affect their health and to explore their personal ancestry. The company is backed by Google.

The US Food and Drug Administration (FDA) recently ordered 23andMe to "immediately discontinue" the marketing of a genetic screening service, after the company failed to send the agency information that supports its marketing claims. "FDA is concerned about the public health consequences of inaccurate results from the PGS device; the main purpose of compliance with FDA's regulatory requirements is to ensure that the tests work," wrote Alberto Gutierrez, director of the FDA's Office of In Vitro Diagnostics and Radiological Health, in the letter, which was dated November 22, 2013 and addressed to 23andMe co-founder Anne Wojcicki.

An Unwelcome Surprise

Mr. Hartman signed up for 23andMe in November 2010. He sent the company his saliva and received a web login to his genome in return.


23andMe extracts a sort of gene soup from a person's saliva and pours it on a DNA microarray chip made by a company called Illumina. These chips are covered with thousands of little testing probes. A probe is made up of a lump of molecules to which matching pieces of a person's DNA naturally attach. These molecules are designed so that they light up when a match occurs. Hundreds of thousands of chemical tests run in parallel on the chip. The result is an image that is scanned by a computer and compared to a database of so-called SNPs ("snips"). According to Wikipedia, these "single nucleotide polymorphisms" make up about 90% of all genetic variation in the human genome. So when 23andMe detects a SNP variation in a person's genome, it means that in a base pair of that person's DNA there is a difference from the so-called "reference genome."

To sum it up, 23andMe compares hundreds of thousands of scanned SNPs to its database, which is constantly updated in response to new scientific studies and sources. The website then shows you nicely designed, ready-to-ingest interpretations of your genetic variations and the health risks they may indicate. Every time there are new updates to "Health Risks" or "Inherited Conditions," you receive an email.
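As a rough illustration of that comparison step, the sketch below checks a few genotype calls against reference alleles. This is not 23andMe's pipeline; the reference alleles, genotypes, and the third rsID are invented, and only the idea of flagging differences from a reference genome is preserved.

# Toy illustration (not 23andMe's actual pipeline): flag SNPs whose called
# genotype differs from a hypothetical reference allele. All alleles and the
# third rsID below are invented for illustration.

reference_alleles = {"rs28933693": "G", "rs28937900": "C", "rs0000001": "T"}

genotype_calls = [
    ("rs28933693", "AG"),  # one copy differs from reference -> heterozygous variant
    ("rs28937900", "CT"),  # one copy differs from reference -> heterozygous variant
    ("rs0000001",  "TT"),  # matches reference -> nothing to report
]

for rsid, genotype in genotype_calls:
    reference = reference_alleles[rsid]
    differing = sum(1 for allele in genotype if allele != reference)
    if differing == 1:
        print(f"{rsid}: heterozygous variant ({genotype} vs reference {reference})")
    elif differing == 2:
        print(f"{rsid}: homozygous variant ({genotype} vs reference {reference})")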

Everything went well for a long time. There were no special surprises. But some weeks ago there was, suddenly, an unnerving update in Mr. Hartman's inherited conditions report. He clicked the link and a warning appeared: you have to specifically agree before being shown potentially unnerving, life-changing results. He clicked OK and was forwarded to the result. It said:
Has two mutations linked to limb-girdle muscular dystrophy. A person with two of these mutations typically has limb-girdle muscular dystrophy.
Mr. Hartman let that sink in for a moment. He had never heard of this illness before. "Some people with limb-girdle muscular dystrophy lose the ability to walk and suffer from serious disability," said the page, showing Mr. Hartman an image of a smiling physical therapist treating a smiling patient. What 23andMe didn't spell out—but Wikipedia did—was that LGMD potentially ends in death.

Coding Error or Genetic Condition?

Mr. Hartman downloaded his 23andMe data and poked at it with a text editor. He read cryptic articles about genetic engineering and installed a genome analysis tool, "Promethease," which can import, among other formats, 23andMe raw data; but in contrast to 23andMe, it tells you even the very unnerving stuff. Someone had found a bug in Mr. Hartman, and he tried to reproduce it.

Technically speaking, 23andMe detected two SNP variations in Mr. Hartman's genome, called rs28933693 and rs28937900. So he attempted to find out more about these mutations. When you look up "rs28933693" in SNPedia, a kind of Wikipedia for SNPs, you'll find a link to an entry in OMIM (Online Mendelian Inheritance in Man). The entry features medical study excerpts concerning some LGMD patients who all had the same so-called homozygous mutation in a certain gene location.

To understand the meaning of this, you have to recall that humans are diploid organisms: we have two copies of each chromosome, one inherited from the mother and one from the father. A heterozygous mutation affects only one of the two copies; a homozygous mutation means that the same location on both copies differs in the same way.

Being diploid is a good thing; it means that we potentially have a backup of every critical function of our body. So if a piece of a person's DNA encodes a critical enzyme and this code is "broken" on one of the chromosome copies, it could well be intact on the other. If you're out of luck and both of your parents are "carriers" of exactly the same mutation, the inherited condition may manifest in you. This was the case with the LGMD patients mentioned in the study Mr. Hartman stumbled upon. Both of their copies of the respective chromosome region were mutated in the same (homozygous) way, which triggers the muscular dystrophy. This very rarely happens, but it happens.

After some hours of tense research, Mr. Hartman looked more closely at the data that 23andMe provided as a download. Yes, he really had two mutations. But they were not on the same gene; they were on two different genes. By rare chance, both of these mutations are statistically linked to LGMD, but to two different versions of LGMD. So he didn't have a homozygous mutation, but two unrelated heterozygous ones. The web programmers at 23andMe had added those two mutations together into one homozygous mutation in their code. And so the algorithm switched to red alert.

Mr. Hartman sent a support request to 23andMe including his research and conclusions (this would be called a "bug report" in software engineering). After a few days of waiting, 23andMe confirmed the bug and apologized. So the bug was not inside of Mr. Hartman, but in the algorithm. An algorithm can be fixed easily, unlike someone's genetic code.
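Because the actual 23andMe code is not public, the following is only a schematic guess at the kind of logic error described above: two unrelated heterozygous variants on different genes being added together and reported as though both copies of one gene were affected. The gene names and data layout are hypothetical.

# Schematic sketch only: the real 23andMe code is not public, and the gene names
# and data layout below are hypothetical. It illustrates how combining two
# unrelated heterozygous variants can produce a false "homozygous-style" finding.
from collections import Counter

variants = [
    {"rsid": "rs28933693", "gene": "GENE_A", "zygosity": "heterozygous"},
    {"rsid": "rs28937900", "gene": "GENE_B", "zygosity": "heterozygous"},
]

def buggy_assessment(variants):
    # Bug: any two condition-linked mutations are added together, and the report
    # treats "two mutations" as if both copies of one gene were affected.
    return "has two mutations linked to LGMD" if len(variants) >= 2 else "carrier at most"

def corrected_assessment(variants):
    # Fix (simplified): only flag the condition when both copies of the same gene
    # appear affected, i.e., a homozygous variant or two variants in one gene.
    hits_per_gene = Counter(v["gene"] for v in variants)
    homozygous = any(v["zygosity"] == "homozygous" for v in variants)
    return ("likely affected"
            if homozygous or any(count >= 2 for count in hits_per_gene.values())
            else "carrier at most")

print(buggy_assessment(variants))      # -> "has two mutations linked to LGMD" (false alarm)
print(corrected_assessment(variants))  # -> "carrier at most"

The point is not the particular fix but that a small grouping decision, invisible to the user, determined whether the report read as a carrier status or as a likely diagnosis.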

False Positives, False Negatives and the Risks of Automation Bias

Human judgment is subject to an automation bias which, as discussed in a 2010 law review article, fosters a tendency to disregard or not search for contradictory information in light of a computer-generated solution that is accepted as correct. Such bias has been found to be most pronounced when computer technology fails to flag a problem.

In a study from the medical context, researchers compared the diagnostic accuracy of two groups of experienced mammogram readers (radiologists, radiographers, and breast clinicians)—one aided by a Computer Aided Detection (CAD) program and the other lacking access to the technology. The study revealed that, when the CAD program did not flag a concerning presentation, the first group was almost twice as likely as the second group to miss signs of cancer.

The false positive for limb-girdle muscular dystrophy that 23andMe emailed Mr. Hartman is clearly problematic, but the risk of a false negative, when combined with automation bias, is potentially catastrophic.

BRCA1 Gene
Consider, for illustrative purposes, coding errors resulting in false negatives for the breast cancer 1, early onset (BRCA1) gene. The strands of the DNA double helix are continuously breaking from damage; sometimes one strand is broken, and sometimes both strands are broken simultaneously. BRCA1 is part of a protein complex that repairs DNA when both strands are broken.

Researchers have identified more than 1,000 mutations in the BRCA1 gene, many of which are associated with an increased risk of cancer. They believe that the defective BRCA1 protein is unable to help fix DNA damage, leading to mutations in other genes. These mutations can accumulate and may allow cells to grow and divide uncontrollably to form a tumor.

Women who have inherited a defective BRCA1 gene have risks for breast and ovarian cancer that are so high and seem so selective that many women with BRCA1 mutations choose to have prophylactic surgery. Why? Bilateral prophylactic mastectomy has been shown to reduce the risk of breast cancer by at least 95 percent in women who have a mutation in the BRCA1 gene. A woman receiving a false negative for a BRCA1 mutation would not consider prophylactic surgery. Why should she, when she believes she has no BRCA1 mutation?

A false negative creates a false sense of security and restricts a woman's right to choose. To choose whether to have prophylactic surgery; to choose to have more intense monitoring; to choose alternative therapies; to choose life.

* * * * *

The potential and pitfalls of an increasingly algorithmic world raise the question of whether legal and policy changes are needed to regulate our changing environment. Should we regulate, or further regulate, algorithms in certain contexts? What would such regulation look like? Is it even possible? What ill effects might regulation itself cause? Given the ubiquity of algorithms, do they, in a sense, regulate us?



Monday, November 18, 2013

Do We Regulate Algorithms, or Do Algorithms Regulate Us?

The genesis for this posting comes from the following articles:
This posting includes portions of the articles and modifies them to address issues relating to big data and the use of algorithmic decisionmaking in the area of pre-employment assessments and workforce optimization.

Embedding Bias

Every step in the big data pipeline raises concerns: the privacy implications of amassing, connecting, and using personal information, the implicit and explicit biases embedded in both datasets and algorithms, and the individual and societal consequences of the resulting classifications and segmentation.

While many companies and government agencies foster an illusion that classification is (or should be) an area of absolute algorithmic rule—that decisions are neutral, organic, and even automatically rendered without human intervention—reality is a far messier mix of technical and human curating. Data isn't something that's abstract and value-neutral. Data only exists when it's collected, and collecting data is a human activity. And in turn, the act of collecting and analyzing data changes (one could even say "interprets") us. 

Both the datasets and the algorithms reflect choices, among others, about data, connections, inferences, interpretation, and thresholds for inclusion that advance a specific purpose. Like maps that represent the physical environment in varied ways to serve different needs—mountaineering, sightseeing, or shopping—classification systems are neither neutral nor objective, but are biased toward their purposes. They reflect the explicit and implicit values of their designers. Assumptions are embedded in a data model upon its creation. Data sources are shaped through "washing," integration, and algorithmic calculations in order to be commensurate to an acceptable level that allows a data set to be created.

Errors are not only possible, but likely to occur at each stage in the process of assessment that proceeds from identification to its conclusion in a discriminatory act. Error is inherent in the nature of the processes through which reality is represented as digitally encoded data. Some of these errors will be random, but most will reflect the biases inherent in the theories, goals, instruments, and institutions that govern the collection of data in the first place.

Clear Windshield or Rearview Mirror?

The decisions made by the users of sophisticated analytics determine the provision, denial, enhancement, or restriction of the opportunities that citizens and consumers face both inside and outside formal markets.


Algorithms embody a profound deference to precedent; they draw on the past to act on (and enact) the future. The apparent omniscience of big data may in truth be nothing more than misdirection. Instead of offering a clear windshield, the big data phenomenon may be more like a big rear-view mirror telling us nothing about the future.

Does this deference to precedent result in a self-reinforcing and self-perpetuating system, where individuals are forever burdened by a history that they are encouraged to repeat and from which they are unable to escape? Does deference to past patterns augment path dependence, reduce individual choice, and result in cumulative disadvantage?

Already burdened segments of the population can become further victimized through the use of sophisticated algorithms in support of the identification, classification, segmentation, and targeting of individuals as members of analytically constructed groups. In creating these groups, the algorithms rely upon generalizations that lead to viewing people as members of populations, categories, or groups (e.g., persons who live more than X miles from a jobsite), rather than as individuals.

Shrouding Opacity In The Guise of Legitimacy

Workforce analytic systems, designed in part to mitigate risks for employers, have now become sources of material risk. The systems create the perception of stability through probabilistic reasoning and the experience of accuracy, reliability, and comprehensiveness through automation and presentation. But in so doing, technology systems draw organizational attention away from uncertainty and partiality. They embed, and then justify, self-interested assumptions and hypotheses.

Moreover, they shroud opacity—and the challenges for oversight that opacity presents—in the guise of legitimacy, providing the allure of shortcuts and safe harbors for actors both challenged by resource constraints and desperate for acceptable means to demonstrate compliance with legal mandates and market expectations.

The technical language of workforce analytic systems obscures the accountability of the decisions they channel. Programming and mathematical idiom can shield layers of embedded assumptions from high-level firm decisionmakers charged with meaningful oversight and can mask important concerns with a veneer of transparency. This problem is compounded in the case of regulators outside the firm, who frequently lack the resources or vantage to peer inside buried decision processes and must instead rely on the resulting conclusions about risks and safeguards offered them by the parties they regulate.

Do We Regulate Algorithms, or Do Algorithms Regulate Us?

Can an algorithm be agnostic? Algorithms may be rule-based mechanisms that fulfill requests, but they are also governing agents that choose between competing, and sometimes conflicting, data objects.

The potential and pitfalls of an increasingly algorithmic world raise the question of whether legal and policy changes are needed to regulate our changing environment. Should we regulate, or further regulate, algorithms in certain contexts? What would such regulation look like? Is it even possible? What ill effects might regulation itself cause? Given the ubiquity of algorithms, do they, in a sense, regulate us?

We regulate markets, and market behavior, out of concerns for equity, as well as out of concern for efficiency. The fact that the impacts of design flaws are inequitably distributed is at least one basis for justifying regulatory intervention.

The regulatory challenge is to find ways to internalize the many external costs generated by the rapidly expanding use of analytics. That is, to find ways to force the providers and users of discriminatory technologies to pay the full social costs of their use. Requirements to warn, or otherwise inform users and their customers about the risks associated with the use of these systems should not absolve system producers of their own responsibility for reducing or mitigating the harms. This is part of imposing economic burdens or using incentives as tools to shape behavior most efficiently and effectively.







Tuesday, November 5, 2013

Positive Trending for Claims Challenging the Legality of Pre-Employment Assessments

A variety of factors are trending in favor of eliminating the use of pre-employment assessments that violate the Americans with Disabilities Act (ADA) and the Rehabilitation Act of 1973, including:
  • Implementation of the EEOC Strategic Enforcement Plan for 2013-2016
    • The first national priority of the EEOC in the strategic enforcement plan is “eliminating systemic barriers in recruitment and hiring.”
    • “[P]eople with disabilities continue to confront discriminatory policies and practices at the recruitment and hiring stages. These include … the use of screening tools (e.g., pre-employment tests …).”
  • EEOC Systemic Investigation of Pre-Employment Testing and the ADA
    • Stemming from more than six years of litigation by the EEOC against Kroger and Kronos 
    • September 14, 2012 Third Circuit Court of Appeals decision in EEOC v. Kronos Incorporated
      • It is “a proper inquiry for the EEOC to seek information about how these tests work, including information about the types of characteristics they screen out….“ Third Circuit Court of Appeals (September 14, 2012)
    • Transfer of two charges from Atlanta EEOC to the EEOC office leading the systemic investigation

  • EEOC Focus on Disability Discrimination Litigation

      • ADA claims accounted for the largest percentage of the EEOC’s litigation filings for FY 2013
      • A snapshot of the cases filed by the EEOC in the last week of the fiscal year shows that almost half were based on disability discrimination.
    • CVS/Rhode Island ACLU Settlement
      • CVS eliminated the use of a pre-offer assessment as a consequence of a claim by the ACLU that questions from the assessment could have a discriminatory impact on people with mental impairments or disorders.
      • Please see Challenges to Pre-Employment Assessments
    • Karraker Court Decision
      • Rejected “form” defenses (e.g., that the test was not reviewed by a medical professional) and dismantled the distinction between a test that evaluates personality and one that diagnoses mental disorders
      • Please see Courts Find Tests To Be Illegal
    • Adoption of the Five-Factor Model in DSM-5 by the American Psychiatric Association
      • Based on two decades of research demonstrating that the five-factor model - used as the basis for many of the pre-employment personality tests - can be used as a structural model for describing and understanding personality disorders, including those within the Diagnostic and Statistical Manual of Mental Disorders (DSM)
      • Please see ADA, FFM and DSM
    • Significant Risk of Punitive Damages
      • In addition to claims for actual or compensatory damages, which may be nominal on a per person basis, applicants may also seek punitive damages for the reckless behavior of the employers that used illegal pre-employment assessments.
      • In State of Arizona v. ASARCO LLC, No. 11-17484 (9th Cir. Oct. 24, 2013), the 9th Circuit Court of Appeals held that a punitive damages award of $125,000 in an employment discrimination case finding no actual damages and $1 in nominal damages was constitutional and "did not raise judicial eyebrows."
      • Please see Punitive Damages
    • OFCCP issuance of non-discrimination and affirmative action regulations for individuals with disabilities (IWDs)
      • Regulations establish a 7% workforce utilization goal for IWDs.
      • Contractors must apply the 7% goal to each job group in their workforce (see the illustrative sketch following this list).
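As a rough illustration of the utilization-goal arithmetic referenced above, the sketch below uses invented job groups and headcounts; the actual regulations involve considerably more than this comparison.

# Hypothetical sketch only: job groups and headcounts are invented, and the actual
# OFCCP regulations involve more than this arithmetic. It simply compares IWD
# representation in each job group to the 7% utilization goal.

UTILIZATION_GOAL = 0.07  # 7% goal for individuals with disabilities (IWDs)

job_groups = {
    "Officials & Managers": {"iwd": 3,  "total": 60},
    "Professionals":        {"iwd": 10, "total": 120},
    "Technicians":          {"iwd": 2,  "total": 45},
}

for group, counts in job_groups.items():
    rate = counts["iwd"] / counts["total"]
    status = "meets goal" if rate >= UTILIZATION_GOAL else "below goal; assess impediments"
    print(f"{group}: {rate:.1%} IWD representation ({status})")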
Why Success Is Important

The long-term fiscal stability of the United States of America depends, in part, on ensuring that Americans with disabilities have meaningful opportunities to contribute to our collective well-being and on eliminating outdated policies that keep people in cycles of poverty and dependency.

More than two decades after the passage of the ADA, the unemployment rate for Americans with disabilities stubbornly remains nearly double that of people without disabilities, while their rate of labor force participation has continued to be abysmally low. Figures from the Bureau of Labor Statistics show that labor force participation for workers with disabilities was 20.3 percent, while the rate for workers without disabilities was 69.1 percent—more than three times as high. As of April 2012, the unemployment rate for people with disabilities was 12.5 percent, versus 7.6 percent for those without disabilities.

There are many benefits of employment—work enhances communication, socialization, academic, and community skills, as well as physical health; it factors into how one is perceived by society; it promotes economic well-being; it leads to greater opportunity for upward mobility; and it contributes to greater self-esteem. Yet only 15 percent of those with a mental disability are in the labor market. Please see So Many Job Openings, So Little Hiring.