Sunday, June 22, 2014

Decision-by-Algorithm: No Silver Bullet

This post contains excerpts from "The Scored Society: Due Process for Automated Predictions," a 2014 Washington Law Review article by Danielle Keats Citron and Frank A. Pasquale III.


* * * * * * *

Big Data is increasingly mined to rank and rate individuals. Predictive algorithms assess individuals as good credit risks, desirable employees, reliable tenants, and valuable customers. People’s crucial life opportunities are on the line, including their ability to obtain loans, work, housing, and insurance.

The scoring trend is often touted as good news. Advocates applaud the removal of human beings and their flaws from the assessment process. Automated systems are claimed to rate all individuals in the same way, thus averting discrimination. But this account is misleading. Human beings program predictive algorithms. Their biases and values are embedded into the software's instructions (the source code) and into the predictive models it produces. Please see What Gets Lost? Risks of Translating Psychological Models and Legal Requirements to Computer Code.

Credit scoring has been lauded as shifting decision-makers’ attention from troubling stereotypes to bias-free assessments of would-be borrowers’ actual records of handling credit. The notion is that the more objective data at a lender’s disposal, the less likely a decision will be based on protected characteristics like race or gender. But far from eliminating existing discriminatory practices, credit-scoring algorithms are instead granting them an imprimatur, systematizing them in hidden ways.

A credit card company uses behavioral-scoring algorithms to rate consumers as worse credit risks because they used their cards to pay for marriage counseling, therapy, or tire-repair services. Online evaluation systems score interviewees with color-coded ratings: red signals a "poor candidate," yellow a middling one, and green "hire away."
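
To make the mechanics concrete, here is a minimal sketch (in Python) of the kind of behavioral-scoring rule described above. Everything in it is hypothetical: the penalized purchase categories, the point deductions, and the color-band thresholds are invented for illustration, and real scoring systems keep such details hidden.

# Hypothetical behavioral-scoring rule: deduct points for purchases in
# categories a lender treats as signals of financial or personal distress.
PENALIZED_CATEGORIES = {
    "marriage_counseling": 40,
    "therapy": 35,
    "tire_repair": 20,
}

def behavioral_score(purchase_categories, base_score=700):
    """Start from a base score and deduct points per flagged purchase."""
    score = base_score
    for category in purchase_categories:
        score -= PENALIZED_CATEGORIES.get(category, 0)
    return score

def color_code(score):
    """Map a numeric score to the red/yellow/green bands described above."""
    if score >= 680:
        return "green"   # "hire away" / low risk
    if score >= 620:
        return "yellow"  # middling
    return "red"         # "poor candidate" / high risk

print(color_code(behavioral_score(["therapy", "tire_repair"])))  # yellow (645)

Note that the judgment calls (which purchases count as "risky," where the color cutoffs sit) are made by whoever writes the table and the thresholds, not by the data.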

Beyond biases embedded in code, some automated correlations and inferences may appear objective but in reality reflect bias. Algorithms may assign a low score to occupations like migratory work or low-paying service jobs. Such a correlation may carry no discriminatory intent, but if a majority of those workers are racial minorities, the variable can unfairly tilt decisions on those consumers' loan applications.
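
The proxy effect is easy to demonstrate. In the hypothetical sketch below, the scoring rule never sees race, yet because occupation is correlated with race in the (entirely synthetic) applicant pool, outcomes skew along racial lines anyway.

# Synthetic applicants: (occupation, group). The group label is recorded
# only to audit outcomes afterward; the scoring rule never sees it.
applicants = [
    ("migrant_farm_work", "minority"),
    ("migrant_farm_work", "minority"),
    ("food_service", "minority"),
    ("office_admin", "minority"),
    ("office_admin", "majority"),
    ("engineering", "majority"),
    ("engineering", "majority"),
    ("food_service", "majority"),
]

LOW_SCORED_OCCUPATIONS = {"migrant_farm_work", "food_service"}

def approved(occupation):
    """A facially neutral rule: deny applicants in low-scored occupations."""
    return occupation not in LOW_SCORED_OCCUPATIONS

for group in ("minority", "majority"):
    occupations = [occ for occ, g in applicants if g == group]
    rate = sum(approved(occ) for occ in occupations) / len(occupations)
    print(f"{group} approval rate: {rate:.0%}")
# Output: minority approval rate 25%, majority approval rate 75% --
# disparate impact without a race variable anywhere in the rule.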

Credit scores are only as free from bias as the software and data behind them. Software engineers construct the datasets mined by scoring systems; they define the parameters of the data-mining analyses; they create the clusters, links, and decision trees applied; and they generate the resulting predictive models. The biases and values of system developers and software programmers are embedded into every step of development.
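
A short sketch shows where those human choices enter. It uses scikit-learn's DecisionTreeClassifier as a stand-in for a scoring model; the features, training examples, and labels below are invented, and every commented step is a developer decision.

from sklearn.tree import DecisionTreeClassifier

# Choice 1: which features go into the dataset -- and which are left out.
FEATURES = ["years_at_address", "utilization_pct", "num_late_payments"]

# Choice 2: which examples to train on, and what label each one "deserves."
X = [
    [1, 90, 3],
    [7, 30, 0],
    [2, 75, 2],
    [10, 20, 0],
]
y = [0, 1, 0, 1]  # 0 = "bad risk", 1 = "good risk" -- a human judgment

# Choice 3: the parameters of the data-mining analysis itself.
model = DecisionTreeClassifier(max_depth=2, criterion="gini")
model.fit(X, y)

# The resulting "objective" prediction inherits all three choices.
print(model.predict([[3, 60, 1]]))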

Even as concerns about scoring systems mount, their human element is diminishing. Although software engineers initially identify the correlations and inferences programmed into algorithms, Big Data promises to eliminate the human "middleman" at some point in the process.

According to a January 9, 2014 article on CIO.com, IBM says cognitive computing systems like Watson are capable of understanding the subtleties, idiosyncrasies, idioms, and nuance of human language by mimicking how humans reason and process information.

Whereas traditional computing systems are programmed to calculate rapidly and perform deterministic tasks, IBM says cognitive systems analyze information and draw insights from it using probabilistic analytics. In effect, they continuously reprogram themselves based on what they learn from their interactions with data.
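
In machine-learning terms, "continuously reprogramming themselves" describes online (incremental) learning: model parameters are updated with each new batch of data rather than frozen at deployment. Here is a minimal sketch using scikit-learn's SGDClassifier; the data stream is invented, and this is of course not Watson's actual architecture.

import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")  # probabilistic (logistic) outputs
classes = np.array([0, 1])

# Each "interaction with data" nudges the model's weights a little further.
for X_batch, y_batch in [
    (np.array([[0.2, 1.0], [0.9, 0.1]]), np.array([0, 1])),
    (np.array([[0.3, 0.8], [0.8, 0.3]]), np.array([0, 1])),
]:
    model.partial_fit(X_batch, y_batch, classes=classes)

# Probabilistic analytics: class probabilities rather than flat verdicts.
print(model.predict_proba(np.array([[0.5, 0.5]])))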

Said IBM CEO Ginni Rometty, "In 2011, we introduced a new era [of computing] to you. It is cognitive. It was a new species, if I could call it that. It is taught, not programmed. It gets smarter over time. It makes better judgments over time." "It is not a super search engine," she added. "It can find a needle in a haystack, but it also understands the haystack."

This "new species" of computing has its challenges. According to "IBM Struggles to Turn Watson Computer Into Big Business," a recent Wall Street Journal article:
Watson is having more trouble solving real-life problems than "Jeopardy" questions, according to a review of internal IBM documents and interviews with Watson's first customers. 
For example, Watson's basic learning process requires IBM engineers to master the technicalities of a customer's business—and translate those requirements into usable software. The process has been arduous.
Klaus-Peter Adlassnig is a computer scientist at the Medical University of Vienna and the editor-in-chief of the journal Artificial Intelligence in Medicine. The problem with Watson, as he sees it, is that it’s essentially a really good search engine that can answer questions posed in natural language. Over time, Watson does learn from its mistakes, but Adlassnig suspects that the sort of knowledge Watson acquires from medical texts and case studies is “very flat and very broad.” In a clinical setting, the computer would make for a very thorough but cripplingly literal-minded doctor—not necessarily the most valuable addition to a medical staff.

As Hector J. Levesque, a professor at the University of Toronto and a founding member of the American Association for Artificial Intelligence, wrote:

 "As a field, I believe that we tend to suffer from what might be called serial silver bulletism, defined as follows:
the tendency to believe in a silver bullet for AI, coupled with the belief that previous beliefs about silver bullets were hopelessly naıve. 
We see this in the fads and fashions of AI research over the years: first, automated theorem proving is going to solve it all; then, the methods appear too weak, and we favour expert systems; then the programs are not situated enough, and we move to behaviour-based robotics; then we come to believe that learning from big data is the answer; and on it goes."

Similarly, employment assessment companies have marketed the benefits of "science, precision and data" over the past fifteen years under the guise of neural networks, artificial intelligence, big data, and deep learning. Yet what has changed? Employee engagement levels have hardly budged, and employee turnover remains a continuing and expensive challenge for employers. Please see Gut Check: How Intelligent is Artificial Intelligence?