Saturday, June 14, 2014

Algorithms: Deeply Human Choices Behind Cold Mechanisms

This post comprises excerpts and a graphic from Rethinking Personal Data: A New Lens for Strengthening Trust, a document published by the World Economic Forum and prepared in collaboration with A.T. Kearney. The document addresses the key trust challenges facing the personal data economy and offers a set of near-term and long-term insights for addressing these issues.

* * * * * * *

Complex and opaque, algorithms generate the predictions, recommendations and inferences that drive decision-making in a data-driven society. While easily dismissed as abstract empirical processes, algorithms are deeply human: they reflect the intentions and values of the individuals and institutions that design and deploy them. The ability of algorithms to amplify existing power asymmetries gives rise to debates on their influence over data-driven policy-making.

These debates are complex and value-laden, and they raise fundamental societal choices. Questions of individual autonomy and sovereignty, digital human rights, equitable value distribution and free will are all part of these conversations. There are no easy answers. Through this long-term lens on the impact of proactive computing, the focal point for discussion begins to shift away from personal data, per se, to computer-based profiles of individuals and groups of individuals. These profiles — fueled by fine-grained behavioral and sensor data — make it possible to monitor, predict and instrument social phenomena at both the micro and macro levels.

The world of “smart” environments, where cars, eyeglasses and just about everything else coalesce into the Internet of Things, represents a sea change in how data will be processed. Rather than being based on “interactive” human-machine computing, smart environments rely upon “proactive computing”. By design, these proactive environments are one step ahead of individuals: connected cars need to anticipate accidents before they happen, and flood-prone areas need to be evacuated before major storms hit. A small illustrative sketch follows.
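
As an aside (this sketch is mine, not from the report), the shift from interactive to proactive computing can be made concrete with a few lines of Python. Everything here is invented for illustration: the sensor data, the threshold and the function names predict_water_level and issue_evacuation_alert.

    # Hypothetical sketch of "proactive computing": the system acts on a
    # predicted future state rather than waiting for a human request.
    # The data, threshold and function names are invented for illustration.

    SENSOR_READINGS = [2.1, 2.4, 2.9, 3.5, 4.2]  # river level in metres, hourly
    FLOOD_THRESHOLD = 5.0                        # level at which flooding begins

    def predict_water_level(history, hours_ahead=3):
        """Naive linear extrapolation from the last two readings."""
        rate = history[-1] - history[-2]
        return history[-1] + rate * hours_ahead

    def issue_evacuation_alert(level):
        print(f"ALERT: level projected to reach {level:.1f} m; evacuate now.")

    # An "interactive" system would wait to be asked whether flooding is likely.
    # A "proactive" system checks continuously and acts before the event occurs.
    projected = predict_water_level(SENSOR_READINGS)
    if projected >= FLOOD_THRESHOLD:
        issue_evacuation_alert(projected)

The design point is simply that the trigger is a predicted future state, not a user request: the system decides to act before the event happens.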

The emphasis on proactive computing will change the role of human intervention from a governance perspective. Because humans lack a full understanding of how complex systems work, their ability to understand, make decisions and adapt can be too slow, incomplete and unreliable. In this brave new world, building trust from the “principles up” will be essential and will require new forms of governance that are open, inclusive, self-healing and generative.

From a community and societal perspective, as civil “regulation-by-algorithm” begins to scale, incumbent interests and power asymmetries will play an increasing role in establishing who gets access to an array of commercial and governmental services. As such, there is a need to ensure that the algorithms driving proactive and anticipatory decisions are lawful and fair, and that they can be explained intelligibly. Meaningful responses must be given “when individuals are singled out to receive differentiated treatment by an automated recommendation system”.

One emerging set of concerns is the institutional ability “to discover and exploit the limits of an individual’s ability to pursue their own self-interest.” Given that a majority of future consumer interactions will be mediated via devices and commercially oriented communications platforms, data-centric institutions will have the means and the incentives to trigger “predictable irrationality” from individuals.

With a vast trail of “digital breadcrumbs” available for companies to mine in order to tailor highly personalized experiences, concerns are growing that individuals could be profiled and targeted at moments of key vulnerability (decision fatigue, information overload, etc.), limiting their ability to act with agency and in their own self-interest. With the lives of individuals increasingly mediated by algorithms, a richer understanding is needed of how people adapt their behaviors to empower themselves and gain more control over how profiles and algorithms shape their lives in areas such as credit scores, retail experiences, differential pricing, reputational currencies and insurance rates.

One of the most strategic insights on strengthening trust is the concept of exploring ways to communicate the intended consequences of data usage to individuals. For example, the 2012 Draft European Data Protection Act (section 20) calls for “the obligation for data controllers to provide information about the envisaged effects of such processing on the data subject”.
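
To make the idea concrete (this sketch is mine, not from the report or the draft regulation), such a disclosure could be expressed as a small machine-readable record. Everything here, from the class name EnvisagedEffectsNotice to the example values, is assumed purely for illustration; no real schema or legal template is implied.

    # Hypothetical sketch of a machine-readable notice of the "envisaged
    # effects" of processing, in the spirit of the provision quoted above.
    # Field names and example values are invented; no real schema is implied.

    from dataclasses import dataclass

    @dataclass
    class EnvisagedEffectsNotice:
        controller: str               # who processes the data
        data_categories: list[str]    # what is collected
        envisaged_effects: list[str]  # how processing may affect the data subject
        automated_decision: bool      # is differentiated treatment automated?

    notice = EnvisagedEffectsNotice(
        controller="ExampleCo",
        data_categories=["purchase history", "location traces"],
        envisaged_effects=[
            "personalised pricing on future offers",
            "credit-limit adjustments based on a behavioural score",
        ],
        automated_decision=True,
    )

    print(notice)  # a service could be required to present this before processing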

To address this emerging set of concerns, establishing a cross-disciplinary community of forward-looking experts, complexity scientists, biologists, policy-makers and business leaders with an appreciation of the long-term societal impact was identified as a priority. This group would proactively help design and test systems that balance the commercial, legal, civil and technological incentives shaping outcomes at the individual and societal level. It would need to develop some form of legal protection to limit liabilities and provide a safe space for exploring complex issues in a real-world setting. One attribute of this safe space would be governance by an institutional review board, in which ethics and the interests of individuals could have a meaningful and relevant voice (similar to the review boards used in the biomedical and behavioural sciences). Institutions concerned about legal uncertainties, regulatory action or civil lawsuits would gain a richer means of assessing ethical concerns through these approaches.
