* * * * * * *
Complex and opaque, algorithms generate the predictions, recommendations and inferences for decision-making in a data-driven society. While easily dismissed as abstract empirical processes, algorithms are deeply human: they reflect the intentions and values of the individuals and institutions that design and deploy them. The ability of algorithms to amplify existing power asymmetries gives rise to debates about their influence over data-driven policy-making.
The world of “smart” environments, where cars, eyeglasses and just about everything else coalesce into the Internet of Things, creates a sea change in how data will be processed. Rather than being based on “interactive” human-machine computing, smart environments rely upon “proactive computing”. By design, these proactive environments are one step ahead of individuals: connected cars need to anticipate accidents before they happen, and flood-prone areas need to be evacuated before major storms hit.
The emphasis on proactive computing will change the role of human intervention from a governance perspective. Because humans lack a full understanding of how complex systems work, their ability to understand, make decisions and adapt can be too slow, incomplete and unreliable. In this brave new world, building trust from the “principles up” will be essential, and will require new forms of governance that are open, inclusive, self-healing and generative.
From a community and societal perspective, as civil “regulation-by-algorithm” begins to scale, incumbent interests and power asymmetries will play an increasing role in determining who gets
access to an array of commercial and governmental services. There is therefore a need to ensure that the algorithms driving proactive and anticipatory decisions are lawful, fair and intelligibly explainable. Meaningful responses must be given “when individuals are singled out to receive differentiated treatment by an automated recommendation system”.
One emerging set of concerns is the institutional ability “to discover and exploit the limits of an individual’s ability to pursue their own self-interest.” Given that a majority of future consumer interactions will be mediated by devices and commercially oriented communications platforms, data-centric institutions will have both the means and the incentives to trigger “predictable irrationality”.
With a vast trail of “digital breadcrumbs” available for companies to mine and tailor into highly personalized experiences, concerns are growing about how individuals could be profiled and targeted at moments of key vulnerability (decision fatigue, information overload, etc.), limiting their ability to act with agency and in their own self-interest. As the lives of individuals become increasingly mediated by algorithms, a richer understanding is needed of how people adapt their behaviors to empower themselves and gain more control over how profiles and algorithms shape their lives in areas such as credit scores, retail experiences, differential pricing, reputational currencies and insurance rates.
One of the most strategic insights on strengthening trust is the idea of sharing the intended consequences of data usage with individuals. For example, the European Commission’s 2012 draft Data Protection Regulation (Article 20) calls for “the obligation for data controllers to provide information about the envisaged effects of such processing on the data subject”.
incentives shaping outcomes at the individual and social level. They would need to develop some form of legal protection to limit liabilities and provide a safe space to explore complex issues in a
real-world setting. One attribute of such a safe space would be governance by an institutional review board in which ethics and the interests of individuals have a meaningful and relevant voice (similar to the boards used in the biomedical and behavioural sciences). Institutions concerned about legal uncertainty, regulatory action or civil lawsuits would thereby gain a richer means of assessing ethical concerns.