Number 41 (December 2018)

Artificial intelligence and algorithmic transparency: "it's complicated"

 

[Catalan version][Spanish version]


Ramon Sangüesa

DESISLAB, Social Innovation Laboratory
ELISAVA School of Design and Engineering

 
 
 

 

Today, we are subjects of a society that operates to a considerable degree as an information economy in which algorithms transform raw (i.e., worthless) data into processed (i.e., useful) content. Indeed, this economy is defined by its implementation of a surveillance society through algorithms and networks (Zuboff, 2016). In turn, this notion of automated surveillance has a scientific and technical background that can be traced to the logic of cybernetic thinking, an approach that sought to develop the science of controlling machines and living things (Wiener, 1961).

Since the expansion of cybernetic thinking and its successive transformations (Tiqqun, 2018), the digitisation of almost everything, and the acceleration of global flows of information and of the power that derives from them, we have reached a point at which an ever greater and more complex information-processing capacity can be deployed. In this acceleration we have passed through interconnection via the Internet, the fusion of global digital content on the Web, the accumulation and cross-referencing of all kinds of data in Big Data, new connections among data-capture devices in the Internet of Things and, now, the autonomisation of data analysis and of automatic decision-making based on the models extracted from that analysis. In other words, we have reached the stage at which numerous artificial intelligence (AI) systems are interconnected.

Fewer than sixty years ago, competition between nations was framed in terms of their capacity to create scientific and technological knowledge. Now, global powers with hegemonic aspirations draw up plans to excel and advance in artificial intelligence or, put another way, in the coding and automation of knowledge through cognitive technologies, of which AI is the most representative. It is no surprise that those who aspire to become major powers have produced national strategic AI plans (Chinese State Council, 2017; NSTC, 2016; Villani, 2018; Hogarth, 2018).

There are two incentives for gathering vast quantities of data of all kinds: economic and political. Power is associated with data and with data processing. Hence, data and algorithms, with the help of artificial intelligence and specifically of machine learning, have moved from an initially analytical use of data, that is, using algorithms to understand what the data say, to prediction or anticipation, and finally to clearly prescriptive action, that is, guiding the behaviour of millions of people through what predictive and classification models have found out about them and their context. In fact, all these perspectives now coexist and are mutually dependent. Together, they make possible a form of economics and politics based on controlling demand and shaping individual behaviour on a planetary scale (Turkle, 2006).

This is clear, for example, in recommender systems that might present us with a new service, suggest certain cultural offerings, or even nudge us towards supporting certain political options. Search engines have a similar function, based on the personalisation of search results. We have shifted from broad access to certain cultural contents to the configuration and stabilisation of hegemonic cultural identities (Pasquale, 2015a). The fact that over 80% of searches are carried out through US portals cannot fail to influence habits of cultural consumption and the configuration of cultural identities, for example. Nor is this limited to culture: it applies to many other areas of activity, as seen in the recent political manipulation scandal involving Facebook and Cambridge Analytica (Grassegger; Krogerus, 2018). Equally, it is increasingly clear that the combination of data and algorithms induces and propagates mechanisms of bias, discrimination, and unfair, unequal treatment (Eubanks, 2018).

Faced with this state of affairs, there is some consensus that alternative scenarios should be created. There are variations within this consensus, but in one way or another all of them seek to restore the agency of subjects in the face of abuses of power and of the asymmetry in the capacity to capture, process and interpret data and to make decisions based on them. The common ground across these variations is the demand for transparency.

Transparency of data and algorithms, known as algorithmic transparency, involves the capacity to determine which data are used, how they are used, who uses them and why, and how the decisions that affect the living environment of those demanding transparency are based on these data. If, for example, a person's application for something has been declined in a given process (say, a grant or loan application), they should know which data were used to reach that decision and also, as a separate issue, how it was decided that their request would be declined. Equally, this information should be available if a recognition system has classified someone as a suspected terrorist. Today, an informed public sphere must be made up of agents who can determine the subtext of the algorithmic universe in which citizens live as economic and political subjects.
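To make this demand more concrete, the following is a minimal sketch, in Python, of what a machine-readable "decision record" accompanying a declined loan application could look like, so that the affected person can see which data were used, by whom, for what purpose and with what weight. All field names and values are hypothetical illustrations, not an existing standard or API.

# Minimal, hypothetical sketch of a machine-readable "decision record".
# Every field name and value below is illustrative, not an existing standard.
from dataclasses import dataclass, field, asdict
from typing import Dict, List
import json


@dataclass
class DecisionRecord:
    subject_id: str          # who the decision is about
    decision: str            # e.g. "loan_declined"
    data_controller: str     # who used the data
    purpose: str             # why the data were used
    model_version: str       # which model produced the decision
    data_sources: List[str] = field(default_factory=list)         # which data were used
    main_factors: Dict[str, float] = field(default_factory=dict)  # contribution of each input


record = DecisionRecord(
    subject_id="applicant-0042",
    decision="loan_declined",
    data_controller="Example Bank, credit-risk department",
    purpose="consumer credit scoring",
    model_version="risk-model-2018-11",
    data_sources=["credit_history", "declared_income", "postal_code"],
    main_factors={"credit_history": -0.62, "declared_income": -0.21, "postal_code": -0.05},
)

# Publishing such a record answers the "which data" and "who and why" questions;
# explaining *how* the model weighed them is the separate, harder problem
# discussed below.
print(json.dumps(asdict(record), indent=2))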

This first level of transparency, addressed in very recent, commendable initiatives such as the European data protection regulations, is only a first, tentative step. It is usually argued that transparency should include access not only to the data but also to the code of the algorithms used to process them. However, although this is essential, it is still not quite enough. First, it ignores the fact that this kind of access is no guarantee of understanding. Code accessed in this way may be incomprehensible to non-experts and experts in data, algorithms and artificial intelligence alike. When we access the code of many algorithms that have been "opened", they remain real "black boxes" (Pasquale, 2015b). We should bear in mind that 95% of normal software code is never executed. One reason is that previous programmers wrote it for purposes that are not always documented in the code itself; as a precautionary measure, it is better not to risk unknown effects by trying to modify it. Professional routines in a highly competitive, fast-moving environment reinforce this practice, which increases the incomprehensibility of the code. Evidently, in some cases there may also be a malicious aim to create algorithmic code with clearly adverse effects. What is clear is that, whether the intention behind an algorithm is good or bad, understanding it remains a problem.

In fact, even if we manage to understand the code, it can still be hard to determine why it has behaved in a certain way. This is particularly notable in the case of certain machine learning algorithms based on neural networks (what is known as deep learning). If we access the code of this type of system, we may find that even the experts who programmed it cannot explain aspects of its behaviour. And even if we managed to understand it, several other challenges would remain. One would be to communicate clearly and intelligibly, to whoever needs to know, how what happened came about, that is, to explain why the system behaved in a certain way. Here there is a major problem of language and translation. Experts can inform each other of abnormal or appropriate behaviour resulting from the training of a machine learning system, but how can its application in a specific case be communicated to the general public? One solution would be systems that could interpret the behaviour of algorithms, in general or in a specific case, and provide explanations in a language appropriate to the person or group affected. This is a broad field of work that is full of difficulties (DARPA, 2018) and always skirts the infinite regress of explanations from one language to the next.
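As an illustration of the kind of tool this line of work pursues, the following is a minimal sketch, in Python, of one common idea: treat the trained model as a black box and measure how its output for a specific person shifts when each input is perturbed. The data, feature names and model are invented for illustration; this is a generic local-sensitivity probe, not any particular explainability product.

# Minimal sketch of a local-sensitivity "explanation" of a black-box model.
# The dataset, feature names and applicant values are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
feature_names = ["income", "debt", "age"]

# Synthetic training data: the label depends mostly on income and debt.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

black_box = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

# One applicant whose request was declined (low approval score).
applicant = np.array([[-0.8, 1.2, 0.3]])
base_score = black_box.predict_proba(applicant)[0, 1]

# Local sensitivity: how much does the approval score move if each feature is nudged?
for i, name in enumerate(feature_names):
    perturbed = applicant.copy()
    perturbed[0, i] += 0.5
    delta = black_box.predict_proba(perturbed)[0, 1] - base_score
    print(f"{name:>8}: shifting it changes the approval score by {delta:+.3f}")

Even this kind of output only tells the affected person which inputs moved the score; turning it into language they can understand and act on is precisely the translation problem described above.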

If we managed to overcome this difficulty, there would still be another obstacle. It is one thing to describe the causal mechanism that led to a decision, but quite another to justify it; these are two different levels. Would we accept the justification that "the algorithm has not given us a loan" because granting it would increase the bank's risk? Would we accept that "the algorithm has not given you a grant" because it favours minorities who are systematically excluded from the distribution of funds? These justifications appeal to other kinds of reasoning, ethical, moral or legal rather than technical, and aspects such as fairness come into play. This is difficult to translate back into a technical implementation. Initiatives such as the DTL (2018), FATML (2018) and DAT (2016) aim to find technical and methodological translations of the construction and training of artificial intelligence algorithms that can deliver transparency, equity, justice and traceability. This is not always achievable.
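By way of illustration, the following is a minimal sketch, in Python, of one of the many formal fairness criteria discussed in that community, demographic parity, which simply compares the rate of favourable decisions across groups. The figures are invented, and satisfying such a metric is a technical check, not by itself an ethical or legal justification.

# Minimal sketch of demographic parity, one of many formal fairness criteria.
# All numbers are invented for illustration.
import numpy as np

# 1 = application approved, 0 = declined; one entry per applicant.
decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1])
# Protected attribute, e.g. membership of a systematically excluded minority.
group = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

rate_a = decisions[group == "A"].mean()
rate_b = decisions[group == "B"].mean()

print(f"Approval rate, group A: {rate_a:.2f}")
print(f"Approval rate, group B: {rate_b:.2f}")
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2f}")
print(f"Disparate impact ratio: {min(rate_a, rate_b) / max(rate_a, rate_b):.2f}")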

Frequently, many of these initiatives end up failing because the solution lies not only in the data and the algorithms but also in the surrounding context of social and political practices. It says a lot about the relevance of the algorithmic metaphor in this era that the main centres of development of these technologies have taken so long to develop an awareness that goes beyond their technical work (Boyd; Crawford, 2011) and to propose more complex, interdisciplinary frameworks in which the technical solution would end up implementing ethical and legal frameworks designed to mitigate the risk. Perhaps the spread of ethics committees on artificial intelligence replicates proposals made to address other risk technologies (Beck, 2002), as occurred in biology with the corresponding development of bioethics. The fact that professional associations in the field of artificial intelligence are describing new design practices that incorporate ethical concepts also shows a certain amount of progress compared with the state of affairs described above (IEEE, 2018).

What seems clear is that we are still at a relatively early stage in the consideration of artificial intelligence as another risk technology, which it is. According to Ulrich Beck's definition, it is a type of technology whose impact is systemic and whose risks, like its growth, are distributed unevenly across the population. Clearly, the groups most affected to date by decisions guided by artificial intelligence systems are precisely the most vulnerable (Eubanks, 2018).

The demand for transparency in this kind of system and technology should certainly begin to be expressed in public debates. In fact, this is already happening. However, as mentioned above, there is still a long way to go before we can have an informed, wide-ranging debate that involves all the affected population segments and overcomes the considerable problems of comprehension, translation and communication that we currently face.

 

References

Beck, U. (2002). La sociedad del riesgo: hacia una nueva modernidad. Barcelona: Paidós Ibérica.

Boyd, D.; Crawford, K. (2011). "Six Provocations for Big Data". A Decade in Internet Time: Symposium on the Dynamics of the Internet and Society, 21 September 2011. Oxford Internet Institute.

Chinese State Council (2017). State Council Notice on the Issuance of the Next Generation Artificial Intelligence Development Plan. Translation to English by New America Foundation. Retrieved from <https://www.newamerica.org/cybersecurity-initiative/blog/chinas-plan-lead-ai-purpose-prospects-and-problems/> on 03/03/2018.

DARPA (2018). Explainable Artificial Intelligence. Retrieved from <https://www.darpa.mil/program/explainable-artificial-intelligence> on 04/06/2018.

DAT (2016). Workshop on Data and Algorithmic Transparency, 19 November 2016. New York, USA. Retrieved from <http://datworkshop.org> on 12/05/2018.

DTL (2018). Data Transparency Lab. Retrieved from <http://datatransparencylab.org> on 20/05/2018.

Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin's Press.

FATML (2018). Fairness, Accountability and Transparency in Machine Learning. Retrieved from <www.fatml.org> on 20/05/2018.

Grassegger, H.; Krogerus, M. (2018). "Ich habe nur gezeigt, dass es die Bombe gibt". Interview with Michal Kosinski. Retrieved from <https://www.dasmagazin.ch/2016/12/03/ich-habe-nur-gezeigt-dass-es-die-bombe-gibt/> on 04/06/2018.

Hogarth, I. (2018). AI Nationalism. Retrieved from <https://www.ianhogarth.com/blog/2018/6/13/ai-nationalism> on 20/06/2018.

IEEE (2018). IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, (EADv2).

NSTC (2016). Networking and Information Technology Research and Development Subcommittee. The National Artificial Intelligence Research and Development Strategic Plan. National Science and Technology Council.

Pasquale, F. (2015a). "The Algorithmic Self". The Hedgehog Review. Vol. 17 No. 1 (Spring 2015).

Pasquale, F. (2015b). The Black Box Society. The Secret Algorithms That Control Money and Information. Cambridge: Harvard University Press.

Bond, R. M. et al. (2012). "A 61-Million-Person Experiment in Social Influence and Political Mobilization". Nature, 489 (7415), p. 295–298.

Tiqqun (2018). La hipótesis cibernética. Retrieved from <https://tiqqunim.blogspot.com/2013/01/la-hipotesis-cibernetica.html> on 07/06/2018.

Turkle, S. (2006). "Artificial Intelligence at Fifty: From Building Intelligence to Nurturing Sociabilities." Dartmouth Artificial Intelligence Conference, Hanover, NH, USA, 15 July 2006. Retrieved from <http://www.mit.edu/~sturkle/ai@50.html> on 04/06/2018.

Villani, C. (2018). For a Meaningful Artificial Intelligence. Towards a French and European strategy. Retrieved from <https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf> on 04/06/2018.

Wiener, N. (1961). Cybernetics: Or Control and Communication in the Animal and the Machine. 2nd ed. Cambridge, MA: MIT Press.

Zuboff, S. (2016). "The Secrets of Surveillance Capitalism. Google as Fortune Teller". Frankfurter Allgemeine Zeitung, 5 March 2016. Retrieved from <http://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshana-zuboff-secrets-of-surveillance-capitalism-14103616.html> on 08/04/2018.

 

 

