Code and Codification: the Possibility of Rights-Respecting Artificial Intelligence

An essay adapted from a speech presented at the International Colloquium - Artificial Intelligence Ethics: From Emerging Standards to Perspectives of Implementation (22-23 July 2022, Rabat, Morocco).

What is meant by “Code and Codification: the Possibility of Rights-Respecting Artificial Intelligence”? In choosing the title for this essay, I was reminded of a keynote address given by French philosopher Jacques Derrida at a colloquium where he was asked to speak on the topic of deconstruction and the possibility of justice. His address, titled “Force of Law: The ‘Mystical Foundation of Authority’”, amongst other things, noted the intimate relationship between law and the enforceability of law. Authority grants one the ability to enforce and apply law, if necessary, by force. As Derrida’s discussion unfolds, he unpacks this relationship between law and enforceability, moving on to assess what might be considered legitimate (and illegitimate) enforcement of law. He also considers what might be deemed just and unjust actions. To bring Derrida’s address to the artificial intelligence (AI) question, we note that as a technology, AI is unique in that it carries with it a set of algorithmic norms, represented as code, which I call the ‘law of the algorithm’. This law is increasingly applied to govern social situations and settings, and it brings with it its own form of enforceability.

Derrida’s discussion is thus relevant today, and to global debates on AI ethics, because the law of the algorithm is increasingly taking on a disciplinary function in the way it interacts with and influences our thoughts, habits and feelings. In this way, the law of the algorithm potentially conflicts with other legal norms, including the human rights and ethical standards already codified in statutes, legislation and regulation. Indeed, the global AI community faces a similar predicament in determining what constitutes legitimacy and justness when considering how the seemingly unstoppable force of AI’s ‘law’ can interact with (what should be) the immovable object that is the respect, protection, promotion and fulfilment of fundamental human rights and values. The question then becomes “how do we appropriately balance this unstoppable force with that immovable object?”.

The first point I wish to make is that the common comparison of AI to a double-edged sword, insofar as it can be both helpful and harmful, may not be entirely accurate. The more I come to understand what the law of the algorithm is, how it operates and the authority it carries with it, the more I ask myself whether it is not, in fact, more accurate to describe AI as inconsistent with a rights-respecting state of existence. That this problem exists is becoming clearer from my own research, which explores the way the law of the algorithm is impacting our human autonomy and independent decision-making abilities (a fundamental principle of contemporary human rights). This inconsistency reveals itself more clearly when we attempt to unpack algorithmic first principles in more detail.

In framing these first principles I will not attempt to give a comprehensive single definition of what AI is - increasingly, the term ‘AI’ has become much like the ethical concepts of truth and dignity in that it can mean different things to different people depending on the contexts in which it is applied, the agendas one seeks to advance, and how its meaning is explored or interrogated. For the purposes of this essay, at a very high level, it is safe to broadly frame AI as the capability that sits between input data, provided by its designers, developers and, indeed, most of us depending on AI systems today (as data subjects), and output data, being the results, learnings, predictions and expectations derived from the algorithmic method. This capability really is the true magic of the technology under enquiry.

It is this algorithmic capability that we as ethicists and human rights practitioners need to dedicate more resources to fully understanding - indeed, it is this ‘code’, for the most part already codified into the digital world, to which ethicists now seek to apply ethical guidelines and principles in the form of policy and legislation. As time progresses, the task of codifying specific code into regulatory instruments becomes increasingly difficult as more and more documents are generated that seek to describe what the ethical principles of AI should involve. As an example, a 2019 landscape study identified 84 such documents, with far more having been released since. Most of these documents find common ground in variations on requirements of fairness, accountability, integrity, responsibility and transparency in the design, development, deployment and governance of AI. According to the current AI ethics discourse, these principles then represent the ideals that make rights-respecting AI a possibility.

The caveat, however, is that we really need to be asking ourselves whether, fundamentally, AI is indeed capable of being consistent with these ideals. If we consider Bayes’ theorem, game theory or utility theory as some of the core theories of algorithmic design and development, we can note the following (a short illustrative sketch follows the list):

  1. Bayes’ theorem: is about probability and inference - determining the likelihood that a particular state of the world is true, given the available evidence; in applied decision-making, these probabilities are then weighed against expected costs and benefits, which are oftentimes framed in financial terms,
  2. game theory: is based on mechanisms of prices and products, and the gaining and spending of incomes – it is a theory of games of strategy in a social exchange economy, in which participants seek the optimum result, being maximum resulting satisfaction, and
  3. utility theory: in the AI space, utility theory pins rationality to the maximisation of expected utility.
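
To make these first principles concrete, below is a minimal, hypothetical sketch (in Python, with invented states, probabilities and payoffs) of an agent that updates its beliefs using Bayes’ theorem and then acts by maximising expected utility. Note what the sketch makes plain: the ‘optimum’ the agent computes is purely a matter of probability-weighted payoff, with no representation of rights or values anywhere in the calculation.

```python
# A toy illustration of the first principles discussed above: Bayesian
# inference followed by expected-utility maximisation. All states,
# probabilities and payoffs are invented for the example.

# Prior belief about two possible states of the world.
prior = {"state_A": 0.5, "state_B": 0.5}

# Likelihood of observing a given piece of evidence under each state.
likelihood = {"state_A": 0.8, "state_B": 0.2}

# Bayes' theorem: posterior is proportional to likelihood times prior.
unnormalised = {s: likelihood[s] * prior[s] for s in prior}
total = sum(unnormalised.values())
posterior = {s: p / total for s, p in unnormalised.items()}

# Utility of each available action in each state (e.g. a financial payoff).
utility = {
    "act":  {"state_A": 10.0, "state_B": -5.0},
    "wait": {"state_A": 1.0,  "state_B": 1.0},
}

# Expected-utility maximisation: the 'rational' agent simply picks the
# action with the highest probability-weighted payoff under its beliefs.
expected = {
    action: sum(posterior[s] * payoff for s, payoff in payoffs.items())
    for action, payoffs in utility.items()
}
best_action = max(expected, key=expected.get)

print(posterior)    # {'state_A': 0.8, 'state_B': 0.2}
print(expected)     # {'act': 7.0, 'wait': 1.0}
print(best_action)  # 'act'
```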

From this admittedly rudimentary understanding, we can determine that a philosophy of competition, prediction and performance is inherent in the algorithmic method. Indeed, I would argue that the first principle of AI is about harnessing the efficiencies and patterns its designers can extract, according to their own preferences for economic utility. Here I am not talking about intentionality – knowingly or not, the theories and principles designers rely on to develop their systems are founded on this point of view, and it is these theories and principles that we should not leave out of our ethical analysis of AI. It follows that efficiency is the tenet that needs to be weighed against non-derogable human rights. This statement is itself a contradiction: how can we even measure something against the non-derogable? Yet, this is the challenge we face in achieving a rights-respecting AI.

Creating space for this weighing of theories of efficiency against those of rights is exactly what the law of the algorithm accomplishes: in any social setting, AI can only function by reducing human values, norms and principles to sums of 1s and 0s, not to principles such as fairness and equality. Indeed, early AI pioneers such as Alan Turing, with the Turing test, and John von Neumann, in his book The Computer and the Brain, advanced philosophies that reduce biological functions to a series of 1s and 0s, thus allowing them to equate the biological with the computational. This kind of view introduces uncertainty about what life means and can also be used to ascribe human rights to human-like machines, potentially at our own (human) expense. We should never lose sight of the fact that humans do not work like AI; it is AI that we are modelling to try and work like us. In my opinion, this conflation downplays the importance of rights-respecting AI, and we need to be aware of rhetoric that ultimately seeks to muddy the waters in order to develop the certainty it needs to advance certain AI agendas.
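
To illustrate this reduction with a deliberately simple, hypothetical example: in much of the machine-learning literature, a principle such as fairness is operationalised as a single statistic, for instance the gap in favourable decision rates between two groups (often called the demographic parity gap). In the sketch below, with invented data, the principle of equality arrives inside the system as nothing more than a number to be computed and compared.

```python
# A hypothetical illustration of how a principle like 'fairness' is
# reduced to a number inside an algorithmic system. The decisions and
# group labels below are invented for the example.

predictions = [1, 0, 1, 1, 0, 1, 0, 0]  # a model's binary decisions
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

def positive_rate(group: str) -> float:
    """Share of favourable (1) decisions received by one group."""
    decisions = [p for p, g in zip(predictions, groups) if g == group]
    return sum(decisions) / len(decisions)

# One common fairness score, the 'demographic parity gap': equality of
# persons becomes equality of two rates, expressed as a single number.
gap = abs(positive_rate("a") - positive_rate("b"))
print(gap)  # 0.5: the principle, reduced to a statistic
```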

Indeed, one of the key reasons why our rights are at risk is that AI has an almost limitless digital capability to measure, predict (and direct) our actions from the patterns it is able to extrapolate. This distinguishes AI from the technologies that came before it in prior industrial revolutions, as previous technologies primarily looked to exploiting physical processes that harnessed and extracted from nature and the natural world (e.g. transport to move faster from A to B, mining to extract minerals from the ground, and agricultural advancements to feed growing populations). AI’s unique feature, and what sets it apart from other technologies, is that it is also capable of exploiting digital patterns (in the form of the data we generate in our day-to-day activities) that harness and extract from our own human nature (e.g. the way we think or the impulses we have (and the frequency thereof)). To extract from human nature means that AI is able to extract directly from the very rights we seek to protect (in the form of our privacy, our dignity, and our human autonomy). Even when read alongside principles for responsible use, if the fundamental theories the designers rely on do not first take into consideration rights-respecting principles, a human-centric approach is, in all scenarios, most likely an afterthought and not an imperative. I can quite confidently make this statement because AI principles and rights-respecting principles are rooted in inherently different concepts – the former in optimisation and efficiency, and the latter in the fundamental idea of what it means to be equal and dignified humans.

This point is relevant even when people are not the focus of AI. In Kantian philosophy, freedom is attained through the autonomy of the will – that is, the ability to independently choose between one decision or another. If we take this to be an accurate articulation of human (ethical) autonomy, then with the rise in machine autonomy and decision-making there is a critical corresponding decrease in human autonomy and decision-making, as we outsource this very function to AI systems. This is most obvious in surveillance technologies, social media and e-commerce, where design choices are deliberately manipulative, steering us down a path of least resistance and most susceptibility. But even, for example, in a medical diagnostics programme or an agtech (agricultural technology) solution, over time there will be less reason to doubt the decisions made by AI systems and more immediate dependence on the direction they provide. In Kantian terms, this represents the loss of freedom and a movement towards heteronomy. Indeed, in Homo Deus, Yuval Noah Harari speaks of the emergence of the data religion society and the possibility of a ‘useless’ class emerging – in my view, the ethical concern with this ‘uselessness’ is the gradual intellectual deadening of society through the way in which AI systems potentially limit our own autonomy and critical thinking abilities.¹

This brings me to my closing thought - cautious views of AI development are often dismissed as non-progressive, or as standing in the way of societal advancement. When this point is raised, I constantly find myself referring back to a passage by C. S. Lewis. In describing the belief in Progress, Lewis writes:

“any given generation is always [deemed] in all respects wiser than all previous generations. To those who believe thus, our ancestors are superseded and there seems nothing improbable in the claim that the whole world was wrong until the day before yesterday and now has suddenly become right. With such people I confess I cannot argue, for I do not share their basic assumption. Believers in progress rightly note that in the world of machines the new model supersedes the old; from this they falsely infer a similar kind of supercession in such things as virtue and wisdom.” (Lewis, 2003).

The development of AI cannot leave behind traditional rights and value systems simply because they may stand in its way or seem antiquated. This is especially important in an African context, where such development may perpetuate existing legacies of colonialism. Moreover, much can be learned from the values contained in traditional systems. For example, authors such as Sabelo Mhlambi have written on how AI governance could be grounded in the principle of Ubuntu², focusing on relational understandings of personhood as opposed to algorithmic rationality, which, in Mhlambi’s view, is based on a “traditional Western conception of personhood [that] has been used to support Euro-American claims to superiority” (Mhlambi, 2020). Indeed, there is much wisdom to be gained in pausing and discerning that the way AI development is advancing may not be the way it should be advancing.

Following the above, it must be stressed that even though AI applications are increasingly being applied to real-world problems, their core principles effectively remain unchanged. Indeed, Bayes’ theorem was first conceived in the 1700s and modern game theory was formalised in the 1940s - the literature points to how these basic principles of AI have remained unchanged for some 70 years or more. We should be mindful, therefore, that whilst existing AI models are continually being built upon and superseded by more complex models, the algorithmic principle behind them nevertheless remains the same. Indeed, even the most advanced AI applications (e.g. Google’s image recognition system, which has over 300,000 parameters) started their development journey with a single line of code and a formative method that may be inconsistent with the rights and freedoms that human rights practitioners have worked to codify at global, regional and national levels. If the foundational principles of AI are not seriously interrogated and made the focus of ethical scrutiny, then the possibility of rights-respecting AI will remain a pipe dream, and our attempts to bring fairness and accountability to the digital world will likely remain noise to the progression of AI.

_________________________________________

¹ If you are interested in exploring this perspective in more detail then, in addition to Harari’s work, I would suggest Frischmann and Selinger’s Re-Engineering Humanity, Kant’s Groundwork for the Metaphysics of Morals, and Malhotra’s Artificial Intelligence and the Future of Power: 5 Battlegrounds.

² Ubuntu is a humanist African philosophy which espouses humanity to others, or ‘I am because you are’ (Ifejika, 2006).