17/1/2020 / Issue #028 / Text: Jaromil and Niinja (Dyne.org)

The Algorithmic Sovereign

Introduction
Algorithms are growing in power and importance. While their logic is often invisible, their effects are manifest. This article looks at the power of algorithms and highlights some of the abuses and injustices such power often involves, particularly in cases where algorithms assess people’s reputations, make use of their private data or otherwise influence their lives.
An algorithm is an automated sequence of instructions that processes large amounts of data in order to mark situations as positive or negative according to predetermined conditions. As we enter the year 2020, many vital decisions are taken by algorithms rather than by humans. Even people with no access to computers are affected, while only a few know how algorithms work. A problem we see is that there is no space for public, social and political debate on how algorithmic rules are made and executed. The power of algorithms can only be negotiated by specialists; the most that average “users” can hope for is a ceremonial click on a user agreement. “Users” have no say in the construction of these algorithms, yet the algorithms govern their lives, and their mode of operation is often hidden behind trade secrets and closed-source software.
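
To make this idea concrete, here is a deliberately simplified sketch in Python; the conditions, thresholds and field names are invented for illustration and are not taken from any real system:

    # A toy "algorithm" in the sense described above: it reads data about a
    # person and marks the case as positive or negative according to fixed,
    # pre-determined conditions. All criteria here are invented.

    def assess(person: dict) -> str:
        """Mark a case as 'positive' (flagged) or 'negative' (not flagged)."""
        if person.get("late_payments", 0) > 2:
            return "positive"        # flagged for follow-up
        if person.get("address_changes", 0) > 3:
            return "positive"
        return "negative"            # not flagged

    print(assess({"late_payments": 3, "address_changes": 1}))  # -> positive

The point is not the code itself, but who gets to write, read and contest rules like these.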

False Positives
One important example of the use of algorithms is risk assessment by the police. So-called actuarial risk assessment instruments (ARAI) gather publicly available data from sources such as travel records and surveillance camera footage; some may also gather Internet activity from social networks, taking into account who you’re “friends” with and so on. In such risk assessment systems, a “false positive” is the name of one kind of error: for instance, the algorithm may mistakenly mark someone as a dangerous criminal.

It’s important to consider that for an algorithm, the probability of an error may be a small mathematical fraction that is negligible from a statistical point of view. However, when such an error occurs in the real world, it affects a person as a whole: it can be a matter of life and death. In the case of Jean Charles de Menezes, an algorithm making such an error led to his accidental execution.
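
As a purely illustrative calculation, with invented numbers: suppose a system screens one million people, of whom one in ten thousand is an actual threat, and wrongly flags innocent people only 0.1% of the time – an error rate that looks negligible on paper.

    # Illustrative only: all numbers are invented, not taken from any real system.
    population  = 1_000_000    # people screened
    prevalence  = 0.0001       # 1 in 10,000 is an actual threat
    false_pos   = 0.001        # 0.1% of innocent people are wrongly flagged

    real_threats = population * prevalence                    # 100 people
    false_alarms = (population - real_threats) * false_pos    # ~1,000 people

    print(f"wrongly flagged: {false_alarms:.0f}")
    # Roughly ten innocent people are flagged for every real threat.

Each of those “statistically negligible” errors is a real person who may be stopped, searched or worse.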

This story is about the errors of facial recognition systems: de Menezes was shot at Stockwell tube station in London on 22 July 2005 because he was a “false positive”. What is disturbing is not just that an innocent man, mistaken for a terrorist by the growing apparatus of surveillance cameras, was killed, but also how law enforcement used his image post-mortem in the mainstream media to claim the error was that of an algorithm, as if this would absolve the police of any wrongdoing.

Contemporary security research focuses on automatic pattern recognition and the prediction of human behaviour. Large-scale analysis can be exercised on the totality of data available about a person, at unprecedented detail. But algorithmic models fail to incorporate the risk of systemic failure, and they can hardly contemplate the ethical cost of killing an innocent human being. Campaigning only for the privacy of individuals is rather pointless at this stage: the real stake is how our societies are governed, how we rate people’s behaviour, what counts as deviance and what doesn’t, and how we act upon it.

System Risk Indication
Abuse of algorithmic power also happens closer to home. In June 2019, 1,263 households in the Rotterdam neighbourhoods of Bloemhof and Hillesluis were marked as potentially fraudulent. Their personal data – and those of their 25,000 neighbours – had been analysed by the Dutch government using SyRI (System Risk Indication), an algorithmic system designed to find ‘fraudulent citizens’. There are few limitations on the amounts of data the system is allowed to see or on the techniques it uses to sift through that data (data mining, pattern recognition and so on) – techniques whose use one would expect to be restricted to security agencies. Citizens are analysed and fitted into risk profiles: any local agency with access to municipal or civil services can combine these data in the SyRI system and create a profile of each citizen. These profiles are then compared to risk profiles, and people are marked as potentially fraudulent. What exactly makes a ‘fraudulent’ profile is unclear; but when someone’s mundane traits, such as their water usage or education level, resemble those of the fraudulent profile, they can be marked.
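
SyRI’s actual code and criteria are not public, so any concrete example can only be hypothetical. The sketch below merely illustrates the general mechanism described above – matching mundane household traits against a pre-defined risk profile – with field names and thresholds that are entirely invented:

    # Hypothetical sketch: SyRI's real code is not available for review.
    # This only shows the general mechanism of profile matching.

    RISK_PROFILE = {                  # invented example criteria
        "max_water_usage_m3": 40,     # unusually low water usage
        "max_education_level": 2,     # low formal education level
    }

    def mark_household(household: dict) -> bool:
        hits = 0
        if household["water_usage_m3"] < RISK_PROFILE["max_water_usage_m3"]:
            hits += 1
        if household["education_level"] <= RISK_PROFILE["max_education_level"]:
            hits += 1
        return hits >= 2              # marked as 'potentially fraudulent'

With closed code, residents cannot even see which of their traits caused them to be marked.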

Today we know that, despite infringing the privacy of tens of thousands of households and marking thousands of people as potentially fraudulent, SyRI has not yet helped to find a single case of actual fraud.

The problem is that SyRI is exclusively applied in poor neighbourhoods. Residents of Amsterdam Zuid or brokers at the Zuidas won’t have their personal lives analysed and labelled. This bias makes little sense given the tremendous amount of fraudulent money circulating in and out of rich neighbourhoods through corporate tax evasion and real-estate money laundering.

SyRI is not about investigating fraud within welfare systems. It is a screening system for households: a system to monitor and control the working class, redesigned to be both more efficient and more obscure, as its code is not available for peer review or forensic analysis.

After mounting pressure from protests by local residents, civil rights organisations and the labour union FNV, the Rotterdam project was cancelled in July 2019. However, SyRI itself is still considered a valid system by the government and can be applied in new projects.

What about the Sleepwet?
We can find similar abuses of algorithmic power at the national level as well. It has been a little over a year since the controversial ‘Sleepwet’ was implemented, even after a referendum in which a legal majority of voting citizens expressed concerns and the will to change it.

The Sleepwet now allows the intelligence agencies direct access to all information databases, exchange of data with foreign agencies, three-year storage of collected data, and more.

Shortly after the successful referendum, the responsible minister was found to have held back a critical report on data exchange with foreign agencies, which was one of the most controversial elements of the Sleepwet. Since the Sleepwet went into effect, the supervisory commission CTIVD has published several sharp criticisms of the security services, showing that to this day the security agencies are incapable of following even the lenient Sleepwet rules meant to protect people’s privacy. This means that while the current legislation on algorithmic power was rejected by a majority of voting citizens as offering too weak a protection, the Dutch government is willing to go even further and break its own law.

Beyond the empire of algorithmic profits
Public institutions should use their algorithmic power to facilitate the transparency of societal processes rather than to enforce secrecy and surveillance. We need to make algorithmic rules understandable, to facilitate participation and inclusion, and to empower people to appeal algorithmic decisions and to intervene in critical situations.

At Dyne.org we develop algorithms ourselves, and our call to action is addressed to fellow developers out there: we need to write code that is understandable to everyone. Good code is not just skillfully crafted or maximally efficient: the most valuable code is code that can be read by everyone, studied, changed and adapted. A common understanding of algorithms is necessary so that our lives are not left in the hands of a technical elite.
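
As a small, invented example of what we mean: the two functions below do exactly the same thing, but only one of them can be read, questioned and changed by someone who is not a specialist.

    # Two versions of the same rule: one opaque, one readable.

    def c(x, t=3):                    # opaque: what is x? what is t?
        return len([i for i in x if i > 100]) > t

    def flags_too_many_large_payments(payments_in_euro, threshold=3):
        """Return True when more than `threshold` payments exceed 100 euro."""
        large_payments = [p for p in payments_in_euro if p > 100]
        return len(large_payments) > threshold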


Illustration: “Control Pokemon” by Pawel Kuczynski (2016)