Netherlands

Systeem Risico Indicatie (SyRI)

Original transcript of the case in English: https://uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBDHA:2020:1878.

Summary:

On 13 February 2020, the District Court of The Hague halted the use of Systeem Risico Indicatie (SyRI), a machine-learning risk-scoring system which predicted the risk of tax non-compliance associated with individual welfare recipients in the area of social security and so-called ‘income-dependent schemes’.

The Court found that the SUWI wet, the legislation regulating the use of SyRI, did not offer welfare recipients sufficient insight into the risk indicators used in the model or into the functioning of the machine-learning model itself. On that basis, the Court ruled that the SUWI wet was contrary to Art. 8(2) of the European Convention on Human Rights (ECHR).

Facts of the case:

SyRI was the first case in which the use of machine-learning algorithms by the tax administration of an EU Member State was scrutinized before a court.

Several civil society interest groups joined the litigation, including the Dutch Section of the International Commission of Jurists (Nederlands Juristen Comité voor de Mensenrechten), the National Client Participation Council (Landelijke Cliëntenraad), the Privacy First Foundation (Stichting Privacy First), two private individuals and the Netherlands Trade Union Confederation (Federatie Nederlandse Vakbeweging).

These associations claimed that Articles 64 and 65 of the Wet structuur uitvoeringsorganisatie werk en inkomen (SUWI wet), the law implementing the organization of social security and income-dependent schemes, were in breach of a number of provisions of the ECHR, most notably the right to privacy and the right to non-discrimination.

Article 64 of the SUWI wet authorized the ‘linkage of records’ held by any Dutch administrative or governmental body. In other words, it authorized the Dutch tax administration to process the data of any administrative or governmental body, whether local, regional or national, to detect and prevent social security non-compliance.

To process such a voluminous bulk of data, Article 65 of the SUWI wet provided for the use of SyRI, a machine-learning classifier meant to detect signals of non-compliance and select taxpayers for audit. The algorithm processes previously labelled cases of non-compliance to derive risk indicators and develop a scoring grid used to select taxpayers for audit. As the claimants argued, Article 64 gave a literal carte blanche to the Dutch tax administration (Belastingdienst) and offered no protection against potential abuses.

In fact, the risk of ‘linkage of records’ materialized in the toeslagenaffaire, as the erroneous output of a risk-scoring algorithm was shared with other public and private bodies. In turn, taxpayers wrongfully labelled as fraudsters were also dropped by their banks, targeted by the police or even lost custody of their children.

Moreover, the claimants in the SyRI case argued that Art. 65 of the SUWI wet, which authorized the use of the risk-scoring system, did not offer any protection against the risks of bias and discrimination. In particular, the claimants outlined how SyRI was trained on data from taxpayers in so-called ‘problematic areas’ of Rotterdam, where the proportion of foreign residents was much higher than in the general population of taxpayers. Such a biased sampling method was therefore very likely to generate biased output, disproportionately prejudicial to foreign residents.

In hindsight, the claimants’ concerns were more than justified, seeing how this exact risk materialized in the toeslagenaffaire.

Ruling:

The Court sided with the claimants and found that, in their present form, Articles 64 and 65 of the SUWI wet did not offer sufficient safeguards to comply with the right to a private life. The Court started by emphasizing that the fight against fraud is a quintessential function of the State, which could therefore warrant the use of a very wide range of data.

However, the Court acknowledged that machine-learning algorithms bear an important risk of discrimination that can only be neutralized with specific safeguards. The Court particularly emphasized the need to provide citizens with verifiable insights into the functioning of the algorithms and the risk factors used by the model – so-called ‘transparency in the interest of verifiability’. The Court found that, in its present form, Article 65 of the SUWI wet contained no such description of the model and offered no insight into the model used, and thereby violated Article 8(2) of the ECHR.

Key takeaways:

As the very first case on the use of AI by tax administrations, the ruling in SyRI provides two important lessons for the legality of fiscal algorithmic governance in the EU:

  1. The principle of legality and fundamental rights require that the use of AI, particularly risk-scoring algorithms, by tax administrations be authorized by a formal legal basis that provides an accurate description of the AI system and its features.
  2. More importantly, the SyRI case highlights that this formal legal basis must meet a high threshold of quality, providing taxpayers with ‘verifiable insights’ into the functioning of the algorithm and the risk indicators used in the model. In practice, this is no easy feat and runs against the current status quo on the transparency of AI systems used by tax administrations. The doctrine of ‘transparency in the interest of verifiability’ is a reminder that transparency is the rule and secrecy the exception, contrary to what may presently be observed.

 
