Dutch childcare benefits scandal sends warning sign to Europe
Via AI: Decoded from Politico | A devastating scandal: Since 2019, the Dutch government has been embroiled in a scandal after the country’s tax authorities used a self-learning algorithm to create risk profiles in an effort to spot fraud among people applying for childcare benefits.
In what the Dutch have dubbed the “toeslagenaffaire,” authorities penalized families over a mere suspicion of fraud based on the system’s risk indicators. Tens of thousands of families were pushed into poverty by exorbitant debts to the tax agency. Some victims took their own lives. More than a thousand children were taken into foster care as a result of the scandal.
As governments around the world are turning to algorithms and AI to automate their systems, the Dutch scandal shows just how utterly devastating automated systems can be without the right safeguards. The European Union, which likes to think of itself as the world’s leading tech regulator, is working on a bill that aims to curb algorithmic harms. But critics say the bill tragically misses the mark and would fail to protect citizens from such cases.
‘This must be a mistake’: Chermaine Leysner’s life changed in 2012, when she received a letter from the tax authority demanding she pay back the childcare allowance she had received since 2008. Leysner, then a student studying social work, had three children under the age of 6. The tax bill was over €100,000. “I thought, ‘don’t worry, this is a big mistake.’ But it wasn’t a mistake. It was the start of something big,” she said.
The ordeal took nine years of Leysner’s life. The stress caused by the tax bill and her mother’s cancer diagnosis drove Leysner into depression and burnout. She ended up separating from her children’s father. “I was working like crazy so I could still do something for my children like give them some nice things to eat or buy candy. But I had times that my little boy had to go to school with a hole in his shoe,” Leysner said.
What happened: The Dutch system — which was launched in 2013 — was used to create risk profiles of people in an effort to weed out benefits fraud at an early stage. The criteria for the risk profiles were developed by the tax authority, Trouw reports. Having dual nationality was a big risk indicator, as was a low income. The authorities then started claiming back benefits from families who were flagged by the system, without proof that they had committed any fraud.
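The details of the Dutch system have not been made public, but the mechanism described above — weighted risk indicators that trigger enforcement once a threshold is crossed — can be sketched in a few lines. This is a purely hypothetical illustration: the feature names, weights, and threshold are invented, not taken from the actual system.

```python
# Hypothetical sketch of indicator-based risk scoring. All features,
# weights, and the threshold are invented for illustration; they do not
# reflect the actual (non-public) Dutch tax authority system.

def risk_score(applicant: dict) -> float:
    """Sum weighted risk indicators; a higher score means 'more suspicious'."""
    score = 0.0
    if applicant.get("dual_nationality"):    # acts as a proxy for ethnicity
        score += 0.4
    if applicant.get("income", 0) < 20_000:  # low income as an indicator
        score += 0.3
    return score

def flag_for_review(applicant: dict, threshold: float = 0.5) -> bool:
    # In the scandal, crossing such a threshold led to benefits being
    # clawed back without any proof of fraud.
    return risk_score(applicant) >= threshold

# Two applicants identical in every respect except nationality
# receive different outcomes:
a = {"dual_nationality": False, "income": 18_000}
b = {"dual_nationality": True, "income": 18_000}
```

The point of the sketch is that when a protected attribute (or a proxy for one) carries weight in the score, two otherwise identical people are treated differently — exactly the discriminatory pattern the Dutch data protection agency later fined the tax administration for.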
Why Leysner ended up in the situation is unclear. One reason could be that she had twins, which meant she needed more support from the government. Leysner, who was born in the Netherlands, also has Surinamese roots.
Against the GDPR: In December 2021, the Dutch data protection agency fined the Dutch tax administration €2.75 million for the “unlawful, discriminatory and therefore improper manner” in which the tax authority processed data on the dual nationality of childcare benefit applicants.
That’s not all folks: The Dutch tax authorities have been hit with another scandal. In 2020, Dutch papers Trouw and RTL Nieuws revealed that the tax authorities had kept secret blacklists on people for two decades, tracking both credible and unsubstantiated “signals” of potential fraud. Citizens had no way of finding out why they were on the list, or of defending themselves. A spokesperson for the Dutch tax authority said an investigation into the blacklist would be ready in April.
Singling out dual nationalities: An audit showed that the tax authorities concentrated on people with “a non-Western appearance,” with Turkish or Moroccan nationality drawing particular scrutiny. Being on the blacklist also led to a higher risk score in the childcare benefits system.
The government vs. the people: A parliamentary report into the childcare benefits scandal found several grave shortcomings, including institutional bias and authorities hiding information or misleading the Parliament about the facts. Once the full scale of the scandal came to light, the Dutch government resigned, only to regroup 225 days later.
No checks or balances: “There was a total lack of checks and balances within every organization of making sure people realize what was going on,” said Pieter Omtzigt, an independent politician who played a pivotal role in uncovering the scandal and grilling the tax authorities. “What is really worrying me is that I’m not sure that we’ve taken even vaguely enough preventive measures to strengthen our institutions to handle the next derailment,” he continued.
A new algorithm regulator: The new government has pledged to create a new algorithm regulator under the country’s data protection authority. The new Dutch Digital Minister Alexandra van Huffelen — who was previously the finance minister in charge of the tax authority — told POLITICO that the authority’s role will be “to overlook the creation of algorithms and AI, but also how it plays out when it’s there, how it’s treated, make sure that is human-centered, and that it does apply to all the regulations that are in use.” The regulator will scrutinize algorithms in both the public and private sector.
Van Huffelen stressed the need to make sure humans are always in the loop. “What I find very important is to make sure that decisions, governmental decisions based on AI are also always treated afterwards by a human person,” she said.
Bringing the public sector into the 21st century: Europe’s top digital official Margrethe Vestager said the Dutch scandal is exactly what every government should be scared of. “We have huge public sectors in Europe. There are so many different services where decision-making supported by AI could be really useful, if you trust it,” Vestager told the European Parliament last week. The EU’s new AI Act is aimed at creating that trust, she argued, “so that this big public sector market will be open also for artificial intelligence.”
What the AI Act does: The European Commission’s proposal for the AI Act restricts the use of so-called high-risk AI systems, and bans certain “unacceptable” uses. Companies providing high-risk AI systems have to meet certain EU requirements. The AI Act also creates a public EU register of high-risk AI systems in an effort to improve transparency and help with enforcement.
That’s not good enough, argues Renske Leijten, a Dutch member of parliament belonging to the Socialist party and another key politician who helped uncover the true scale of the scandal. The AI Act, she says, should also target users of high-risk AI systems in both the private and public sector. In the AI Act “we see that there are more guarantees for your rights when companies and private enterprises are working with AI. But the important thing we must learn out of the childcare benefit scandal is that this was not an enterprise or private sector… This was the government,” she said.
The European Parliament makes some noise: As it is now, the AI Act will not protect citizens from similar dangers, said Kim van Sparrentak (Greens), who is a Dutch member of the European Parliament’s negotiating team in the internal market committee. Van Sparrentak is pushing for the AI Act to have fundamental rights impact assessments that will also be published in the EU’s AI register. The European Parliament is also proposing making the users of high-risk AI systems more responsible, including in the public sector.
Bans a solution? “Fraud prediction and predictive policing based on profiling should just be banned. Because we have seen only very bad outcomes and not a single person can be determined based on some of their data,” Van Sparrentak said. In a report detailing how the Dutch government used ethnic profiling in the childcare benefits scandal, Amnesty International calls on governments to ban the “use of data on nationality and ethnicity when risk-scoring for law enforcement purposes in the search of potential crime or fraud suspects.”
It’s critical that Europe gets this right. The bloc wants to boost its AI sector while also protecting its citizens and “values.” Nothing says “bad look” like when one of its most tech-savvy countries screws up this badly.
Clothilde Goujard contributed reporting.