Strengthening legal protection against discrimination by algorithms and artificial intelligence

The International Journal of Human Rights, 2020

Abstract

Algorithmic decision-making and other types of artificial intelligence (AI) can be used to predict who will commit crime, who will be a good employee, who will default on a loan, etc. However, algorithmic decision-making can also threaten human rights, such as the right to non-discrimination. The paper evaluates current legal protection in Europe against discriminatory algorithmic decisions. The paper shows that non-discrimination law, in particular through the concept of indirect discrimination, prohibits many types of algorithmic discrimination. Data protection law could also help to defend people against discrimination. Proper enforcement of non-discrimination law and data protection law could help to protect people. However, the paper shows that both legal instruments have severe weaknesses when applied to artificial intelligence. The paper suggests how enforcement of current rules can be improved. The paper also explores whether additional rules are needed. The paper argues for sector-specific – rather than general – rules, and outlines an approach to regulate algorithmic decision-making.

Algorithms, Artificial intelligence, Discrimination, GDPR, Privacy

Bibtex

@Article{Borgesius2020,
  title    = {Strengthening legal protection against discrimination by algorithms and artificial intelligence},
  author   = {Zuiderveen Borgesius, F.},
  url      = {https://doi-org.proxy.uba.uva.nl:2443/10.1080/13642987.2020.1743976},
  year     = {2020},
  date     = {2020-03-29},
  journal  = {The International Journal of Human Rights},
  abstract = {Algorithmic decision-making and other types of artificial intelligence (AI) can be used to predict who will commit crime, who will be a good employee, who will default on a loan, etc. However, algorithmic decision-making can also threaten human rights, such as the right to non-discrimination. The paper evaluates current legal protection in Europe against discriminatory algorithmic decisions. The paper shows that non-discrimination law, in particular through the concept of indirect discrimination, prohibits many types of algorithmic discrimination. Data protection law could also help to defend people against discrimination. Proper enforcement of non-discrimination law and data protection law could help to protect people. However, the paper shows that both legal instruments have severe weaknesses when applied to artificial intelligence. The paper suggests how enforcement of current rules can be improved. The paper also explores whether additional rules are needed. The paper argues for sector-specific – rather than general – rules, and outlines an approach to regulate algorithmic decision-making.},
  keywords = {algoritmes, Artificial intelligence, discriminatie, frontpage, GDPR, Privacy},
}

Annotatie bij Hoge Raad 5 november 2019 en Hoge Raad 3 december 2019

Nederlandse Jurisprudentie, no. 10, pp. 1368-1369, 2020

Annotations, Discrimination, Criminal law, Freedom of expression

Bibtex

@Article{Dommering2020d,
  title    = {Annotatie bij Hoge Raad 5 november 2019 en Hoge Raad 3 december 2019},
  author   = {Dommering, E.},
  url      = {https://www.ivir.nl/publicaties/download/Annotatie_NJ_20120_72.pdf},
  year     = {2020},
  date     = {2020-03-03},
  journal  = {Nederlandse Jurisprudentie},
  number   = {10},
  pages    = {1368-1369},
  keywords = {Annotaties, discriminatie, frontpage, Strafrecht, Vrijheid van meningsuiting},
}

Discrimination, artificial intelligence, and algorithmic decision-making

Council of Europe, vol. 2019, 2019

Abstract

This report, written for the Anti-discrimination department of the Council of Europe, concerns discrimination caused by algorithmic decision-making and other types of artificial intelligence (AI). AI advances important goals, such as efficiency, health and economic growth but it can also have discriminatory effects, for instance when AI systems learn from biased human decisions. In the public and the private sector, organisations can take AI-driven decisions with far-reaching effects for people. Public sector bodies can use AI for predictive policing for example, or for making decisions on eligibility for pension payments, housing assistance or unemployment benefits. In the private sector, AI can be used to select job applicants, and banks can use AI to decide whether to grant individual consumers credit and set interest rates for them. Moreover, many small decisions, taken together, can have large effects. By way of illustration, AI-driven price discrimination could lead to certain groups in society consistently paying more. The most relevant legal tools to mitigate the risks of AI-driven discrimination are non-discrimination law and data protection law. If effectively enforced, both these legal tools could help to fight illegal discrimination. Council of Europe member States, human rights monitoring bodies, such as the European Commission against Racism and Intolerance, and Equality Bodies should aim for better enforcement of current non-discrimination norms. But AI also opens the way for new types of unfair differentiation (some might say discrimination) that escape current laws. Most non-discrimination statutes apply only to discrimination on the basis of protected characteristics, such as skin colour. Such statutes do not apply if an AI system invents new classes, which do not correlate with protected characteristics, to differentiate between people. Such differentiation could still be unfair, however, for instance when it reinforces social inequality.
We probably need additional regulation to protect fairness and human rights in the area of AI. But regulating AI in general is not the right approach, as the use of AI systems is too varied for one set of rules. In different sectors, different values are at stake, and different problems arise. Therefore, sector-specific rules should be considered. More research and debate are needed.

AI, Discrimination, Artificial intelligence, Human rights

Bibtex

@Report{Borgesius2019,
  title       = {Discrimination, artificial intelligence, and algorithmic decision-making},
  author      = {Zuiderveen Borgesius, F.},
  url         = {https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73},
  year        = {2019},
  date        = {2019-02-08},
  volume      = {2019},
  institution = {Council of Europe},
  abstract    = {This report, written for the Anti-discrimination department of the Council of Europe, concerns discrimination caused by algorithmic decision-making and other types of artificial intelligence (AI). AI advances important goals, such as efficiency, health and economic growth but it can also have discriminatory effects, for instance when AI systems learn from biased human decisions. In the public and the private sector, organisations can take AI-driven decisions with far-reaching effects for people. Public sector bodies can use AI for predictive policing for example, or for making decisions on eligibility for pension payments, housing assistance or unemployment benefits. In the private sector, AI can be used to select job applicants, and banks can use AI to decide whether to grant individual consumers credit and set interest rates for them. Moreover, many small decisions, taken together, can have large effects. By way of illustration, AI-driven price discrimination could lead to certain groups in society consistently paying more. The most relevant legal tools to mitigate the risks of AI-driven discrimination are non-discrimination law and data protection law. If effectively enforced, both these legal tools could help to fight illegal discrimination. Council of Europe member States, human rights monitoring bodies, such as the European Commission against Racism and Intolerance, and Equality Bodies should aim for better enforcement of current non-discrimination norms. But AI also opens the way for new types of unfair differentiation (some might say discrimination) that escape current laws. Most non-discrimination statutes apply only to discrimination on the basis of protected characteristics, such as skin colour. Such statutes do not apply if an AI system invents new classes, which do not correlate with protected characteristics, to differentiate between people. Such differentiation could still be unfair, however, for instance when it reinforces social inequality. We probably need additional regulation to protect fairness and human rights in the area of AI. But regulating AI in general is not the right approach, as the use of AI systems is too varied for one set of rules. In different sectors, different values are at stake, and different problems arise. Therefore, sector-specific rules should be considered. More research and debate are needed.},
  keywords    = {ai, discriminatie, frontpage, kunstmatige intelligentie, Mensenrechten},
}

Annotatie bij Rb. Den Haag 9 december 2016 (OM / Wilders)

Mediaforum, vol. 2017, no. 1, pp. 34-36, 2017

Incitement to hatred, Insult, Discrimination, Freedom of expression

Bibtex

@Article{Nieuwenhuis2017b,
  title    = {Annotatie bij Rb. Den Haag 9 december 2016 (OM / Wilders)},
  author   = {Nieuwenhuis, A.},
  url      = {https://www.ivir.nl/publicaties/download/Annotatie_Mediaforum_2017_1.pdf},
  year     = {2017},
  date     = {2017-03-10},
  journal  = {Mediaforum},
  volume   = {2017},
  number   = {1},
  pages    = {34-36},
  keywords = {aanzetten tot haat, belediging, discriminatie, frontpage, Vrijheid van meningsuiting},
}