Auteursrecht en artificiële creatie

Auteursrecht, no. 2, pp. 47-52, 2021

Abstract

This article asks whether productions created with the aid of AI systems can enjoy copyright protection. Central to this analysis is not the machine, but the role of the human in the creative process supported by the AI system. Is this role sufficient to qualify the result as a copyright-protected work? And who, in that case, qualifies as the author(s)? These questions are answered on the basis of EU law and the case law of the CJEU. This article is based on a study commissioned by the European Commission, which underlies the policy position on AI creations formulated by the Commission in its IP Action Plan.

Copyright, creations, frontpage, artificial intelligence

Bibtex

@article{Hugenholtz2021b,
  title    = {Auteursrecht en artificiële creatie},
  author   = {Hugenholtz, P. and Quintais, J.},
  url      = {https://www.ivir.nl/publicaties/download/Auteursrecht-2021-2.pdf},
  year     = {2021},
  date     = {2021-06-17},
  journal  = {Auteursrecht},
  number   = {2},
  pages    = {47--52},
  abstract = {In dit artikel wordt de vraag gesteld of voortbrengselen die met behulp van AI-systemen tot stand zijn gebracht auteursrechtelijk beschermd kunnen zijn. Centraal in deze analyse staat niet de machine, maar de rol van de mens in het door het AI-systeem ondersteunde creatieve proces. Is deze rol voldoende om het resultaat als auteursrechtelijk beschermd werk te kwalificeren? En wie heeft in dat geval te gelden als maker(s)? Deze vragen worden aan de hand van het Unierecht en de jurisprudentie van het HvJ EU beantwoord. Dit artikel is gebaseerd op een studie die in opdracht van de Europese Commissie is verricht en aan de basis ligt van het door de Commissie in het Actieplan IE geformuleerde beleidsstandpunt over AI-creaties.},
  keywords = {Auteursrecht, creaties, frontpage, kunstmatige intelligentie},
}
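For readers who process these entries programmatically, the field layout can be pulled out with a few lines of code. Below is a minimal sketch, not a full BibTeX parser: it extracts flat `key = {value}` fields from a single entry and assumes no nested braces inside field values. The embedded entry reproduces the record above with the entry type and year corrected.

```python
import re

# One flat BibTeX entry (entry type and year corrected; pages taken
# from the venue line above). No nested braces in any field value.
ENTRY = """@article{Hugenholtz2021b,
  title   = {Auteursrecht en artificiële creatie},
  author  = {Hugenholtz, P. and Quintais, J.},
  journal = {Auteursrecht},
  year    = {2021},
  number  = {2},
  pages   = {47-52},
}"""

def parse_fields(entry: str) -> dict:
    """Return a dict mapping field name -> value for one flat entry."""
    return {m.group(1).lower(): m.group(2)
            for m in re.finditer(r"(\w+)\s*=\s*\{([^{}]*)\}", entry)}

fields = parse_fields(ENTRY)
print(fields["journal"], fields["year"])  # Auteursrecht 2021
```

For entries with nested braces or `@string` macros, a dedicated library would be needed; this sketch only covers the simple single-level form used on this page.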

De kunstmatige maker: over de gevolgen van het Endstra-arrest voor de bescherming van artificiële creaties

Intellectuele Eigendom & Reclamerecht (IER), no. 5, pp. 276-280, 2020

Copyright, creations, frontpage, artificial intelligence, authors

Bibtex

@article{Hugenholtz2020d,
  title    = {De kunstmatige maker: over de gevolgen van het Endstra-arrest voor de bescherming van artificiële creaties},
  author   = {Hugenholtz, P.},
  url      = {https://www.ivir.nl/publicaties/download/IER_2020_5.pdf},
  year     = {2020},
  date     = {2020-10-01},
  journal  = {Intellectuele Eigendom \& Reclamerecht (IER)},
  number   = {5},
  pages    = {276--280},
  keywords = {Auteursrecht, creaties, frontpage, kunstmatige intelligentie, makers},
}

The Netherlands in ‘Automating Society – Taking Stock of Automated Decision-Making in the EU’

pp. 93-102, 2019

Abstract

Systems for automated decision-making or decision support (ADM) are on the rise in EU countries: profiling job applicants based on their personal emails in Finland, allocating treatment for patients in the public health system in Italy, sorting the unemployed in Poland, automatically identifying children vulnerable to neglect in Denmark, detecting welfare fraud in the Netherlands, credit scoring systems in many EU countries – the range of applications has broadened to almost all aspects of daily life. This raises many questions: Do we need new laws? Do we need new oversight institutions? Whom do we fund to develop answers to the challenges ahead? Where should we invest? How do we enable citizens – patients, employees, consumers – to deal with this? For the report “Automating Society – Taking Stock of Automated Decision-Making in the EU”, our experts looked at the situation at the EU level and in 12 Member States: Belgium, Denmark, Finland, France, Germany, Italy, the Netherlands, Poland, Slovenia, Spain, Sweden and the UK. We not only assessed the political discussions and initiatives in these countries, but also present a section, “ADM in Action”, for each state, listing examples of automated decision-making already in use. This is the first time a comprehensive study has been done on the state of automated decision-making in Europe.

algorithms, artificial intelligence, EU, frontpage, NGO

Bibtex

@report{Til2019,
  title    = {The Netherlands in ‘Automating Society – Taking Stock of Automated Decision-Making in the EU’},
  author   = {Til, G. van},
  url      = {https://www.ivir.nl/automating_society_report_2019/},
  year     = {2019},
  date     = {2019-02-11},
  pages    = {93--102},
  abstract = {Systems for automated decision-making or decision support (ADM) are on the rise in EU countries: profiling job applicants based on their personal emails in Finland, allocating treatment for patients in the public health system in Italy, sorting the unemployed in Poland, automatically identifying children vulnerable to neglect in Denmark, detecting welfare fraud in the Netherlands, credit scoring systems in many EU countries – the range of applications has broadened to almost all aspects of daily life. This raises many questions: Do we need new laws? Do we need new oversight institutions? Whom do we fund to develop answers to the challenges ahead? Where should we invest? How do we enable citizens – patients, employees, consumers – to deal with this? For the report “Automating Society – Taking Stock of Automated Decision-Making in the EU”, our experts looked at the situation at the EU level and in 12 Member States: Belgium, Denmark, Finland, France, Germany, Italy, the Netherlands, Poland, Slovenia, Spain, Sweden and the UK. We not only assessed the political discussions and initiatives in these countries, but also present a section, “ADM in Action”, for each state, listing examples of automated decision-making already in use. This is the first time a comprehensive study has been done on the state of automated decision-making in Europe.},
  keywords = {algorithms, algoritmes, Artificial intelligence, EU, frontpage, kunstmatige intelligentie, NGO},
}

Discrimination, artificial intelligence, and algorithmic decision-making

vol. 2019, 2019

Abstract

This report, written for the Anti-discrimination department of the Council of Europe, concerns discrimination caused by algorithmic decision-making and other types of artificial intelligence (AI). AI advances important goals, such as efficiency, health and economic growth, but it can also have discriminatory effects, for instance when AI systems learn from biased human decisions. In the public and the private sector, organisations can take AI-driven decisions with far-reaching effects for people. Public sector bodies can use AI for predictive policing, for example, or for making decisions on eligibility for pension payments, housing assistance or unemployment benefits. In the private sector, AI can be used to select job applicants, and banks can use AI to decide whether to grant individual consumers credit and set interest rates for them. Moreover, many small decisions, taken together, can have large effects. By way of illustration, AI-driven price discrimination could lead to certain groups in society consistently paying more. The most relevant legal tools to mitigate the risks of AI-driven discrimination are non-discrimination law and data protection law. If effectively enforced, both these legal tools could help to fight illegal discrimination. Council of Europe member States, human rights monitoring bodies, such as the European Commission against Racism and Intolerance, and Equality Bodies should aim for better enforcement of current non-discrimination norms. But AI also opens the way for new types of unfair differentiation (some might say discrimination) that escape current laws. Most non-discrimination statutes apply only to discrimination on the basis of protected characteristics, such as skin colour. Such statutes do not apply if an AI system invents new classes, which do not correlate with protected characteristics, to differentiate between people. Such differentiation could still be unfair, however, for instance when it reinforces social inequality.
We probably need additional regulation to protect fairness and human rights in the area of AI. But regulating AI in general is not the right approach, as the use of AI systems is too varied for one set of rules. In different sectors, different values are at stake, and different problems arise. Therefore, sector-specific rules should be considered. More research and debate are needed.

AI, discrimination, frontpage, artificial intelligence, human rights

Bibtex

@report{Borgesius2019,
  title    = {Discrimination, artificial intelligence, and algorithmic decision-making},
  author   = {Zuiderveen Borgesius, F.},
  url      = {https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73},
  year     = {2019},
  date     = {2019-02-08},
  volume   = {2019},
  abstract = {This report, written for the Anti-discrimination department of the Council of Europe, concerns discrimination caused by algorithmic decision-making and other types of artificial intelligence (AI). AI advances important goals, such as efficiency, health and economic growth, but it can also have discriminatory effects, for instance when AI systems learn from biased human decisions. In the public and the private sector, organisations can take AI-driven decisions with far-reaching effects for people. Public sector bodies can use AI for predictive policing, for example, or for making decisions on eligibility for pension payments, housing assistance or unemployment benefits. In the private sector, AI can be used to select job applicants, and banks can use AI to decide whether to grant individual consumers credit and set interest rates for them. Moreover, many small decisions, taken together, can have large effects. By way of illustration, AI-driven price discrimination could lead to certain groups in society consistently paying more. The most relevant legal tools to mitigate the risks of AI-driven discrimination are non-discrimination law and data protection law. If effectively enforced, both these legal tools could help to fight illegal discrimination. Council of Europe member States, human rights monitoring bodies, such as the European Commission against Racism and Intolerance, and Equality Bodies should aim for better enforcement of current non-discrimination norms. But AI also opens the way for new types of unfair differentiation (some might say discrimination) that escape current laws. Most non-discrimination statutes apply only to discrimination on the basis of protected characteristics, such as skin colour. Such statutes do not apply if an AI system invents new classes, which do not correlate with protected characteristics, to differentiate between people. Such differentiation could still be unfair, however, for instance when it reinforces social inequality. We probably need additional regulation to protect fairness and human rights in the area of AI. But regulating AI in general is not the right approach, as the use of AI systems is too varied for one set of rules. In different sectors, different values are at stake, and different problems arise. Therefore, sector-specific rules should be considered. More research and debate are needed.},
  keywords = {ai, discriminatie, frontpage, kunstmatige intelligentie, Mensenrechten},
}