Algorithms Off-limits? If digital trade law restricts access to source code of software then accountability will suffer

2022

Abstract

Free trade agreements are increasingly used to construct an additional layer of protection for source code of software. This comes in the shape of a new prohibition for governments to require access to, or transfer of, source code of software, subject to certain exceptions. A clause on software source code is also part and parcel of an ambitious set of new rules on trade-related aspects of electronic commerce currently negotiated by 86 members of the World Trade Organization. Our understanding to date of how such a commitment inside trade law impacts on governments’ right to regulate digital technologies and the policy space that is allowed under trade law is limited. Access to software source code is, for example, necessary to meet regulatory and judicial needs in order to ensure that digital technologies are in conformity with individuals’ human rights and societal values. This article will analyze the implications of such a source code clause for current and future digital policies by governments that aim to ensure transparency, fairness and accountability of computer and machine learning algorithms.

accountability, algorithms, application programming interfaces, auditability, Digital trade, fairness, frontpage, source code, Transparency

Bibtex

@Article{Irion2022b, title = {Algorithms Off-limits? If digital trade law restricts access to source code of software then accountability will suffer}, author = {Irion, K.}, url = {https://www.ivir.nl/facct22-125-2/}, year = {2022}, date = {2022-06-17}, abstract = {Free trade agreements are increasingly used to construct an additional layer of protection for source code of software. This comes in the shape of a new prohibition for governments to require access to, or transfer of, source code of software, subject to certain exceptions. A clause on software source code is also part and parcel of an ambitious set of new rules on trade-related aspects of electronic commerce currently negotiated by 86 members of the World Trade Organization. Our understanding to date of how such a commitment inside trade law impacts on governments’ right to regulate digital technologies and the policy space that is allowed under trade law is limited. Access to software source code is, for example, necessary to meet regulatory and judicial needs in order to ensure that digital technologies are in conformity with individuals’ human rights and societal values. This article will analyze the implications of such a source code clause for current and future digital policies by governments that aim to ensure transparency, fairness and accountability of computer and machine learning algorithms.}, keywords = {accountability, algorithms, application programming interfaces, auditability, Digital trade, fairness, frontpage, source code, Transparency}, }

The Algorithmic Learning Deficit: Artificial Intelligence, Data Protection and Trade

Big Data and Global Trade Law, Mira Burri (ed.), Cambridge University Press, 2021, pp. 212-230

algorithms, Artificial intelligence, frontpage, handelsrecht, Recht op gegevensbescherming

Bibtex

@Chapter{nokey, title = {The Algorithmic Learning Deficit: Artificial Intelligence, Data Protection and Trade}, author = {Yakovleva, S. and van Hoboken, J.}, url = {https://www.ivir.nl/publicaties/download/the-algorithmic-learning-deficit.pdf}, doi = {https://doi.org/10.1017/9781108919234.014}, year = {2021}, date = {2022-02-10}, keywords = {algorithms, Artificial intelligence, frontpage, handelsrecht, Recht op gegevensbescherming}, }

Personalised pricing: The demise of the fixed price?

Abstract

An online seller or platform is technically able to offer every consumer a different price for the same product, based on information it has about the customers. Such online price discrimination exacerbates concerns regarding the fairness and morality of price discrimination, and the possible need for regulation. In this chapter, we discuss the underlying basis of price discrimination in economic theory, and its popular perception. Our surveys show that consumers are critical and suspicious of online price discrimination. A majority consider it unacceptable and unfair, and are in favour of a ban. When stores apply online price discrimination, most consumers think they should be informed about it. We argue that the General Data Protection Regulation (GDPR) applies to the most controversial forms of online price discrimination, and not only requires companies to disclose their use of price discrimination, but also requires companies to ask customers for their prior consent. Industry practice, however, does not show any adoption of these two principles.

algorithms, frontpage, GDPR, gegevensbescherming, Personalisation, Price discrimination, Privacy

Bibtex

@Article{Poort2021, title = {Personalised pricing: The demise of the fixed price?}, author = {Poort, J. and Zuiderveen Borgesius, F.}, url = {https://www.ivir.nl/publicaties/download/The-Demise-of-the-Fixed-Price.pdf}, year = {2021}, date = {2021-03-04}, abstract = {An online seller or platform is technically able to offer every consumer a different price for the same product, based on information it has about the customers. Such online price discrimination exacerbates concerns regarding the fairness and morality of price discrimination, and the possible need for regulation. In this chapter, we discuss the underlying basis of price discrimination in economic theory, and its popular perception. Our surveys show that consumers are critical and suspicious of online price discrimination. A majority consider it unacceptable and unfair, and are in favour of a ban. When stores apply online price discrimination, most consumers think they should be informed about it. We argue that the General Data Protection Regulation (GDPR) applies to the most controversial forms of online price discrimination, and not only requires companies to disclose their use of price discrimination, but also requires companies to ask customers for their prior consent. Industry practice, however, does not show any adoption of these two principles.}, keywords = {algorithms, frontpage, GDPR, gegevensbescherming, Personalisation, Price discrimination, Privacy}, }

Algorithmic systems: the consent is in the detail?

Internet Policy Review, vol. 9, no. 1, 2020

Abstract

Applications of algorithmically informed decisions are becoming entrenched in society, with data processing being their main process and ingredient. While these applications are progressively gaining momentum, established data protection and privacy rules have struggled to incorporate the particularities of data-intensive information societies. It is a truism to point out the resulting misalignment between algorithmic processing of personal data and the data protection regulatory frameworks that strive for meaningful control over personal data. However, the challenges to the (traditional) role and concept of consent are particularly manifest. This article examines the transformation of consent models in order to assess how the concept and the applied models of consent can be reconciled in order to correspond not only to the current regulatory landscapes but also to the exponential growth of algorithmic processing technologies. This particularly pressing area of safeguarding a basic aspect of individual control over personal data in the algorithmic era is interlinked with practical implementations of consent in the technology used and with adopted interpretations of the concept of consent, the scope of application of personal data, as well as the obligations enshrined in them. What makes consent effective as a data protection tool and how can we maintain its previous glory within the current technological challenges?

algorithms, consent, frontpage, Technologie en recht

Bibtex

@Article{Giannopoulou2020, title = {Algorithmic systems: the consent is in the detail?}, author = {Giannopoulou, A.}, url = {https://policyreview.info/node/1452/pdf}, doi = {https://doi.org/10.14763/2020.1.1452}, year = {2020}, date = {2020-03-24}, journal = {Internet Policy Review}, volume = {9}, number = {1}, abstract = {Applications of algorithmically informed decisions are becoming entrenched in society, with data processing being their main process and ingredient. While these applications are progressively gaining momentum, established data protection and privacy rules have struggled to incorporate the particularities of data-intensive information societies. It is a truism to point out the resulting misalignment between algorithmic processing of personal data and the data protection regulatory frameworks that strive for meaningful control over personal data. However, the challenges to the (traditional) role and concept of consent are particularly manifest. This article examines the transformation of consent models in order to assess how the concept and the applied models of consent can be reconciled in order to correspond not only to the current regulatory landscapes but also to the exponential growth of algorithmic processing technologies. This particularly pressing area of safeguarding a basic aspect of individual control over personal data in the algorithmic era is interlinked with practical implementations of consent in the technology used and with adopted interpretations of the concept of consent, the scope of application of personal data, as well as the obligations enshrined in them. What makes consent effective as a data protection tool and how can we maintain its previous glory within the current technological challenges?}, keywords = {algorithms, consent, frontpage, Technologie en recht}, }

The Netherlands in ‘Automating Society – Taking Stock of Automated Decision-Making in the EU’

pp. 93-102, 2019

Abstract

Systems for automated decision-making or decision support (ADM) are on the rise in EU countries: Profiling job applicants based on their personal emails in Finland, allocating treatment for patients in the public health system in Italy, sorting the unemployed in Poland, automatically identifying children vulnerable to neglect in Denmark, detecting welfare fraud in the Netherlands, credit scoring systems in many EU countries – the range of applications has broadened to almost all aspects of daily life. This raises a lot of questions: Do we need new laws? Do we need new oversight institutions? Who do we fund to develop answers to the challenges ahead? Where should we invest? How do we enable citizens – patients, employees, consumers – to deal with this? For the report “Automating Society – Taking Stock of Automated Decision-Making in the EU”, our experts have looked at the situation not only at the EU level but also in 12 Member States: Belgium, Denmark, Finland, France, Germany, Italy, the Netherlands, Poland, Slovenia, Spain, Sweden and the UK. We assess not only the political discussions and initiatives in these countries but also present a section “ADM in Action” for all states, listing examples of automated decision-making already in use. This is the first time a comprehensive study has been done on the state of automated decision-making in Europe.

algorithms, algoritmes, Artificial intelligence, EU, frontpage, kunstmatige intelligentie, NGO

Bibtex

@Report{Til2019, title = {The Netherlands in ‘Automating Society – Taking Stock of Automated Decision-Making in the EU’}, author = {Til, G. van}, url = {https://www.ivir.nl/automating_society_report_2019/}, year = {2019}, date = {2019-02-11}, abstract = {Systems for automated decision-making or decision support (ADM) are on the rise in EU countries: Profiling job applicants based on their personal emails in Finland, allocating treatment for patients in the public health system in Italy, sorting the unemployed in Poland, automatically identifying children vulnerable to neglect in Denmark, detecting welfare fraud in the Netherlands, credit scoring systems in many EU countries – the range of applications has broadened to almost all aspects of daily life. This raises a lot of questions: Do we need new laws? Do we need new oversight institutions? Who do we fund to develop answers to the challenges ahead? Where should we invest? How do we enable citizens – patients, employees, consumers – to deal with this? For the report “Automating Society – Taking Stock of Automated Decision-Making in the EU”, our experts have looked at the situation not only at the EU level but also in 12 Member States: Belgium, Denmark, Finland, France, Germany, Italy, the Netherlands, Poland, Slovenia, Spain, Sweden and the UK. We assess not only the political discussions and initiatives in these countries but also present a section “ADM in Action” for all states, listing examples of automated decision-making already in use. This is the first time a comprehensive study has been done on the state of automated decision-making in Europe.}, keywords = {algorithms, algoritmes, Artificial intelligence, EU, frontpage, kunstmatige intelligentie, NGO}, }