Algorithms Off-limits? If digital trade law restricts access to source code of software then accountability will suffer

Irion, K.
2022

Abstract

Free trade agreements are increasingly used to construct an additional layer of protection for source code of software. This comes in the shape of a new prohibition for governments to require access to, or transfer of, source code of software, subject to certain exceptions. A clause on software source code is also part and parcel of an ambitious set of new rules on trade-related aspects of electronic commerce currently being negotiated by 86 members of the World Trade Organization. Our understanding to date of how such a commitment inside trade law affects governments’ right to regulate digital technologies, and of the policy space that trade law allows, is limited. Access to software source code is, for example, necessary to meet regulatory and judicial needs and to ensure that digital technologies are in conformity with individuals’ human rights and societal values. This article analyzes the implications of such a source code clause for current and future digital policies by governments that aim to ensure the transparency, fairness and accountability of computer and machine learning algorithms.

accountability, algorithms, application programming interfaces, auditability, Digital trade, fairness, frontpage, source code, Transparency

Bibtex

@Article{Irion2022b,
  title    = {Algorithms Off-limits? If digital trade law restricts access to source code of software then accountability will suffer},
  author   = {Irion, K.},
  url      = {https://www.ivir.nl/facct22-125-2/},
  year     = {2022},
  date     = {2022-06-17},
  abstract = {Free trade agreements are increasingly used to construct an additional layer of protection for source code of software. This comes in the shape of a new prohibition for governments to require access to, or transfer of, source code of software, subject to certain exceptions. A clause on software source code is also part and parcel of an ambitious set of new rules on trade-related aspects of electronic commerce currently being negotiated by 86 members of the World Trade Organization. Our understanding to date of how such a commitment inside trade law affects governments’ right to regulate digital technologies, and of the policy space that trade law allows, is limited. Access to software source code is, for example, necessary to meet regulatory and judicial needs and to ensure that digital technologies are in conformity with individuals’ human rights and societal values. This article analyzes the implications of such a source code clause for current and future digital policies by governments that aim to ensure the transparency, fairness and accountability of computer and machine learning algorithms.},
  keywords = {accountability, algorithms, application programming interfaces, auditability, Digital trade, fairness, frontpage, source code, Transparency},
}

Who is the fairest of them all? Public attitudes and expectations regarding automated decision-making

Helberger, N., Araujo, T. & Vreese, C.H. de
Computer Law & Security Review, vol. 39, 2020

Abstract

The ongoing substitution of human decision-makers by automated decision-making (ADM) systems in a whole range of areas raises the question of whether and, if so, under which conditions ADM is acceptable and fair. So far, this debate has been led primarily by academics, civil society, technology developers and members of the expert groups tasked with developing ethical guidelines for ADM. Ultimately, however, ADM affects citizens, who will live with, act upon and ultimately have to accept the authority of ADM systems. The paper aims to contribute to this larger debate by providing deeper insights into the question of whether, and if so, why and under which conditions, citizens are inclined to accept ADM as fair. The results of a survey (N = 958) with a representative sample of the Dutch adult population show that most respondents assume that AI-driven ADM systems are fairer than human decision-makers. A more nuanced view emerges from an analysis of the responses, with emotions, expectations about AI being data- and calculation-driven, and the role of the programmer, among other dimensions, being cited as reasons for (un)fairness by AI or humans. Individual characteristics such as age and education level influenced not only perceptions about AI fairness, but also the reasons provided for such perceptions. The paper concludes with a normative assessment of the findings and suggestions for the future debate and research.

Artificial intelligence, automated decision making, fairness, frontpage, Technologie en recht

Bibtex

@Article{Helberger2020f,
  title    = {Who is the fairest of them all? Public attitudes and expectations regarding automated decision-making},
  author   = {Helberger, N. and Araujo, T. and Vreese, C.H. de},
  url      = {https://www.sciencedirect.com/science/article/pii/S0267364920300613?dgcid=author},
  doi      = {10.1016/j.clsr.2020.105456},
  year     = {2020},
  date     = {2020-09-15},
  journal  = {Computer Law & Security Review},
  volume   = {39},
  abstract = {The ongoing substitution of human decision-makers by automated decision-making (ADM) systems in a whole range of areas raises the question of whether and, if so, under which conditions ADM is acceptable and fair. So far, this debate has been led primarily by academics, civil society, technology developers and members of the expert groups tasked with developing ethical guidelines for ADM. Ultimately, however, ADM affects citizens, who will live with, act upon and ultimately have to accept the authority of ADM systems. The paper aims to contribute to this larger debate by providing deeper insights into the question of whether, and if so, why and under which conditions, citizens are inclined to accept ADM as fair. The results of a survey (N = 958) with a representative sample of the Dutch adult population show that most respondents assume that AI-driven ADM systems are fairer than human decision-makers. A more nuanced view emerges from an analysis of the responses, with emotions, expectations about AI being data- and calculation-driven, and the role of the programmer, among other dimensions, being cited as reasons for (un)fairness by AI or humans. Individual characteristics such as age and education level influenced not only perceptions about AI fairness, but also the reasons provided for such perceptions. The paper concludes with a normative assessment of the findings and suggestions for the future debate and research.},
  keywords = {Artificial intelligence, automated decision making, fairness, frontpage, Technologie en recht},
}

Diversity, Fairness, and Data-Driven Personalization in (News) Recommender Systems

Bernstein, A., Vreese, C.H. de, Helberger, N., Schulz, W. & Zweig, K.A.
Dagstuhl Reports, vol. 9, num: 11, pp: 117-124, 2020

Abstract

As people increasingly rely on online media and recommender systems to consume information, engage in debates and form their political opinions, the design goals of online media and news recommenders have wide implications for the political and social processes that take place online and offline. Current recommender systems have been observed to promote personalization and more effective forms of informing, but also to narrow the user’s exposure to diverse content. Concerns about echo chambers and filter bubbles highlight the importance of design metrics that balance accurate recommendations, which respond to individual information needs and preferences, against concerns about missing out on important information, context and the broader cultural and political diversity in the news, as well as fairness. A broader, more sophisticated vision of the future of personalized recommenders needs to be formed, a vision that can only be developed as the result of a collaborative effort by different areas of academic research (media studies, computer science, law and legal philosophy, communication science, political philosophy, and democratic theory). The proposed workshop takes first steps towards developing this much-needed vision of how recommender systems affect the democratic role of the media, and towards defining guidelines as well as a manifesto for future research and long-term goals for the emerging topic of fairness, diversity, and personalization in recommender systems.

diversity, fairness, frontpage, Mediarecht, personalisatie, recommender systems

Bibtex

@Article{Bernstein2020,
  title    = {Diversity, Fairness, and Data-Driven Personalization in (News) Recommender Systems},
  author   = {Bernstein, A. and Vreese, C.H. de and Helberger, N. and Schulz, W. and Zweig, K.A.},
  url      = {https://www.ivir.nl/publicaties/download/dagrep_v009_i011_p117_19482.pdf},
  doi      = {10.4230/DagRep.9.11.117},
  year     = {2020},
  date     = {2020-04-02},
  journal  = {Dagstuhl Reports},
  volume   = {9},
  number   = {11},
  pages    = {117-124},
  abstract = {As people increasingly rely on online media and recommender systems to consume information, engage in debates and form their political opinions, the design goals of online media and news recommenders have wide implications for the political and social processes that take place online and offline. Current recommender systems have been observed to promote personalization and more effective forms of informing, but also to narrow the user’s exposure to diverse content. Concerns about echo chambers and filter bubbles highlight the importance of design metrics that balance accurate recommendations, which respond to individual information needs and preferences, against concerns about missing out on important information, context and the broader cultural and political diversity in the news, as well as fairness. A broader, more sophisticated vision of the future of personalized recommenders needs to be formed, a vision that can only be developed as the result of a collaborative effort by different areas of academic research (media studies, computer science, law and legal philosophy, communication science, political philosophy, and democratic theory). The proposed workshop takes first steps towards developing this much-needed vision of how recommender systems affect the democratic role of the media, and towards defining guidelines as well as a manifesto for future research and long-term goals for the emerging topic of fairness, diversity, and personalization in recommender systems.},
  keywords = {diversity, fairness, frontpage, Mediarecht, personalisatie, recommender systems},
}