How the EU Outsources the Task of Human Rights Protection to Platforms and Users: The Case of UGC Monetization

Senftleben, M., Quintais, J. & Meiring, A.
Berkeley Technology Law Journal, vol. 38, iss. 3, pp. 933-1010, 2024

Abstract

With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. 
Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.

Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, proportionality, user-generated content

Bibtex

@article{nokey, title = {How the EU Outsources the Task of Human Rights Protection to Platforms and Users: The Case of UGC Monetization}, author = {Senftleben, M. and Quintais, J. and Meiring, A.}, url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4421150}, year = {2024}, date = {2024-01-23}, journal = {Berkeley Technology Law Journal}, volume = {38}, number = {3}, pages = {933--1010}, abstract = {With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. 
Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.}, keywords = {Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, proportionality, user-generated content}, }

The right to encryption: Privacy as preventing unlawful access

van Daalen, O.
Computer Law & Security Review, vol. 49, 2023

Abstract

Encryption technologies are a fundamental building block of modern digital infrastructure, but plans to curb these technologies continue to spring up. Even in the European Union, where their application is by now firmly embedded in legislation, lawmakers are again calling for measures which would impact these technologies. One of the most important arguments in this debate is human rights, most notably the rights to privacy and to freedom of expression. And although some authors have in the past explored how encryption technologies support human rights, this connection is not yet firmly grounded in an analysis of European human rights case law. This contribution aims to fill this gap, developing a framework for assessing restrictions of encryption technologies under the rights to privacy and freedom of expression as protected under the European Convention on Human Rights (the Convention) and the Charter of Fundamental Rights of the European Union (the Charter). In the first section, the relevant function of encryption technologies, restricting access to information (called confidentiality), is discussed. In the second section, an overview of some governmental policies and practices impacting these technologies is provided. This continues with a discussion of the case law on the rights to privacy, data protection and freedom of expression, arguing that these rights are not only about ensuring lawful access by governments to protected information, but also about preventing unlawful access by others. And because encryption technologies are an important technology to reduce the risk of this unlawful access, it is then proposed that this risk is central to the assessment of governance measures in the field of encryption technologies. 
The article concludes by recommending that states perform an in-depth assessment of this risk when proposing new measures, and that courts, when reviewing them, also place the risk of unlawful access at the centre of the analysis of interference and proportionality.

communications confidentiality, encryption, Freedom of expression, Human rights, Privacy, unlawful access

Bibtex

@article{nokey, title = {The right to encryption: Privacy as preventing unlawful access}, author = {van Daalen, O.}, url = {https://www.sciencedirect.com/science/article/pii/S0267364923000146}, doi = {10.1016/j.clsr.2023.105804}, year = {2023}, date = {2023-05-23}, journal = {Computer Law & Security Review}, volume = {49}, abstract = {Encryption technologies are a fundamental building block of modern digital infrastructure, but plans to curb these technologies continue to spring up. Even in the European Union, where their application is by now firmly embedded in legislation, lawmakers are again calling for measures which would impact these technologies. One of the most important arguments in this debate is human rights, most notably the rights to privacy and to freedom of expression. And although some authors have in the past explored how encryption technologies support human rights, this connection is not yet firmly grounded in an analysis of European human rights case law. This contribution aims to fill this gap, developing a framework for assessing restrictions of encryption technologies under the rights to privacy and freedom of expression as protected under the European Convention on Human Rights (the Convention) and the Charter of Fundamental Rights of the European Union (the Charter). In the first section, the relevant function of encryption technologies, restricting access to information (called confidentiality), is discussed. In the second section, an overview of some governmental policies and practices impacting these technologies is provided. This continues with a discussion of the case law on the rights to privacy, data protection and freedom of expression, arguing that these rights are not only about ensuring lawful access by governments to protected information, but also about preventing unlawful access by others. 
And because encryption technologies are an important technology to reduce the risk of this unlawful access, it is then proposed that this risk is central to the assessment of governance measures in the field of encryption technologies. The article concludes by recommending that states perform an in-depth assessment of this risk when proposing new measures, and that courts, when reviewing them, also place the risk of unlawful access at the centre of the analysis of interference and proportionality.}, keywords = {communications confidentiality, encryption, Freedom of expression, Human rights, Privacy, unlawful access}, }

Outsourcing Human Rights Obligations and Concealing Human Rights Deficits: The Example of Monetizing User-Generated Content Under the CDSM Directive and the Digital Services Act

Senftleben, M., Quintais, J. & Meiring, A.

Abstract

With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. 
Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.

Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, user-generated content

Bibtex

@online{nokey, title = {Outsourcing Human Rights Obligations and Concealing Human Rights Deficits: The Example of Monetizing User-Generated Content Under the CDSM Directive and the Digital Services Act}, author = {Senftleben, M. and Quintais, J. and Meiring, A.}, url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4421150}, abstract = {With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. 
Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.}, keywords = {Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, user-generated content}, }

Export control of cybersurveillance items in the new dual-use regulation: The challenges of applying human rights logic to export control

Computer Law & Security Review, vol. 48, 2023

Abstract

In 2021, the Recast Dual-Use Regulation entered into force. The regulation includes a heavily debated new provision on the export control of so-called cybersurveillance items. This provision departs from the traditional logic of export control rules in multiple ways. Most importantly, it positions human rights considerations as an important factor in the export control of a flexible range of technologies. This article explores the operation, implications and challenges of this new human rights-orientated approach to export control of digital surveillance technologies. Taking the definition of cybersurveillance items as a starting point of the analysis, the article draws on surveillance-related case law of the European Court of Human Rights and the Court of Justice of the European Union, to define the potential scope of application of the open-ended cybersurveillance concept of the Regulation. By exploring how this concept maps to technologies often connected with human rights infringements, such as facial recognition, location tracking and open-source intelligence, the article highlights the challenges of applying this new approach and underscores the need for its further development in practice.

cybersurveillance, Human rights, Regulation

Bibtex

@article{nokey, title = {Export control of cybersurveillance items in the new dual-use regulation: The challenges of applying human rights logic to export control}, author = {van Daalen, O. and van Hoboken, J. and Rucz, M.}, doi = {10.1016/j.clsr.2022.105789}, year = {2023}, date = {2023-04-21}, journal = {Computer Law & Security Review}, volume = {48}, abstract = {In 2021, the Recast Dual-Use Regulation entered into force. The regulation includes a heavily debated new provision on the export control of so-called cybersurveillance items. This provision departs from the traditional logic of export control rules in multiple ways. Most importantly, it positions human rights considerations as an important factor in the export control of a flexible range of technologies. This article explores the operation, implications and challenges of this new human rights-orientated approach to export control of digital surveillance technologies. Taking the definition of cybersurveillance items as a starting point of the analysis, the article draws on surveillance-related case law of the European Court of Human Rights and the Court of Justice of the European Union, to define the potential scope of application of the open-ended cybersurveillance concept of the Regulation. By exploring how this concept maps to technologies often connected with human rights infringements, such as facial recognition, location tracking and open-source intelligence, the article highlights the challenges of applying this new approach and underscores the need for its further development in practice.}, keywords = {cybersurveillance, Human rights, Regulation}, }

Governing “European values” inside data flows: interdisciplinary perspectives

Irion, K., Kolk, A., Buri, M. & Milan, S.
Internet Policy Review, vol. 10, num: 3, 2021

Abstract

This editorial introduces ten research articles, which form part of this special issue, exploring the governance of “European values” inside data flows. Protecting fundamental human rights and critical public interests that undergird European societies in a global digital ecosystem poses complex challenges, especially because the United States and China are leading in novel technologies. We envision a research agenda calling upon different disciplines to further identify and understand European values that can adequately perform under conditions of transnational data flows.

Artificial intelligence, Data flows, Data governance, Digital connectivity, European Union, European values, Human rights, Internet governance, Personal data protection, Public policy, Societal values

Bibtex

@article{Irion2021e, title = {Governing “European values” inside data flows: interdisciplinary perspectives}, author = {Irion, K. and Kolk, A. and Buri, M. and Milan, S.}, url = {https://policyreview.info/european-values}, doi = {10.14763/2021.3.1582}, year = {2021}, date = {2021-10-11}, journal = {Internet Policy Review}, volume = {10}, number = {3}, abstract = {This editorial introduces ten research articles, which form part of this special issue, exploring the governance of “European values” inside data flows. Protecting fundamental human rights and critical public interests that undergird European societies in a global digital ecosystem poses complex challenges, especially because the United States and China are leading in novel technologies. We envision a research agenda calling upon different disciplines to further identify and understand European values that can adequately perform under conditions of transnational data flows.}, keywords = {Artificial intelligence, Data flows, Data governance, Digital connectivity, European Union, European values, Human rights, Internet governance, Personal data protection, Public policy, Societal values}, }

Panta Rhei: A European Perspective on Ensuring a High Level of Protection of Human Rights in a World in Which Everything Flows

Irion, K.
Big Data and Global Trade Law, Cambridge University Press, 2021

Abstract

Human rights do remain valid currency in how we approach planetary-scale computation and accompanying data flows. Today’s system of human rights protection, however, is highly dependent on domestic legal institutions, which unravel faster than the reconstruction of fitting transnational governance institutions. The chapter takes a critical look at the construction of the data flow metaphor as a policy concept inside international trade law. Subsequently, it explores how the respect for human rights ties in with national constitutionalism that becomes increasingly challenged by the transnational dynamic of digital era transactions. Lastly, the chapter turns to international trade law and why its ambitions to govern cross-border data flows will likely not advance efforts to generate respect for human rights. In conclusion, the chapter advocates for a rebalancing act that recognizes human rights inside international trade law.

Artificial intelligence, EU law, frontpage, Human rights, Transparency, WTO law

Bibtex

@incollection{Irion2021bb, title = {Panta Rhei: A European Perspective on Ensuring a High Level of Protection of Human Rights in a World in Which Everything Flows}, author = {Irion, K.}, booktitle = {Big Data and Global Trade Law}, publisher = {Cambridge University Press}, url = {https://www.cambridge.org/core/books/big-data-and-global-trade-law/panta-rhei/B0E5D7851240E0D2F4562B3C6DFF3011}, doi = {10.1017/9781108919234.015}, year = {2021}, date = {2021-07-05}, abstract = {Human rights do remain valid currency in how we approach planetary-scale computation and accompanying data flows. Today’s system of human rights protection, however, is highly dependent on domestic legal institutions, which unravel faster than the reconstruction of fitting transnational governance institutions. The chapter takes a critical look at the construction of the data flow metaphor as a policy concept inside international trade law. Subsequently, it explores how the respect for human rights ties in with national constitutionalism that becomes increasingly challenged by the transnational dynamic of digital era transactions. Lastly, the chapter turns to international trade law and why its ambitions to govern cross-border data flows will likely not advance efforts to generate respect for human rights. In conclusion, the chapter advocates for a rebalancing act that recognizes human rights inside international trade law.}, keywords = {Artificial intelligence, EU law, frontpage, Human rights, Transparency, WTO law}, }

Panta rhei: A European Perspective on Ensuring a High-Level of Protection of Digital Human Rights in a World in Which Everything Flows

Irion, K.
Amsterdam Law School Research Paper No. 2020, num: 38, 2020

Artificial intelligence, data flow, EU law, Human rights, WTO law

Bibtex

@article{Irion2020d, title = {Panta rhei: A European Perspective on Ensuring a High-Level of Protection of Digital Human Rights in a World in Which Everything Flows}, author = {Irion, K.}, url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3638864}, year = {2020}, date = {2020-11-30}, journal = {Amsterdam Law School Research Paper No. 2020}, number = {38}, keywords = {Artificial intelligence, data flow, EU law, Human rights, WTO law}, }

From Flexible Balancing Tool to Quasi-Constitutional Straitjacket – How the EU Cultivates the Constraining Function of the Three-Step Test

Senftleben, M.

Abstract

In the international intellectual property (IP) arena, the so-called “three-step test” regulates the room for the adoption of limitations and exceptions (L&Es) to exclusive rights across different fields of IP. Given the openness of the individual test criteria, it is tempting for proponents of strong IP protection to strive for the fixation of the meaning of the three-step test at the constraining end of the spectrum of possible interpretations. As the three-step test lies at the core of legislative initiatives to balance exclusive rights and user freedoms, the cultivation of the test’s constraining function and the suppression of the test’s enabling function has the potential to transform the three-step test into a bulwark against limitations of IP protection. The EU is at the forefront of a constraining use and interpretation of the three-step test in the field of copyright law. The configuration of the legal framework in the EU is worrisome because it obliges judges to apply the three-step test as an additional control instrument. It is not sufficient that an individual use falls within the scope of a statutory copyright limitation that explicitly permits this type of use without prior authorization. In addition, judges applying the three-step test also examine whether the specific form of use at issue complies with each individual criterion of the three-step test. Hence, the test serves as an instrument to further restrict L&Es that have already been defined precisely in statutory law. Not surprisingly, decisions from courts in the EU have a tendency of shedding light on the constraining aspect of the three-step test and, therefore, reinforcing the hegemony of copyright holders in the IP arena. The hypothesis underlying the following examination, therefore, is that the EU approach to the three-step test is one-sided in the sense that it only demonstrates the potential of the test to set additional limits to L&Es. 
The analysis focuses on this transformation of a flexible international balancing tool into a powerful confirmation and fortification of IP protection. For this purpose, the two facets of the international three-step test – its enabling and constraining function – are explored before embarking on a discussion of case law that evolved under the one-sided EU approach. Analyzing repercussions on international lawmaking, it will become apparent that the EU approach already impacted the further development of international L&Es. Certain features of the Marrakesh Treaty clearly reflect the restrictive EU approach.

access to knowledge, Berne Convention, Copyright, EU law, frontpage, Human rights, limitations and exceptions, Marrakesh Treaty, rights of disabled persons, transformative use, TRIPS Agreement

Bibtex

@incollection{Senftleben2020b, title = {From Flexible Balancing Tool to Quasi-Constitutional Straitjacket – How the EU Cultivates the Constraining Function of the Three-Step Test}, author = {Senftleben, M.}, url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3576019}, year = {2020}, date = {2020-04-16}, abstract = {In the international intellectual property (IP) arena, the so-called “three-step test” regulates the room for the adoption of limitations and exceptions (L&Es) to exclusive rights across different fields of IP. Given the openness of the individual test criteria, it is tempting for proponents of strong IP protection to strive for the fixation of the meaning of the three-step test at the constraining end of the spectrum of possible interpretations. As the three-step test lies at the core of legislative initiatives to balance exclusive rights and user freedoms, the cultivation of the test’s constraining function and the suppression of the test’s enabling function has the potential to transform the three-step test into a bulwark against limitations of IP protection. The EU is at the forefront of a constraining use and interpretation of the three-step test in the field of copyright law. The configuration of the legal framework in the EU is worrisome because it obliges judges to apply the three-step test as an additional control instrument. It is not sufficient that an individual use falls within the scope of a statutory copyright limitation that explicitly permits this type of use without prior authorization. In addition, judges applying the three-step test also examine whether the specific form of use at issue complies with each individual criterion of the three-step test. Hence, the test serves as an instrument to further restrict L&Es that have already been defined precisely in statutory law. 
Not surprisingly, decisions from courts in the EU have a tendency of shedding light on the constraining aspect of the three-step test and, therefore, reinforcing the hegemony of copyright holders in the IP arena. The hypothesis underlying the following examination, therefore, is that the EU approach to the three-step test is one-sided in the sense that it only demonstrates the potential of the test to set additional limits to L&Es. The analysis focuses on this transformation of a flexible international balancing tool into a powerful confirmation and fortification of IP protection. For this purpose, the two facets of the international three-step test – its enabling and constraining function – are explored before embarking on a discussion of case law that evolved under the one-sided EU approach. Analyzing repercussions on international lawmaking, it will become apparent that the EU approach already impacted the further development of international L&Es. Certain features of the Marrakesh Treaty clearly reflect the restrictive EU approach.}, keywords = {access to knowledge, Berne Convention, Copyright, EU law, frontpage, Human rights, limitations and exceptions, Marrakesh Treaty, rights of disabled persons, transformative use, TRIPS Agreement}, }

Prospective Policy Study on Artificial Intelligence and EU Trade Policy external link

Irion, K. & Williams, J.
2020

Abstract

Artificial intelligence is poised to be the 21st century’s most transformative general-purpose technology that mankind has ever availed itself of. Artificial intelligence is a catch-all for technologies that can carry out complex processes fairly independently by learning from data. In the form of popular digital services and products, applied artificial intelligence is seeping into our daily lives, for example, as personal digital assistants or as autopiloting of self-driving cars. This is just the beginning of a development over the course of which artificial intelligence will generate transformative products and services that will alter world trade patterns. Artificial intelligence holds enormous promise for our information civilization if we get the governance of artificial intelligence right. What makes artificial intelligence even more fascinating is that the technology can be deployed in a fairly location-independent manner. Cross-border trade in digital services that incorporate applied artificial intelligence into their software architecture is ever increasing. That brings artificial intelligence within the purview of international trade law, such as the General Agreement on Trade in Services (GATS) and ongoing negotiations at the World Trade Organization (WTO) on trade-related aspects of electronic commerce. The Dutch Ministry of Foreign Affairs commissioned this study to generate knowledge about the interface between international trade law and European norms and values in the use of artificial intelligence.

Artificial intelligence, EU law, Human rights, Transparency, WTO law

Bibtex

@Report{Irion2020b, title = {Prospective Policy Study on Artificial Intelligence and EU Trade Policy}, author = {Irion, K. and Williams, J.}, url = {https://www.ivir.nl/ivir_policy-paper_ai-study_online/ https://www.ivir.nl/ivir_artificial-intelligence-and-eu-trade-policy-2/}, year = {2020}, date = {2020-01-21}, abstract = {Artificial intelligence is poised to be the 21st century’s most transformative general-purpose technology that mankind has ever availed itself of. Artificial intelligence is a catch-all for technologies that can carry out complex processes fairly independently by learning from data. In the form of popular digital services and products, applied artificial intelligence is seeping into our daily lives, for example, as personal digital assistants or as autopiloting of self-driving cars. This is just the beginning of a development over the course of which artificial intelligence will generate transformative products and services that will alter world trade patterns. Artificial intelligence holds enormous promise for our information civilization if we get the governance of artificial intelligence right. What makes artificial intelligence even more fascinating is that the technology can be deployed in a fairly location-independent manner. Cross-border trade in digital services that incorporate applied artificial intelligence into their software architecture is ever increasing. That brings artificial intelligence within the purview of international trade law, such as the General Agreement on Trade in Services (GATS) and ongoing negotiations at the World Trade Organization (WTO) on trade-related aspects of electronic commerce. The Dutch Ministry of Foreign Affairs commissioned this study to generate knowledge about the interface between international trade law and European norms and values in the use of artificial intelligence.}, keywords = {Artificial intelligence, EU law, Human rights, Transparency, WTO law}, }

“Fake news”: False fears or real concerns? external link

McGonagle, T.
Netherlands Quarterly of Human Rights, vol. 35, num: 4, pp: 203-209, 2017

Abstract

“Fake news” has become a much-used and much-hyped term in the so-called “post-truth” era that we now live in. It is also much-maligned: it is often blamed for having a disruptive impact on the outcomes of elections and referenda and for skewing democratic public debate, with the 2016 US Presidential elections and Brexit referendum often cited as examples. “Fake news” has also been flagged for fuelling propaganda and “hate speech” and even violence. “Pizzagate” is an infamous example of exceptional circumstances in which a false news story had a central role in a shooting incident. In December 2016, a man in Washington D.C. took it upon himself to “self-investigate” a story (a completely unfounded conspiracy theory) that the Hillary Clinton campaign team was running a paedophile ring from the premises of a pizzeria. Shots were fired and he was arrested and charged with assault and related offences. Given all this bad press, it is perhaps little wonder that “fake news” has become a major preoccupation for international organisations, national law- and policy-makers, the media and media actors, civil society and academia. But what exactly is “fake news” and what is all the fuss about? In addressing these questions, this column will also consider historical and contemporary perspectives on the term and its relationship with human rights.

Fake news, frontpage, Human rights, Journalistiek, Mediarecht, post-truth era

Bibtex

@Article{McGonagle2017h, title = {“Fake news”: False fears or real concerns?}, author = {McGonagle, T.}, url = {http://journals.sagepub.com/doi/full/10.1177/0924051917738685}, doi = {https://doi.org/10.1177/0924051917738685}, year = {2017}, date = {2017-12-05}, journal = {Netherlands Quarterly of Human Rights}, volume = {35}, number = {4}, pages = {203-209}, abstract = {“Fake news” has become a much-used and much-hyped term in the so-called “post-truth” era that we now live in. It is also much-maligned: it is often blamed for having a disruptive impact on the outcomes of elections and referenda and for skewing democratic public debate, with the 2016 US Presidential elections and Brexit referendum often cited as examples. “Fake news” has also been flagged for fuelling propaganda and “hate speech” and even violence. “Pizzagate” is an infamous example of exceptional circumstances in which a false news story had a central role in a shooting incident. In December 2016, a man in Washington D.C. took it upon himself to “self-investigate” a story (a completely unfounded conspiracy theory) that the Hillary Clinton campaign team was running a paedophile ring from the premises of a pizzeria. Shots were fired and he was arrested and charged with assault and related offences. Given all this bad press, it is perhaps little wonder that “fake news” has become a major preoccupation for international organisations, national law- and policy-makers, the media and media actors, civil society and academia. But what exactly is “fake news” and what is all the fuss about? In addressing these questions, this column will also consider historical and contemporary perspectives on the term and its relationship with human rights.}, keywords = {Fake news, frontpage, Human rights, Journalistiek, Mediarecht, post-truth era}, }