2023
Misinformation, online hate speech, mishandling of personal data, and other societal problems associated with the rise of digital communications and social media have led in recent years to growing concerns over threats to epistemic rights and the very foundations of democratic societies. These social harms stem, either directly or indirectly, from the operations of commercially run, for-profit internet companies that have grown massively in the last two decades, accumulating unprecedented communication and economic power. It is against this backdrop that we are witnessing a 'turn to regulation in digital communication' (Flew & Wilding, 2021, p. 48) and a plethora of policy initiatives in many nations around the world aiming to curb the power of the largest digital platforms and offer regulatory remedies to the
In a digital context that is profoundly transforming social interactions in different domains and at different levels, the label "communication rights" (CRs) has emerged in recent years suggesting the need to better articulate the principles and rights pertaining to communication processes in society: principles and rights which should be recognized as guidelines to set normative standards of behavior in such a transformed communicative environment.
International Communication Gazette, 2019
The regulation of internet intermediaries such as Facebook and Google has drawn increasing academic, journalistic and political attention since the ‘fake news’ controversies following the UK’s Brexit vote and Donald Trump’s election victory in 2016. This article examines the pressure for a new regulatory framework for the information intermediaries both within and outside the media industry, notably in Europe, noting that the range of issues thrown up by the operations of the information intermediaries now engages a wider focus than media policy per se, including data and privacy policy, national security, hate speech and other issues. The concept of ‘fake news’ emerges as only one of the drivers of policy change: the dominance of information intermediaries such as Facebook and Google in respect of the digital advertising market and data monopolisation may be even more significant. The article asks whether a new concept of ‘information utilities’ may be appropriate to capture their incre...
Information, Communication & Society, 2020
David Kaye, UN Special Rapporteur on Freedom of Expression

On the shelf beside my desk rest a number of recent and already dog-eared books about the digital age: Consent of the Networked by Rebecca MacKinnon, The Attention Merchants by Tim Wu, China's Contested Internet by Guobin Yang, Twitter and Tear Gas by Zeynep Tufekci, The Net Delusion by Evgeny Morozov, Weapons of Math Destruction by Cathy O'Neil, and Dragnet Nation by Julia Angwin. Stacked nearby are countless nongovernmental organization reports and academic studies about the ways in which the Internet is affecting the enjoyment of human rights, with titles like Tainted Leaks (Citizen Lab), Online and on All Fronts (Human Rights Watch), Let the Mob Do the Job (Association for Progressive Communications), Troops, Trolls and Troublemakers (Oxford Internet Institute), and ¿Quién defiende tus datos? (Derechos Digitales).

What connects these disparate publications? Apart from all having a focus on the individual's experience in the digital age, not a single one tells a hopeful story about personal autonomy, freedom of expression, security, or privacy online. Not one of these publications highlights the ways in which the Internet has opened broad avenues of communication among cultures, permitted the sharing of information and ideas across borders, and offered vast expanses of knowledge that can be traversed from link to link and thread to thread online. Some of them focus on the repression of governments that criminalize expression online or conduct surveillance of their citizens and others. Some drill down into the ways in which private companies govern quasi-public space, share information with governments seeking access to their networks, or simply give the false impression of privacy or security in the shadow of what Peter Swire has called "The Golden Age of Surveillance."

Are there stories about private actors expanding or simply protecting human rights? Of course, they do exist. Indeed, the story that dominated about twenty years of public discourse, from about 1990 to 2010, was the story of private innovation breaking through old barriers of distance to develop technologies that have created and then forever altered the "information society." Those stories are still told, ones about atheists in religious societies using the Internet for connection, sexual minorities going online to gain knowledge about health and well-being, and critics and dissenters using the tools of social media to share information and organize for protest.

The truth is, the books on my shelf and the publications in my in-box reflect changes in the way most stakeholders now think about the Internet. According to an increasingly dominant narrative, the Internet is a place of darkness, danger, propaganda, misogyny, harassment, and incitement, which private actors are doing little to ameliorate. (Where do you read these complaints? On the Internet!) Worldwide, people are worried, legislators are energized, and the gears of regulation have been engaged. An era of Internet laissez-faire is over, or at least coming to a close.

To be sure, repressive governments have been imposing costs on private actors in digital space for many years, especially those actors, such as telecommunications and Internet service providers, that are subject to licensing rules as a condition of participation in a local market.
Many perform a kind of regulation by denying entry into markets; blocking, filtering, or throttling digital traffic; providing beneficial network access to friends and limiting that access to critics; and performing other tricks of the digital censor. But the regulatory buzz is not limited to the repressive. Some rules, such as those pertaining to intellectual property, like the Digital Millennium Copyright Act in the United States, have been in place for decades, giving some private actors the power to shape, in often very problematic ways, the nature of expression and creation online. Recent years have shown deepening interest in regulation, as governments are eager to gain some measure of control over Internet space in an era of digital distress.

European institutions are in the lead, developing regulatory models that may be replicated worldwide. The European Court of Justice has taken on personal reputational control with the right to be forgotten (or the right to erasure), outsourcing its implementation to Google. The European Court of Human Rights has danced around the possibility of intermediary liability for third-party expression. The United States and Europe have been in deep negotiations over the future of privacy ever since the collapse of the Safe Harbour standards in the treatment of personal data of Europeans. The European Union has imposed a code of conduct for social media companies and search engines to follow in the context of extremist and terrorist content, and it seems poised at the time of this writing to enter into the fraught space of disinformation and propaganda, so-called "fake" or "junk" news.

Amid the calls for regulation in democratic societies and the acts of government repression elsewhere, there is one undeniable fact about the digital age: at the center is the private company. Whether it's the telco providing digital access, or the social media company providing space for conversation, or one of any number of other actors in sibling industries, private companies in the digital age exercise enormous control. They connect users and providers of information and ideas. They sell user data and user attention. They moderate (or regulate) user speech. They cooperate with or resist government demands. In short, they often are either the governors of space visited by billions or the mediators between the individual and the government. This is a massive role and, depending on how you see it, a vital responsibility. Just whose responsibility is subject to debate.

This volume, a collection of studies by some of the leading thinkers at the nexus of private action and public regulation in the digital age, introduces the most difficult legal and policy questions of the digital age. It presents theoretical insights about the transformations brought about by private actors. It offers specific examples of private power that implicates the rights of individual users. It provides legal frameworks for all stakeholders to think through the problems of human rights protection in an environment so dominated by private companies. All of this the volume does without either the hysteria of the moment's particular crises or, at the other end of the spectrum, a jargony disconnection from the experience of real human beings.
The real challenge for the next generation of legislators and regulators, particularly those of good faith operating in democratic societies, is to shape new laws that meet two conditions. First, at a minimum, they must promote and protect everyone's rights, such as the right to seek, receive, and impart information and ideas of all kinds, regardless of frontiers and through any media, as provided by Article 19 of the International Covenant on Civil and Political Rights. They must be compliant with international human rights norms, protecting users who enjoy rights. Second, law must protect users, and society as a whole, from the harms caused by the special features of the digital age. That is easier said than done, perhaps, but the preservation of the original vision of the Internet should be at the top of all stakeholders' agendas moving forward. This volume guides us toward that goal.

The authors met at the Danish Institute for Human Rights in January 2017 to share their initial drafts and worked together to shape the overall direction of the book. A note of thanks goes to colleagues at the Danish Human Rights Institute, Marc Bagge Pedersen, Karen Lønne Christensen, and Emilia Piechowska, who have provided crucial practical and editorial assistance; and Anja Møller Pedersen for her contribution to the authors' workshop. I am also indebted to the MIT Press for their kind and professional assistance all the way from idea to final book, to the anonymous reviewers, and to series editor Sandra Braman for her support and substantive input ever since the idea of the book first materialized in 2016. I am grateful for the generous support from Knowledge Unlatched and the Danish Council for Independent Research, which enabled the open access edition of this book. It is my hope that it will benefit scholars and human rights practitioners around the globe.

This book is concerned with the human rights implications of the "social web." 1 Companies such as Google, Facebook, Apple, Microsoft, Twitter, LinkedIn, and Yahoo! play an increasingly important role as managers of services and platforms that effectively shape the norms and boundaries for how users may form and express opinions, encounter information, debate, disagree, mobilize, and retain a sense of privacy. The technical affordances, user contracts, and governing practices of these services and platforms have significant consequences for the level of human rights protection, both in terms of the opportunities they offer and the potential harm they can cause. Whereas part of public life and discourse was also embedded in commercial structures in the pre-Internet era, the current situation is different in scope and character. The commercial press that is often referred to as the backbone of the Fourth Estate was supplemented by a broad range of civic activities and deliberations (Elkin-Koren and Weinstock Netanel 2002, vii). Moreover, in contrast to today's technology giants, the commercial press was guided by media law and relatively clear expectations as to the role of the press in society, meaning an explicit and regulated (although imperfect on many counts) role in relation to public deliberation and participation. In contrast to this, the platforms and services that make up the social web are based on the double logic of public participation and...
The paper examines how regulatory policies for new media are posing barriers to equitable and open access to digital information. In particular, it considers the relevance and impact of computer-mediated communication, its potential for the democratization of freedom of expression, and the requirements for the continuation of an unrestricted digital information environment. Broadband access has in fact become a prerequisite to satisfy a wide range of information needs. As a consequence, to enable communication and the use of information across electronic networks, it is also necessary to guarantee regular and effective Internet access. Considering this scenario, the paper discusses and analyses the functional relationship between modern communication technologies and legislative reforms in the area of digital communications that threaten to reduce online freedoms.
The Political Quarterly, 2021
Communications
In retrospect, the communication world was so different in February 2020, when scholarly members of the Euromedia Research Group applied to become a Jean Monnet Network focusing on media and platform policy (EuromediApp). Shortly after the application was sent off, Covid-19 conquered the planet and jeopardized the main objective of networks, namely, to strengthen ties between network nodes. When the three-year network started operating in October 2020, it immediately became clear that dominant features of the pandemic would be fake news and harmful content online. Additionally, it was evident that digital platforms would play an even more central role in opinion-shaping during lockdowns than they had before. During the following three years, it turned out that the concept of the EuromediApp network was smart. Focusing on digital platforms, their relations to mass communication, and their performance regarding democracy and human rights allowed the network to organize cutting-edge workshops and conferences. For these events, it invited scholars to contribute scientific state-of-the-art texts and presentations on this fast-moving topic. This special issue of Communications consolidates the learnings from that journey, offering a timely treatment of burning issues in digital platform governance. It explores questions such as how to limit hate speech and other harmful content online, how to hold digital platforms accountable for publishing it, how to accommodate automated decision-making (a.k.a. artificial intelligence), and how to economically balance platform profits achieved at the expense of mass media. Several attempts have been made over the last years to allow digital platform communication to thrive within the boundaries of the wider policy concept of
Journal of Information Policy, 2020
The rights-based perspective on ethical and political questions presented by the new digital media has recently regained attention in academic and political debates. As the recent report by the United Nations Secretary-General on digital cooperation notes, digital technologies do not only help to advocate, defend, and exercise human rights, but they are also used to suppress, limit, and violate human rights; therefore, "the internet cannot be an ungoverned or ungovernable space." 1 Yet there is no consensus on what human rights in the digital realm are, or on who should take the lead in governing them in the increasingly complex media and communications landscape. The more focused concept of communication rights, in turn, already has a varied history, starting with the attempts of the Global South in the 1970s to counter the Westernization of communication. 2 The connections between human rights and media policy have also been addressed throughout the past decades, especially in international contexts. 3 Communication rights have also been invoked with more specific aims, for instance as cultural rights, the rights of disabled persons, or the rights of sexual minorities in today's communication environment. 4
Media and Communication, 2020
This article dives into the ongoing debate on how to address concerns about personal safety and respect online, as well as the consequences of exposure to polarizing and otherwise harmful information, while at the same time safeguarding the democratic essentials of freedom of expression and participation. It does so by examining the issue from a less common angle, namely who governs the Internet and the platforms where much of the toxic material appears. By applying a model of free speech regulation conceptualized by legal scholar Jack Balkin (2018a, 2018b), the article explores different theoretical future scenarios of Internet governance involving three main players, namely governments, private companies, and speakers. The analysis finds that depending on which player is at the forefront, the outcomes from the standpoint of participation and freedom of speech may be drastically different. While there is potential for transformation that can enable more ownership, transparency, and...
2019
Social media platforms have become powerful enough to cause perceptible effects in societies on a global scale. They facilitate public discussion, and they work with excessive amounts of personal data, both activities affecting human rights and the rule of law. This specific service requires attention from the regulator: according to this paper, a new legal category should be created, with clear definitions and a firm delineation of platforms' rights and responsibilities. Social media companies should not become responsible for third-party content, as this would lead to over-censorship, but they should have the obligation to create and maintain safe and secure platforms on which human rights and the rule of law are respected. In particular, they should maintain the transparency and viewpoint-neutrality of their services, as well as protect the personal data of their users. The paper sheds light on the similarities to and differences from traditional media, and sets out detailed policy recommendations.