
Gilad Abiri
Assistant Professor of Law at Peking University School of Transnational Law, teaching Constitutional Law, Data Privacy, and Law and Technology
Papers by Gilad Abiri
Recommendation algorithms, which curate our social media feeds and search engine results, are the prism through which we acquire information in our digital age. Critics ascribe many social and political woes—such as the prevalence of misinformation and political division—to the fact that we view our world through the personalized and atomized prism of recommendation artificial intelligence. The way the great powers of the internet—the United States, the European Union, and China—choose to regulate recommendation algorithms will undoubtedly have a serious impact on our lives and political well-being. On December 31, 2021, the Cyberspace Administration of China, a governmental internet watchdog, published a bombshell regulation directed at recommendation algorithms. The regulation, which came into effect in March 2022, substantially increases the control and autonomy of Chinese netizens over their digital lives. At the same time, it greatly increases the control the Chinese government has over these algorithms. In this timely essay, we analyze the content of the regulation and situate it in its historical and political context.
The first set of legitimation strategies mimics legal institutions such as bureaucracies and constitutional courts. These attempts fundamentally misunderstand why law is legitimated in modern societies. Platforms seem to think that merely adopting legal symbolism and forms can, on its own, provide legitimation. However, law in modern societies is legitimated not only through procedural and formal justice, but also because it exists in the context of a state and is perceived as authored by the political community. By stressing the "how" of law, platforms miss the fundamental question: Why should we allow Mark Zuckerberg, ByteDance, or the Twitter board to possess such incredible power over the digital public sphere?
The second set of legitimation strategies focuses on mimicking powerful non-legal organizations such as large tech firms and mass media outlets. These attempts fail in a different way. By echoing the arguments of corporations and civil society organizations, platforms do attempt to answer why they should exert power over the public sphere. However, these answers are fundamentally flawed: Social media platforms are too public to be fully private and too profit-driven to be trusted to act in the public interest.
Thus, social media platforms are currently unable to resolve their legitimation crisis. Yet they are unlikely to disappear: the alternative is not a future without platforms but a future with delegitimated, tyrannical ones. We believe that the failures described in this Article reveal that successfully explaining why platform power is legitimate will require a significant change in the way social media platforms operate, conduct their business, and ultimately conceive of themselves.
In this Article, we argue that truth-based solutions—such as fact-checking—are ineffective because they do not reach the heart of the problem. Both scholars and policymakers share the implicit or explicit belief that the rise of digital fake news is harmful mainly because it spreads false information, which lays a rotten groundwork for both individual decisions and collective policymaking. While acknowledging the importance of accurate information, we argue that the main problem with fake news is not that it is false. Instead, what is distinctly threatening about digital misinformation is its ability to circumvent and undermine common knowledge-producing institutions, including the sciences, courts, the medical and other professions, and the media. The fundamental challenge is the fragmentation of our societies into separate epistemic communities, which shakes the factual common ground on which we stand. What good is fact-checking if twenty percent of the population believes that the fact-checkers are chronic liars? We call this new reality the Digital Epistemic Divide.
Epistemic fragmentation of society is both more fundamental and more dangerous than the harms of false information as such. It is more fundamental because once a society is epistemically fragmented, the lack of trust in common epistemic authorities will inevitably breed disagreement over factual beliefs. It is more dangerous because it can exacerbate political polarization. It is one thing to believe that the other side of a political issue holds wrong values and preferences; it is quite another to believe that they are either constantly lying or deeply manipulated.
To bridge the Digital Epistemic Divide, we must go beyond truth-based solutions and implement policies to reconstitute societal trust in common epistemic authorities.
Until recently, American free speech norms dominated the content moderation policies of digital media platforms. First Amendment norms are extremely resistant to censorship and therefore very protective of offensive and hateful speech. In recent years, however, this influence has been gradually eroded by what could be called European free speech norms, which are significantly more comfortable with directly regulating speech to prevent social and political harm. The epitome of the European approach is Germany's Network Enforcement Act (NetzDG), which requires platforms to enforce domestic hate speech laws within that country's borders. This general transformation, and NetzDG especially, has been met with nearly unanimous rebuke from digital free speech scholars, who argue that such measures might steer platforms into creating a public sphere in which speech is stunted and the values of free speech are not upheld.
While acknowledging (to some extent, at least) the strength of these critiques, this Article argues that they may well be outweighed by how laws like NetzDG respond effectively to one of the major challenges of the new digital platform public sphere: its detachment from civil society and the public discourse of particular democratic societies. Digital platforms are moderating the digital public sphere from nowhere. This disconnection between the new information gatekeepers (the platforms) and the circumstances and needs of democratic states undermines the social conditions necessary for a healthy democracy. Specifically, the rise of a transnational digital sphere dominated by major digital platforms undermines traditional media gatekeepers' capacity to moderate the public debate. Without this guiding hand and without any legal regulation, public debate quickly devolves, as is evident from the increasingly divisive impact online hate speech has on democratic societies. When tried-and-true social mechanisms, such as traditional media, are rendered ineffective, it makes sense to counteract the effects of hate speech and stabilize public debate by turning to legal speech regulation such as NetzDG. In at least this sense, we are likely to be better off with an internet influenced by European norms.