Hate Speech On Social Media Networks: Towards A Regulatory Framework?
Natalie Alkiviadou
To cite this article: Natalie Alkiviadou (2019) Hate speech on social media networks: towards a regulatory framework?, Information & Communications Technology Law, 28:1, 19-35, DOI: 10.1080/13600834.2018.1494417
ABSTRACT
Social networks serve as effective platforms in which users' ideas can be spread in an easy and efficient manner. However, those ideas can be hateful and harmful, some of which may even amount to hate speech. YouTube, Facebook and Twitter have internal regulatory policies in relation to hate speech and have signed a Code of Conduct on the regulation of illegal hate speech with the European Commission. This paper looks at the issue of tackling hate speech on social networks and argues that, notwithstanding the weaknesses of internal policies and their implementation, their existence, as facilitated by the Code of Conduct, serves as a light at the end of the Internet hate tunnel where issues of multiple jurisdictions as well as technological realities, such as mirror sites and more, have resulted in the task of online regulation being more than a daunting one.

KEYWORDS
Social media; hate speech; non-discrimination; Internet; code of conduct on illegal hate speech
1. Introduction
Social networks are the frenzy of the twenty-first century. The latest statistics show that there are 2.19 billion Facebook users,1 1.57 billion YouTube users2 and 336 million Twitter users.3 Social networks facilitate borderless communication, allow for, inter alia, political, ideological, cultural and artistic expression, permit an inflow of daily news, raise awareness on human rights violations and offer a quick and cheap solution to inviting people to your birthday party. At the same time, social networks constitute platforms through which hateful rhetoric is spread4 and normalised and minority groups are systematically targeted, thereby affecting today's world on a micro (individual), meso (group) and macro (societal) level. Hate existed before the Internet and social networks, but the emergence of the Internet and the subsequent creation of social networks have added new dimensions to the already complex topic of hate speech.5 An important observation needs to be made from the outset, namely the distinction between examining hate on the Internet and examining hate on social networks. The Internet is a global platform which allows for the creation of, amongst others, social networks, news portals and chat rooms. As noted by the Secretary-General of the United Nations, Internet use for the objective of promoting hateful expression is one of the most significant human rights challenges that have come about with technological developments.6 This paper will not look at the regulation of the Internet in its entirety but will, instead, focus on the ever-powerful tools found on the Internet, namely social networks, which, as stated by one commentator, represent 'incredible and unique communication opportunities.'7 Particular attention needs to be paid to the issue of hate on such networks, rather than simply treating them as part of the general discussion on Internet hate regulation, for several reasons.
Firstly, the sheer number of users of such networks on a global scale results in the need to pay particular attention to this digital vehicle. Secondly, social networks are used by individual users but also by organised and semi-organised groups to promote hateful rhetoric and target the victims of such rhetoric. Thirdly, social networks come with some kind of content regulation, which must be assessed for purposes of ascertaining whether, and if so to what extent, this regulation contributes to the effective tackling of online hate. However, tackling hate on social media is a complex matter with an array of issues that need to be dealt with. Firstly, as is the case with hate speech more generally, there is no universally accepted definition, probably given the fact that 'there is no universal consensus on what is harmful or unsuitable'8 in this sphere. This means that there cannot be coherence amongst national legal frameworks, which, given the nature of the Internet as a global medium, is necessary if hate speech is to be regulated in an effective manner.9 Further, determining the best recipe for tackling hate speech on social media is a multi-faceted process. Is it regulation and prohibition? What type of regulation are we contemplating? Is it digital prohibition or criminalisation? Is regulation more generally irrelevant or insufficient? Should we focus on other innovative means to ensure sustainability such as, for example, the promotion of counter-narratives? Is it either or both? The position of the Council of Europe, which is the only institution to draw up a legal document on online (racist and xenophobic) hate, is clear. In the Preamble to the Additional Protocol to the Convention on Cybercrime, concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems, reference is made to the 'risk of misuse or abuse of such computer systems to disseminate racist and xenophobic propaganda' and, although sensitive to the issue of free expression,10 it was decided, as demonstrated in the title, to criminalise digital acts of
racism and xenophobia. It has been argued that the ‘harshest’ of approaches to online
hate speech, namely criminalisation, has been taken by the Council of Europe for fear
6
The Secretary-General, ‘Preliminary Representation of the Secretary-General on Globalization and Its Impact on the Full
Enjoyment of All Human Rights’ paras 26–28, U.N. Doc A/55/342 (Aug 31 2000).
7
Leandro Silva, Mainack Mondal, Denzil Correa & Fabrício Benevenuto, ‘Analyzing the Targets of Hate in Online Social
Media’ Proceedings of the Tenth International AAAI (Association for the Advancement of Artificial Intelligence) Confer-
ence on Web and Social Media (2016) 687.
8
Irene Nemes, ‘Regulating Hate Speech in Cyberspace: Issues of Desirability and Efficacy’ (2010) 11 Information and Com-
munications Technology Law 3,195.
9
For analysis of the issue of jurisdiction and online hate regulation look at: Natalie Alkiviadou, ‘Regulating Internet Hate: A
Flying Pig?’ (2016) 7 Journal of Intellectual Property, Information Technology and E-Commerce Law 3.
10
Preamble to the Additional Protocol to the Cybercrime Convention states that the Contracting Parties are ‘mindful to the
need to ensure a proper balance between freedom of expression and an effective fight against acts of a racist and xeno-
phobic nature.’
INFORMATION & COMMUNICATIONS TECHNOLOGY LAW 21
of such phenomena causing social unrest and damage to the institution’s mandate which
is peace and unity.11 As such, this paper will look at the issue of hate speech on social net-
works and the tools available for tackling this speech which, to date, embrace the regu-
lation of such speech through its removal by the networks themselves and through
criminalisation. In light of the Code of Conduct agreed upon in 2016 between the Euro-
pean Commission and four IT companies for the regulation of hate speech online, and
given that three of those companies constitute the leading social networks of our time,
this article will pay particular attention to those, namely Facebook, Twitter and YouTube.
speech is ever present on the Internet and has an ability to cause harm to its targets.19 The Council of Europe Committee of Ministers Declaration on freedom of communication on the Internet20 underlined the necessity to ensure freedom of speech and freedom of information, but it also stressed that 'freedom of communication on the Internet should not prejudice the human dignity, human rights and fundamental freedoms of others, especially minors.' However, drawing lines between 'competing' rights and freedoms and working with relatively abstract notions such as harm and dignity is not straightforward. The way such notions have been looked at by law and by the regulatory policies of social networks will be discussed in section three below.
When looking at the regulation of online hate on a practical scale, there is a central difference between looking at the Internet in its entirety and looking at social networks in particular. With the former, issues of jurisdiction may arise in relation to, for example, the publication of material which is accessible in a country where it is illegal but legal in the country hosting the website. Resolving the issue of whether material is impugned or not requires cooperation and agreement between the two countries involved, as was seen in, amongst others, Yahoo! Inc. v La Ligue Contre Le Racisme et L'Antisemitisme et al.21 Regulation of social networks by social networks makes the issue of hate on such networks different from hate on many other Internet 'tools.' The three networks looked at in this paper have rules of their own vis-à-vis prohibited content and, as mentioned in the introduction, a Code of Conduct has been agreed between the European Commission and IT Companies in relation to the IT Companies' role in regulating hate speech. One commentator held that the Internet is 'even worse than a vandalised library because thousands of additional unorganised fragments are added daily by myriad cranks, sages and persons with time on their hands who launch their unfiltered messages into cyberspace.'22 On one level, that of the Internet in its entirety, this is partly accurate although, given the regulatory policies of social networks and the enhancement of such policies by the Code of Conduct, material may be unfiltered to begin with, but the possibility of it being removed does exist.
Although broad in the sense that it includes the justification of hatred as a form of hate speech, it is narrow in terms of content. More specifically, where is the protection of groups such as LGBTI (Lesbian, Gay, Bisexual, Transgender and Intersex) and disabled persons? This is a question that comes up in several documents, binding and non-binding, which seek to tackle the issue of hate, discussed in section three below. The Additional Protocol to the Convention on Cybercrime deals only with acts of a racist and xenophobic nature, the Framework Decision of the European Union chose to deal only with racism and xenophobia, and there is no counterpart to the International Convention on the Elimination of All Forms of Racial Discrimination dealing with the protection of victims of homophobic, biphobic or transphobic speech. Whilst the Disability Convention does exist, it does not tackle the issue of hate speech. Since international human rights law contends that all humans are born free and equal in dignity and in rights,27 why should only racist and xenophobic speech be prohibited by an international document? This malaise in the regulatory framework has led to what can be referred to as a hierarchy of hate, where protection against hate speech is granted to victims of only some 'genres' of hate speech. The policies and procedures of two out of the three social networks discussed in this paper opt for a much broader definition of hate speech, one which does not omit entire groups which are significant and systematic victims of hate speech (and not only of hate speech), namely LGBTI persons, as well as other groups including, but not limited to, disabled persons. Given the severity, in human rights terms, of ignoring marginalised groups such as LGBTI persons and openly prioritising a certain type of hate speech, this differentiation cannot be justified. Documents such as the Framework Decision and the Additional Protocol result in criminal penalties whereas the policy, terms and conditions of a social network site will result in the removal of a post or, in the worst-case scenario, the banning of a user. So, the community guidelines and terms of two out of the three largest social networks do not ignore groups such as LGBTI persons. However, the effects of their regulatory action are softer than the effects of, for example, the implementation of a national law transposing the EU's Framework Decision on Racism and Xenophobia.
24 Leandro Silva, Mainack Mondal, Denzil Correa & Fabrício Benevenuto, 'Analyzing the Targets of Hate in Online Social Media' Proceedings of the Tenth International AAAI (Association for the Advancement of Artificial Intelligence) Conference on Web and Social Media (2016) 688.
25 As stated by the ECtHR in Handyside v UK, Application no. 5493/72 (ECHR 1976).
26 Council of Europe's Committee of Ministers Recommendation 97 (20).
27 Article 1, Universal Declaration of Human Rights.
Facebook community standards refer directly to the removal of hate speech, defining it as: 'content that directly attacks people based on their race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender or gender identity or serious disabilities or diseases.' Further, organisations and people who are dedicated to promoting hatred against these protected groups 'are not allowed a presence on Facebook.'29
Twitter does not refer to hate speech but, instead, warns (rather than prohibits) users that they may be exposed to content that might be 'offensive, harmful, inaccurate or otherwise inappropriate … ' Importantly, its terms provide that it 'may not monitor or control the Content posted via the Services and, we cannot take responsibility for such Content.'30 The only prohibition is that of 'direct, specific threats of violence against others.'31 The case of Twitter is a paradox given that, as will be discussed later on, it is part of the Code of Conduct on Countering Illegal Hate Speech Online, which requires the IT Companies to, inter alia, remove such speech within 24 h of receiving a report. Therefore, both YouTube and Facebook include a larger sphere of potential victims of hate speech than key documents such as the International Convention on the Elimination of All Forms of Racial Discrimination, the Framework Decision on Racism and Xenophobia or the Additional Protocol to the Cybercrime Convention do. In addition, Facebook prohibits speech which 'attacks' people based on the aforementioned characteristics and YouTube prohibits speech which 'attacks or demeans' a group. Three issues arise here. Firstly, YouTube has the widest scope of prohibited activity as it also incorporates the demeaning of persons. Secondly, notwithstanding this widest scope, YouTube refers to the prohibition of speech against a certain group rather than a person belonging to that group. Does this mean that if a particular expression were directed against an individual who belonged to a group because of that group's characteristics (rather than against the group in its entirety), such speech would not be prohibited? Given the general content of the Code of Conduct between the European Commission and the IT Companies, elaborated on below, it is more likely that this is a matter of imprecise use of language. Thirdly, notwithstanding the Code of Conduct, Twitter seems to limit the control of speech on its network, unless this amounts to a direct and specific (rather than an abstract and generalised) threat of violence. This threshold is stricter than those of the legal documents referred to above, which bring about criminal penalties.
28 YouTube's Community Guidelines: <https://www.youtube.com/yt/policyandsafety/communityguidelines.html> [Accessed 1 May 2017].
29 Facebook's Community Guidelines: <https://www.facebook.com/communitystandards#hate-speech> [Accessed 2 May 2017].
30 Twitter's Terms of Service (Content): <https://twitter.com/tos?lang=en#usContent> [Accessed 1 May 2017].
31 Ibid.
shall declare an offence punishable by law all dissemination of ideas based on racial superiority or hatred, incitement to racial discrimination as well as all acts of violence or incitement to such acts against any race or group of persons of another colour or ethnic origin …44
In discussing this provision in Gelle v Denmark, the Committee on the Elimination of Racial Discrimination observed that:

it does not suffice, for purposes of Article 4 of the Convention, merely to declare acts of racial discrimination punishable on paper. Rather, criminal laws and other legal provisions prohibiting racial discrimination must also be effectively implemented by the competent national tribunals and other State institutions. This obligation is implicit in Article 4 of the Convention.45

Article 20(2) of the International Covenant on Civil and Political Rights provides that 'any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.' The above two documents were designed in a pre-Internet era and, thus, online hate was not a consideration. However, the provisions can be used to regulate hate speech found on the Internet as it is not the objective and effects of the phenomenon that have changed but, rather, the vehicle it uses for dissemination. The downside of these provisions is that, probably due to socio-historical reasons at the time of drafting, they only tackle racist and religiously discriminatory speech. The threshold of the two documents is similar, referring to hatred, discrimination and violence as a result of the impugned speech, with the International Convention on the Elimination of All Forms of Racial Discrimination also incorporating the prohibition of ideas of racial superiority.
On a European Union level, the central document that can be used for the criminalisation of hate speech is the Framework Decision on combating certain forms and expressions of racism and xenophobia by means of criminal law.46 Although this document does not directly define hate speech, it prohibits different forms of expression and acts that fall within the framework of 'Offences Concerning Racism and Xenophobia.' Further, this document does not tackle the issue of online activity, but neither does it exclude it. Article 1 therein, entitled 'Offences concerning racism and xenophobia', holds that:

(1) Each Member State shall take the measures necessary to ensure that the following intentional conduct is punishable:

(a) publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin; and

(b) the commission of an act referred to in point (a) by public dissemination or distribution of tracts, pictures or other material.
44 Article 4 (a), International Convention on the Elimination of All Forms of Racial Discrimination.
45 Gelle v Denmark, Communication no. 34/2004 (15 March 2006) CERD/C/68/D/34/2004, para. 7.3. This was reiterated in Jama v Denmark, Adan v Denmark and TBB-Turkish Union v Germany.
46 Council Framework Decision 2008/913/JHA of 28 November 2008 on combating certain forms and expressions of racism and xenophobia by means of criminal law.
So, the Protocol seeks to limit the liability of, for example, Internet Service Providers (ISPs) which had no intent for impugned material to be disseminated through their service. However, it leaves the interpretation of intent to be a question of national law. The fallibility of the intervention of ISPs in relation to hate speech was manifested in the request from Germany to Deutsche Telekom to prevent user access to the website of the revisionist Ernst Zündel. Although Deutsche Telekom accepted this request, users in the USA made the website's content available to German users through mirror sites. This reflects that, even if ISPs restrict available content, there are ways of overcoming this restriction and making material available again.48

As noted in section two of this paper, the European Union and the Council of Europe, in their respective documents, chose to focus solely on the criminalisation of racism and xenophobia, disregarding other phenomena which are present in the region today such as homophobia, biphobia and transphobia. Although some justification can be granted to older United Nations documents which opted to look solely at race and religion, given the particular social and historic contexts in which they were drafted, an analogous justification cannot be found with the Framework Decision, which was passed in 2008. This reality further reinforces the argument that a hierarchy of hate exists.

47 Explanatory Report to the Additional Protocol to the Cybercrime Convention, Para 25.
48 James Bank, 'Regulating Hate Speech Online' (2010) 24 Computers & Technology 3, 281.
On a Council of Europe level, in the landmark case of Delfi v Estonia,49 the ECtHR ruled that Internet intermediaries should remove defamatory comments against individuals and that Internet news portals may be liable for offensive commentary made available thereon. This case did not deal with social media, per se, but the central principles developed by the ECtHR therein could be transferable to a social media setting. More particularly, the applicant company was the owner of Delfi, one of the largest Internet news portals of Estonia. In 2006, it published an article entitled 'SLK Destroyed Planned Ice Road.' SLK was the abbreviation of a shipping company and L. was a member of its board and the sole shareholder at the time. The posting of the article led to 185 comments, 20 of which contained personal threats and offensive language against L. The ECtHR held that by finding Delfi liable for the defamatory comments, domestic courts were not in breach of Article 10, given that the comments were insulting and threatening, the portal was professionally managed and commercial, and the measures taken by the portal to avoid damage to L. were insufficient.50 The emphasis placed by the ECtHR on the importance of tackling hate speech is significant for the present discussion. More particularly, it held that:
… where third-party user comments are in the form of hate speech and direct threats to the
physical integrity of individuals, the member States may be entitled to impose liability on
Internet news portals if they fail to take measures to remove clearly unlawful comments
without delay, even without notice from the alleged victim or from third parties.51
As well as placing importance on the issue of hate speech, the ECtHR also underlined the degree of control that Delfi had over what was made available on the portal. More particularly, it held that Delfi exercised 'a substantial degree of control over the comments published on its portal'52 and that, because it was part of the process of making the comments on the portal public, it 'went beyond that of a passive, purely technical service provider.'53 Here, a parallel can be drawn with social media platforms including, amongst others, Facebook and YouTube, which are not merely technical but are part of the process of publishing third-party commentary through a notice and take down system, as in Delfi and in the case discussed immediately below. In line with the ECtHR's judgement, social media platforms could, therefore, be considered to have substantive control over the comments published on them, a point that positively correlates, in Strasbourg's view, with a duty to remove material when it contains hate speech and threats.
In 2016, the ECtHR passed another judgement in relation to intermediary liability on the Internet. The applicants were two Hungarian websites, MTE and INDEX. MTE was a self-regulatory body of Hungarian Internet content providers and INDEX was a large Hungarian news portal. Both allowed user-generated commentary. In 2010, MTE published an opinion about two real estate management websites. Later on, INDEX reproduced the opinion. Comments were published by users against the estate management websites, both on MTE's website and on INDEX's portal. However, in this case, the Court found a violation of Article 10 and differentiated it from Delfi by noting that the comments in the case against Hungary were 'notably devoid of the pivotal element of hate speech'54 whilst MTE was a regulatory rather than a commercial body, and its professional nature 'was unlikely to provoke heated discussions on the Internet.'55 Of paramount importance is the emphasis placed by the Court on the existence of hate speech amongst the comments as a central indicator for a non-violation of Article 10. Moreover, in both cases, the applicants had a notice and take down system. In Delfi, however, this was deemed insufficient as it allowed the material to remain publicly available for six weeks, causing damage to the individual targeted.56 In MTE, the Court found that such a system was a good way to balance conflicting rights and denoted the lower threshold of efficacy of such a system if hate speech was not part of the commentary.57 The implication in Delfi on the efficacy of a control system is significant in the ambit of an online hate discussion insofar as it imposes a strict responsibility on news portals to rigorously monitor and swiftly take down hate speech. Following this line, a notice and take down procedure on social media does not in itself demonstrate efficiency and sufficiency; it also needs to be quick so as to limit the harm done to the targeted person or persons. This is, in any event, set out by the Code of Conduct, which requires IT companies to review reported material within 24 h.

49 Delfi AS v Estonia, App. No 64569/09 (ECHR 16 June 2015).
50 ibid para. 156.
51 ibid para. 100.
52 ibid para. 153.
53 ibid para. 146.
4.3. Towards enhancing the legal regulation of hate speech on social media in the European Union
A relatively recent step taken on a European Union level to enhance the effectiveness of the legal regulation of hate speech on social media is the proposed amendment to the Audiovisual Media Services Directive.58 The proposal was adopted by the European Commission in 2016 and incorporates, inter alia, provisions to prohibit hate speech for purposes of aligning the Directive with the Framework Decision on Racism and Xenophobia.59 In this sense, the Directive will prohibit the transfer of material that incites to violence and hatred directed against a group of persons or a member of such a group defined by reference to sex, race, colour, religion, descent or national or ethnic origin. In addition to the Framework Decision on Racism and Xenophobia, which tackles hate speech more generally, and the Code of Conduct agreed between the four IT companies and the European Commission, this step demonstrates the seriousness that the European Union seems, slowly, to be attaching to combating hate speech, including online hate speech.
The IT Companies are to review the majority of valid notifications for removal of illegal hate speech in less than 24 h and remove or disable access to such content, if necessary. At the end of 2016, the European Commission conducted the first monitoring exercise to ascertain whether the social networks (YouTube, Twitter and Facebook) were doing what had been agreed in the Code of Conduct. For a period of six weeks, a total of twelve civil society organisations from nine different Member States reported a total of six hundred cases of what they considered to be illegal hate speech online to the IT Companies and recorded the responses, actions and timing of responses and actions. Out of the six hundred notifications, in one hundred and sixty-nine cases (28.2%) the content was removed. Facebook removed the content in 28.3% of cases, Twitter in 19.1% and YouTube in 48.5%. Further, in 40% of the cases, IT Companies reviewed the notification on the same day and in 43% on the day after. Here, it must be reiterated that the Code of Conduct requests that content is dealt with within 24 h, something which evidently worked in less than half of the cases. Also, YouTube and Twitter were keener to remove content reported by trusted reporters.61 This status is given to particular individuals who are, for example, members of organisations with a particular expertise on the issue of hate speech and are, thus, considered to be trusted as reporters (at least more trusted than regular users). However, the vast majority of users of social networks are normal users who still come across hate speech and wish to report it. This over-reliance on users is a large obstacle in relation to hate speech regulation on social networks. By bringing in 'trust issues' of regular network users, as was demonstrated in the first monitoring cycle described above, YouTube and Twitter reduced the effectiveness of the process.
60 Facebook's Community Guidelines: <https://www.facebook.com/communitystandards#hate-speech> [Accessed 2 May 2017].
61 YouTube: removal of 29% of content if reported by a normal user and 68% if reported by a trusted user; Twitter: removal of 5% of content if reported by a normal user and 33% if reported by a trusted user: <http://webcache.googleusercontent.com/search?q=cache:VckMt2f4jiEJ:ec.europa.eu/newsroom/document.cfm%3Fdoc_id%3D40573+&cd=1&hl=en&ct=clnk&gl=cy> [Accessed 2 May 2017].
The second monitoring period was launched in March 2017 and lasted for a period of seven weeks. An evaluation was carried out by NGOs and public bodies in a total of twenty-four Member States. This monitoring exercise reflected that the social networks had made significant progress in terms of their commitments under the Code of Conduct. For example, social networks removed 59% of the reported content, which was more than double the percentage of the previous monitoring period. Further, the proportion of notifications reviewed within 24 h improved from 40% to 51%.62 A third monitoring exercise commenced in November 2017 and was completed by December 2017. On average, IT companies removed 70% of the prohibited content and in 81.7% of the cases did so in less than 24 h. By the third monitoring period, the trusted flagger issue, as described above, had improved, as the only issue determined was that of feedback. More particularly, Twitter and YouTube provided more feedback to trusted flaggers than to normal users.63
The process of reporting hate speech on social networks has been described by one commentator (in the context of YouTube in particular) as an 'over policing of hate speech that unfairly infringes on users' freedom of expression.'64 It is argued that the network handlers responsible for reviewing reported material may remove it 'merely because they disagree with the viewpoint of the speaker, no matter how appropriate others might find the content.'65 Although this commentator refers to past incidents where YouTube has been found to restrict expression arbitrarily, the results of the first monitoring period of the Code of Conduct discussed above demonstrate that, in most cases, expression reported as hate speech by civil society organisations was not readily removed by the IT companies. In discussing the alleged over-policing, it was argued that, instead of a private review system, a user objecting to material should publicly comment on it for purposes of allowing public discussion, which the handler of the network could take into account before making a public decision which he/she explains. This is argued on the basis of transparency and the involvement of the local community in removing hate speech from social networks.66 In theory, this might seem like a good idea but, in practice, there are too many obstacles to allow it to materialise effectively. These could include the potential wish of the reporting user to remain anonymous, the endless and insubstantial commentary that could come under a public report, the use of the commentary by haters to promote their speech further and the sheer amount of material on social networks, particularly those such as YouTube, Facebook and Twitter that operate on a global scale, which makes considering even private reports a tricky issue, let alone a public report and an array of comments. Also, as argued by Oboler, the CEO of the Online Hate Prevention Institute,67 'the longer the content stays available, the more damage it can inflict on the victims and empower the perpetrators.'68 Although
62 European Commission Press Release: Countering online hate speech: Commission initiative with social media platforms and civil society shows progress (1 June 2017) <http://europa.eu/rapid/press-release_IP-17-1471_en.htm>.
63 European Commission Fact Sheet: Code of Conduct on countering illegal hate speech online: Results of the 3rd monitoring exercise (January 2018) <http://webcache.googleusercontent.com/search?q=cache:OQDS0SZGbYAJ:ec.europa.eu/newsroom/just/document.cfm%25253Fdoc_id%25253D49286+&cd=1&hl=en&ct=clnk&gl=cy> [Accessed 12 June 2018].
64 Lashel Shaw, 'Hate Speech in Cyberspace: Bitterness without Boundaries' (2012) 25 Notre Dame Journal of Law, Ethics & Public Policy, 299.
65 ibid.
66 ibid 302.
67 Online Hate Prevention Institute: <http://ohpi.org.au/> [Accessed 12 June 2018].
68 UNESCO 'Countering Online Hate Speech' (2015 UNESCO Publishing): <http://unesdoc.unesco.org/images/0023/002332/233231e.pdf> [Accessed 1 May 2017].
5. Conclusion
The Code of Conduct between the European Commission and the IT Companies is an innovation in terms of regulating hate on social networks in the sphere of illegal hate speech. The results of the first monitoring period were not positive in terms of how seriously YouTube, Twitter and Facebook took their commitments under the Code. Although the second monitoring period and, to a greater extent, the third monitoring period did reflect an improvement in the commitment of social networks, there are still fundamental problems in relation to the role and enforcement of the Code of Conduct on Illegal Hate Speech. Firstly, although there was an improvement in the speed of review and in the removal of content by the third monitoring cycle, it must be underlined that the social networks were aware of the monitoring cycles and of the organisations and persons involved in the process. We have no data in relation to how they respond to their duties under the Code when they are dealing with reports from users beyond a monitoring exercise. Secondly, as elaborated above, the enforcement of the Code of Conduct is absolutely reliant on the knowledge, will and intention of users. IT companies have to act upon hate speech only after it has been reported. Moreover, this Code of Conduct has been agreed between the companies and the European Commission, with the Commission, of course, monitoring its implementation in relation to online hate in Member States but not beyond. As such, this is an instrument for the EU. It is, nevertheless, a template that could be used by other countries and regions. Either way, regulation by social networks and the role of the Code of Conduct are definitely significant in the sphere of Internet hate regulation as social networks can remove material themselves and have a process by which this can be done. Moreover, through the amendment to the Audiovisual Directive to incorporate the issue of hate speech and through the German Act to tackle illegal content on social networks, which, although a national law, impacts other countries due to the borderless nature of the Internet, it is apparent that the legal regulation of online hate is moving up the European and some national agendas. A point to note here is that it was expected that Germany would be the first country to pass a regulatory law of this magnitude on illegal content online due to its traditionally restrictive position on hate speech.
The above developments are definitely a light at the end of the Internet hate tunnel, where issues of multiple jurisdictions as well as technological realities, such as mirror sites and more, have resulted in the task of regulating hate online being more than a daunting one. Had it not been for the internal process of regulation on social networks, which has been made semi-external given the role of the Commission, a neo-Nazi organisation's account which was blocked by Twitter would probably still be up and running and spreading hate, as would the account of a far-right member of the European Parliament who regularly tweeted homophobic statements.72 For some, this could even be a positive thing, depending on the importance they attach to free speech and public discussion. If one endorses international human rights law and documents such as the International Convention on the Elimination of All Forms of Racial Discrimination, the Framework Decision on Racism and Xenophobia and the Additional Protocol to the Cybercrime Convention, such expression is unacceptable. However, hate speech is a contested topic, with many commentators and even national legislators arguing in favour of free speech. The current
regulatory process on social networks appears to be a middle ground between the different positions held in relation to what we should do with hate speech, given that regulation on such networks, such as removing a post, is less severe than, for example, imprisonment. This position does not mean that actions to implement the national criminal law cannot be taken. Either way, although the process of regulation by social networks is effective in tackling online hate, regulation in itself is not sufficient. Other measures need to be taken for purposes of tackling online hate in the long term and for tackling online hate which does not meet criminal law thresholds. In relation to the last point, a central issue concerning hate found online, either on social networks or on other Internet platforms, is what happens to the majority of hate speech which is not deemed illegal by national law and/or by the Framework Decision on Racism and Xenophobia but which still hurts people, groups and societies. As noted by the United Nations Special Rapporteur on Freedom of Expression, there are three types of problematic expression: (a) that which is a criminal offence under international law; (b) that which is not a criminal offence but can result in restriction and civil suits; and (c) that which has no legal implications but still raises issues relating to respect and tolerance.73 Assuming that types (a) and (b) can be dealt with by the law as well as by social network regulation, type (c), albeit potentially being regulated by social networks, needs another solution. To this end, it is argued that social networks, as well as facilitating the regulation of online hate through the procedures described above, also constitute positive platforms through which counter-narratives to hateful speech (illegal or not) can be developed. This can be done through commentary on material which is hateful but has not been removed, through campaigns run by groups and pages, and more. As such, it is concluded that social networks provide a space in which haters can utter and disseminate their hate. At the same time, social networks also provide space for those who seek to respond to this hate and work on altering the rhetoric for purposes of establishing an equilibrium in terms of ideas and positions promoted online vis-à-vis minority groups (ethnic, sexual and more). Moreover, social networks, through their procedures for reviewing hateful material, as further enhanced by the Code of Conduct, are, in terms of infrastructure, ideal settings in which hate speech can be regulated.
Disclosure statement
No potential conflict of interest was reported by the author.
72 Anita Huslin, 'Twitter Blocks Offensive Accounts in Germany, U.K.; Deletes Tweets in France': <http://www.npr.org/blogs/the-two-way/2012/10/19/163243194/twitter-blocks-offensive/accounts-in-germany-u-k-deletes-tweets-in-france> [Accessed 1 May 2017].
73 UNESCO 'Countering Online Hate Speech' (2015 UNESCO Publishing) 16: <http://unesdoc.unesco.org/images/0023/002332/233231e.pdf>.