Gatekeeping and Content Moderation: Platform Approach (Week 10 - Nov 18)

Key Concepts

1. Gatekeeping

●​ Definition: The process through which information is filtered for dissemination, whether
for publication, broadcasting, the Internet, or other modes of communication.
●​ Barriers to Market Entry: Traditionally, freedom of the press was in practice limited to those who owned the means of production (e.g., newspapers, television). Today, access to information is more open through digital technology, but control still exists.

2. Internet Governance by Platforms

●​ Private information intermediaries, like social media platforms, enact governance through platform design choices and user policies (DeNardis & Hackl, 2015, p. 762).

3. Content Moderation

●​ Definition: The organized practice of screening user-generated content (UGC) to determine appropriateness for a site, locality, or jurisdiction (Roberts, 2017, p. 1).
●​ Need for Moderation: Platforms must enforce rules and comply with applicable laws to
avoid liability from inappropriate content (Roberts, 2017, p. 1).
●​ Variation Across Platforms: Moderation styles differ by platform based on brand
reputation, tolerance for risk, and desired user engagement (Roberts, 2017, p. 1).

Content Moderation Techniques

1. Badging

●​ Definition: The use of visuals or text to signal the credibility of content or profiles, or to flag problematic content.

2. Removing Users

●​ Definition: Deleting or banning user accounts that violate platform policies or government laws.

3. Directing Users

●​ Definition: Providing supplementary information and sources to engage users with a topic.
○​ Example: YouTube attaches Wikipedia links under videos for additional context,
but concerns exist about Wikipedia's open-edit nature, which could spread false
information.
4. Removing Content

●​ Definition: Deleting or banning content that violates platform policies or government laws.
○​ TikTok Content Removal: Child sexual exploitation, terrorist offenses, hate speech, violent crime, privacy breaches, non-consensual image sharing, and information-related offenses can all prompt content removal. (A short sketch after this section shows one way the four techniques could be represented as discrete moderation actions.)
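The four techniques above can be read as a small taxonomy of moderation actions. Below is a minimal, hypothetical Python sketch; the names ModerationAction and ModerationDecision are invented for illustration and do not correspond to any platform's real moderation API.

```python
# Hypothetical sketch only: invented names, not any platform's actual API.
from dataclasses import dataclass
from enum import Enum, auto


class ModerationAction(Enum):
    BADGE = auto()           # signal credibility of content/profile, or flag problems
    REMOVE_USER = auto()     # delete or ban an account that violates policy or law
    DIRECT_USER = auto()     # attach supplementary links/sources for context
    REMOVE_CONTENT = auto()  # delete or block the violating content itself


@dataclass
class ModerationDecision:
    content_id: str
    action: ModerationAction
    reason: str  # e.g. "hate speech", "privacy breach", "phishing"


# Example: a post reported for hate speech is removed.
decision = ModerationDecision(content_id="post-123",
                              action=ModerationAction.REMOVE_CONTENT,
                              reason="hate speech")
print(decision.action.name, decision.reason)
```

In practice, each platform attaches its own policy rules, reviewer workflows, and appeal paths to such actions, which is where the platform-specific differences below come in.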

Platform-Specific Approaches

1. Facebook

●​ News Story Updates: Allows publishers to link multiple updates on a single news story,
and users can opt to receive notifications for related updates.

2. TikTok

●​ User-Reported Offenses: Users can report problematic content like phishing, scam
videos, sexual content, extremist videos, etc.
●​ Shift Towards AI: TikTok is increasing reliance on AI for content moderation, claiming
80% of guideline-violating content is now removed by automated systems.
●​ Legal Pressures: Regulators in Malaysia require social media operators to apply for licenses by January to combat cyber offenses.

3. Google

●​ Content Moderation Rules: Rules on what stays or goes are determined by the firm
needing the moderation service. Different platforms have varying risk tolerance and
rules.

4. YouTube

●​ Volume of Content: Processes 300 hours of video uploads per minute, requiring
large-scale human moderation alongside algorithmic support.

Problems with Content Moderation

1. Human Moderation Issues

●​ Job Conditions:
○​ Content moderators are often employed by third-party firms for low wages,
lacking full-time employment benefits.
○​ They work in stressful conditions, sometimes viewing graphic, harmful, or
traumatizing content.
○​ Companies may offer mental health support, but outsourced moderators (e.g.,
those outside Silicon Valley) may not have access to it.
○​ Burnout: Moderators face burnout from overexposure to graphic content, or become desensitized, both of which reduce their effectiveness.
○​ Workload: Moderators must maintain speed and accuracy, with inactivity
penalties (like computer shutdowns) imposed on TikTok's moderation team.

2. Algorithmic Issues

●​ Definition of Algorithms: A set of instructions, typically written by humans, used to solve tasks through computation.
●​ Three Key Components (illustrated in the toy sketch after this list):
1.​ The instructions (the algorithm itself)
2.​ The data to be analyzed
3.​ The use of the results
●​ Challenges: Algorithms can reflect human biases in coding and lack the nuanced
understanding that human moderators provide.
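As a concrete illustration of the three components, here is a toy Python sketch; the BANNED_TERMS list and the trivial keyword rule are invented stand-ins for a real moderation algorithm.

```python
# Toy sketch only: a keyword rule standing in for a real moderation algorithm.

# (1) The instructions: a simple rule written by a human.
BANNED_TERMS = {"scam", "phishing"}  # invented term list for illustration

def violates_policy(post: str) -> bool:
    """Return True if the post contains any banned term."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BANNED_TERMS)

# (2) The data to be analyzed: a batch of user-generated posts.
posts = [
    "Win a free prize, just click this phishing link!",
    "Here is my honest product review.",
]

# (3) The use of the results: flag violating posts for removal or human review.
flagged = [p for p in posts if violates_policy(p)]
print(flagged)  # -> ['Win a free prize, just click this phishing link!']
```

Even this tiny example shows where bias and limited nuance creep in: whoever writes the term list decides what counts as a violation, and a keyword rule cannot distinguish a violating post from one that merely quotes or warns about the same terms.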

Human Moderation vs. Algorithmic Moderation


Algorithm
●​ Definition: A set of automated instructions
●​ Characteristics: Analyzes data and produces results
●​ Pros: Quick, efficient, less expensive
●​ Cons: Prone to bias, limited nuance

Human
●​ Definition: Human review and assessment
●​ Characteristics: Contextual review based on platform policies
●​ Pros: Nuanced understanding of complex content
●​ Cons: Risk of burnout, exposure to graphic content

Insights from Key Scholars

1. Sarah Roberts

●​ Decision-Making on Content: Moderation decisions are dependent on social, cultural, and firm-specific norms.
●​ Transparency: The public should be aware of who produces and moderates the content
they engage with. Moderation decisions have political and economic implications, not
just technical ones.
●​ Mental Health & Burnout: Roberts emphasizes that burnout is an inevitable outcome of
moderation work. Without mental health support, moderators may become desensitized
or unable to handle the role.
●​ Public Sphere Encroachment: The shrinking public sphere means that platforms like
Facebook and YouTube play a larger role in shaping public discourse, but they do not
serve as substitutes for the public sector.

2. Gillespie (2020)

●​ Content Moderation Apparatus: Moderation is a "single, functioning apparatus" of small teams, large numbers of flagged posts, and reliance on AI for volume management (a minimal pipeline sketch follows this list).
●​ AI as "Simple Calculus": AI solves large-scale moderation challenges but lacks human
nuance.
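A minimal sketch of that apparatus, under assumed confidence thresholds and with an invented violation_score function standing in for a real classifier: the automated step absorbs the volume, and only uncertain cases reach the small human team.

```python
# Illustrative sketch only: invented thresholds and a stand-in scoring function.

def violation_score(post: str) -> float:
    """Stand-in for an ML classifier's confidence that a post violates policy."""
    text = post.lower()
    if "extremist" in text:
        return 0.95
    if "joke" in text:
        return 0.50
    return 0.10

AUTO_REMOVE = 0.90  # assumed threshold: high confidence -> remove automatically
AUTO_ALLOW = 0.20   # assumed threshold: low confidence -> allow automatically

human_queue = []  # borderline cases escalated to the (small) human team
for post in ["Join our extremist group now",
             "Nice cooking tutorial",
             "Borderline joke that needs context"]:
    score = violation_score(post)
    if score >= AUTO_REMOVE:
        print("removed by AI:", post)
    elif score <= AUTO_ALLOW:
        print("allowed:", post)
    else:
        human_queue.append(post)

print("for human review:", human_queue)
```

The thresholds are the "simple calculus": tightening them pushes more errors onto the algorithm, while loosening them shifts workload, and exposure to harmful content, onto human moderators.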

Concluding Thoughts

●​ Content moderation is a complex, labor-intensive process involving human and algorithmic efforts.
●​ The mental health of human moderators is a growing concern, as the work is often
outsourced to low-wage contractors.
●​ Platforms like TikTok, Facebook, YouTube, and Google have different moderation
practices, reflecting their brand, tolerance for risk, and user engagement goals.
●​ The use of AI in moderation is increasing, but AI has limitations in understanding
context, culture, and nuance.
●​ Transparency, accountability, and ethical considerations remain at the forefront of the
content moderation debate.
