
Sensitive content needs to be handled better (NIP-36 & elsewhere) #315

@s3x-jay

I've been working in the porn industry for roughly 15 years and I'm a "Community Ambassador" at XBiz.net (the leading B2B discussion forum for the adult industry). Based on that experience, I'm worried about how sensitive content is handled in Nostr. NIP-36 as spec'd won't be sufficient, and it doesn't cover all the places someone might encounter sensitive content. This matters to the adult industry because we do not want our content seen or accessed by people who shouldn't, or don't want to, see it.

#1 NIP-36 needs a defined vocabulary

NIP-36 is incredibly important, but it can't achieve what it needs to achieve because there's no defined vocabulary for "reason". Not that "reason" should be completely locked down to a strict vocabulary, but 99% of the use cases are predictable, and those cases would benefit greatly from a defined vocabulary that can be translated, etc.

Without going into too much detail, there have been numerous attempts to implement classification systems over the years, and all have failed. Many were culturally biased (e.g. "unsuitable for persons under 18 years of age" means something different in Sweden than in Saudi Arabia). Other classification systems were simply too complicated for the technology of the time.

To me, the classification system that had the most promise was ICRA, but it also failed. You can see its vocabulary here…
https://web.archive.org/web/20080622002259/http://www.icra.org/vocabulary/

My suggestion would be to use ICRA as a starting point and adjust it to fit the needs of Nostr.

For end users a Nostr client could just let them choose from the primary categories (which I've tweaked a bit):

(Casual) Nudity
Erotica ("soft core")
Sex ("hard core")
Violence
Language
Other potentially harmful topics

A Nostr client could show sub-categories once the user has chosen a main category - but that would be optional. (Keeping it simple is one of the lessons learned from the failure of ICRA.)

Automated systems would be encouraged to use a more detailed vocabulary (see below). The detailed vocabulary could go into a Nostr client's "preferences" section so users could specify whether certain types of content should 1) always be blocked, 2) never be blocked, or 3) the client should confirm with the user each time on a case-by-case basis.

With the vocabulary below, I'm envisioning that an automated system would emit a comma-delimited list of one or more of the vocabulary terms.

So, for example, if I were posting "Someone in Chicago just faved this photo…" and I knew what was in the photo, I would use something like:

Sex - Penetrative sex acts, Context - Porn - Gay male

If I were posting something from one of my sponsors and didn't know the exact nature of the photo, I might only use:

Context - Porn - Gay male

Or if a user were uploading a picture that hadn't been vetted, but it was on a site that's typically sexually explicit (e.g. a dating/hookup site or app), I might simply use:

Context - Personal sexual expression/exploration

If I were posting something regarding a sexual health issue I might use:

Nudity - Genitals/anus, Context - Medical
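Concretely, these reason strings would travel in the event's NIP-36 `content-warning` tag. Here's a minimal sketch in Python, assuming the comma-delimited convention proposed above (the tag name `content-warning` comes from NIP-36; the vocabulary string and helper are illustrative, not part of the current spec):

```python
# A kind-1 note carrying the proposed vocabulary in its NIP-36
# "content-warning" tag. The comma-delimited reason string follows
# this proposal's convention, not something the current spec mandates.
event = {
    "kind": 1,
    "content": "Someone in Chicago just faved this photo…",
    "tags": [
        ["content-warning",
         "Sex - Penetrative sex acts, Context - Porn - Gay male"],
    ],
}

def warning_labels(event):
    """Split a content-warning reason into individual vocabulary terms."""
    for tag in event["tags"]:
        if tag[0] == "content-warning" and len(tag) > 1:
            return [term.strip() for term in tag[1].split(",")]
    return []

print(warning_labels(event))
# ['Sex - Penetrative sex acts', 'Context - Porn - Gay male']
```

A client would run something like `warning_labels` over each incoming note and match the resulting terms against the user's preferences.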

Here's my suggested vocabulary which builds on the concepts in the ICRA vocabulary…

Context - Fine Art
Context - Educational
Context - Medical
Context - News
Context - Sports
Context - Religion
Context - Fantasy/fiction
Context - Fantasy/fiction - Video game

Context - Porn
Context - Porn - Heterosexual
Context - Porn - Gay male
Context - Porn - Lesbian
Context - Porn - Bisexual
Context - Porn - Transsexual
Context - Porn - Gender fluid / non-binary

Context - Personal sexual expression/exploration

Nudity
Nudity - Breasts
Nudity - Buttocks
Nudity - Genitals/anus
Nudity - Other

Erotica
Erotica - Speaking/text only (no visuals)
Erotica - Physical Product
Erotica - Attire
Erotica - Kissing
Erotica - Softcore fetish
Erotica - Erection (with no stimulation)
Erotica - Other

Sex
Sex - Speaking/text only (no visuals)
Sex - Obscured/implied sex acts
Sex - Masturbation
Sex - Non-penetrative sex acts
Sex - Penetrative sex acts
Sex - Hardcore fetish
Sex - Other

Violence
Violence - Assault/rape
Violence - Injury
Violence - Injury - human beings
Violence - Injury - animals
Violence - Injury - fantasy/animated characters
Violence - Blood and/or dismemberment
Violence - Blood and/or dismemberment - human beings
Violence - Blood and/or dismemberment - animals
Violence - Blood and/or dismemberment - fantasy/animated characters
Violence - Torture or killing
Violence - Torture or killing - human beings
Violence - Torture or killing - animals
Violence - Torture or killing - fantasy/animated characters
Violence - Other

Language
Language - Passing use of common expletives
Language - Substantial profanity/swearing
Language - Abusive
Language - Other

Potentially harmful
Potentially harmful - Smoking
Potentially harmful - Alcohol
Potentially harmful - Legal drug use
Potentially harmful - Illegal drug use
Potentially harmful - Weapons
Potentially harmful - Gambling
Potentially harmful - Encourages life-threatening activities
Potentially harmful - Fear/intimidation/horror/terror
Potentially harmful - Encourages discrimination of protected minority
Potentially harmful - Other

Choosing a main category means the content may fall under one or more of its subcategories. Choosing an "Other" option means the content is in that category but not in any of the specified subcategories.
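The preference model from #1 (always block / never block / confirm case by case), combined with this category-implies-subcategory rule, can be sketched as prefix matching. Everything here, including the policy names and the sample preferences, is an illustrative assumption:

```python
BLOCK, ALLOW, ASK = "block", "allow", "ask"

# Hypothetical per-user preferences, keyed by vocabulary term. A
# preference on a main category also covers all of its subcategories.
preferences = {
    "Sex": BLOCK,                          # always blocked
    "Nudity": ALLOW,                       # never blocked
    "Violence - Torture or killing": ASK,  # confirm case by case
}

SEVERITY = {ALLOW: 0, ASK: 1, BLOCK: 2}

def decide(labels, prefs, unmatched=ASK):
    """Return the most restrictive action among a note's labels."""
    action = ALLOW  # unlabeled content passes through
    for label in labels:
        # A main-category preference matches itself and any subcategory.
        matches = [a for term, a in prefs.items()
                   if label == term or label.startswith(term + " - ")]
        label_action = max(matches, key=SEVERITY.get) if matches else unmatched
        if SEVERITY[label_action] > SEVERITY[action]:
            action = label_action
    return action
```

So a note labeled "Sex - Masturbation" would be blocked by the "Sex" preference without the user having to configure every subcategory, and a labeled note matching no preference falls through to the `unmatched` default.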

I don't mean the list above to be definitive. It should be shaped by input from the people who would use and/or implement it; it's more of a starting place for discussion. For example, how do you classify "Shower Girl"? Calling that photo "heterosexual porn" assumes a male viewer. Is it "porn" or "erotica"? So there needs to be discussion to get the categories right (without making them overly complicated). It's also quite important that the categories be allowed to evolve over time.

#2 Profiles need sensitive content classification

It would be helpful if users could mark their profiles with the classification system described above. Twitter, for example, prohibits sexual content in avatars and header pics so people don't come across offensive content accidentally. If people could classify their own profiles, Nostr clients could blur those users' avatars and header/background photos by default (for viewers whose preferences call for it).

Nostr clients could also repeat some or all of a profile's warnings by default on notes posted by that user (while allowing the warning to be tweaked before, or even after, posting). This would save the user time and improve compliance.
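As a sketch of that default behavior, a client could copy the author's self-declared profile labels onto each new note unless the note already carries its own warning. The profile field name `content_labels` is a hypothetical assumption here; no NIP defines such a field today:

```python
def apply_profile_warning(note, profile):
    """Default a note's content-warning tag from the author's profile.

    `content_labels` is a hypothetical profile field holding the
    author's self-classification. Existing warnings are left untouched
    so the user can still tweak them per note.
    """
    labels = profile.get("content_labels", [])
    already = any(t and t[0] == "content-warning" for t in note["tags"])
    if labels and not already:
        note["tags"].append(["content-warning", ", ".join(labels)])
    return note
```

The client would run this at composition time, pre-filling the warning in the posting UI rather than silently attaching it.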

#3 Community based content classification

There will be a lot of non-compliance with content warnings, so it would help if people could mark content in their following/global feeds as sensitive when there's no warning or they feel it's misclassified. Their classification could then be taken into consideration by others (most likely their followers).

The same approach could be taken so users could mark entire accounts as sensitive. So, for example, if the NRA joined Nostr and didn't mark their account "Potentially harmful - Weapons", I could mark it, and that classification could be taken into consideration when I or one of my followers encountered their content.
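One possible shape for such a community classification is a small labeling event published by the reader, which clients would weight by how much they trust its author. This sketch borrows the shape of NIP-32 labeling (kind 1985, with "L" namespace and "l" label tags) purely as an assumption; nothing here is a settled spec for this proposal:

```python
def label_target(my_pubkey, label, target_pubkey=None, target_event=None):
    """Build a hypothetical third-party classification event.

    Kind 1985 and the "L"/"l" tag names are borrowed from NIP-32-style
    labeling as an assumption; "content-warning" as a label namespace
    is likewise illustrative.
    """
    tags = [
        ["L", "content-warning"],          # assumed label namespace
        ["l", label, "content-warning"],   # the vocabulary term applied
    ]
    if target_pubkey:
        tags.append(["p", target_pubkey])  # label a whole account
    if target_event:
        tags.append(["e", target_event])   # or a single note
    return {"kind": 1985, "pubkey": my_pubkey, "tags": tags, "content": ""}
```

A client could then merge labels from accounts the user follows into the same `decide`-style preference matching it uses for self-declared warnings.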

I look forward to hearing others' thoughts on this…
