Platform Playbook: Handling Non-Consensual Intimate Imagery (NCII)

If you run a website, forum, or chat community, you will eventually receive reports about highly sensitive content. How you respond matters. This post outlines a practical playbook for handling NCII reports with speed, empathy, and operational clarity.
NCII is high-severity abuse. Treat it like a security incident: urgent, confidential, and handled by trained staff.
Define severity and routing
At minimum, your reporting system should route NCII into an “urgent” queue and notify an on-call responder.
- Urgent: explicit content shared without consent, extortion threats, content involving minors (escalate immediately; material involving minors is legally distinct and carries mandatory reporting obligations in many jurisdictions).
- High: harassment campaigns, repeated re-uploads, doxxing combined with intimate imagery.
- Standard: adult content that violates policy but is not clearly non-consensual.
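The tiers above can be expressed as a small routing function. This is a minimal sketch; the category names and queue model are hypothetical placeholders to adapt to your own report taxonomy.

```python
from enum import Enum

class Severity(Enum):
    URGENT = "urgent"
    HIGH = "high"
    STANDARD = "standard"

# Hypothetical category names -- replace with your own report taxonomy.
URGENT_CATEGORIES = {"ncii_explicit", "extortion_threat", "minor_involved"}
HIGH_CATEGORIES = {"harassment_campaign", "repeat_reupload", "doxxing_with_imagery"}

def route_report(category: str) -> Severity:
    """Map a report category to the queue that should receive it."""
    if category in URGENT_CATEGORIES:
        return Severity.URGENT
    if category in HIGH_CATEGORIES:
        return Severity.HIGH
    return Severity.STANDARD
```

In practice the urgent path should also page an on-call responder, not just enqueue.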
Intake: what to collect (without burdening the reporter)
- Direct URL(s) and mirrors
- Username/account ID(s)
- Approximate time of posting
- Whether the reporter is the subject or a representative
Avoid requesting unnecessary personal details. If identity verification is required, provide safe alternatives and explain why.
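The intake fields above map naturally onto a small record type. A minimal sketch; field names are illustrative, and note that the posting time is deliberately free-text so intake never stalls on precision.

```python
from dataclasses import dataclass, field

@dataclass
class NCIIReport:
    """Intake record for an NCII report -- collect only what enforcement needs."""
    urls: list[str]                  # direct URL(s) and any known mirrors
    account_ids: list[str]           # username/account ID(s) of uploaders
    posted_at_approx: str            # approximate time; free-text is acceptable
    reporter_is_subject: bool        # subject of the imagery, or a representative
    notes: str = field(default="")   # optional context; never require extra PII
```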
Response timeline (a pragmatic baseline)
- Acknowledge quickly: within hours, not days.
- Remove fast: temporary removal while investigating is often appropriate for NCII.
- Prevent re-uploads: hashes, fingerprinting, and strict repeat-offender controls.
Do not send the reporter detailed “proof” screenshots back. That can re-traumatize and increase exposure. Confirm actions taken without redistributing the content.
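Re-upload prevention can start with a simple hash blocklist. The sketch below uses exact SHA-256 matching for clarity; it only catches byte-identical copies, so production systems typically add perceptual hashing (e.g., PDQ or PhotoDNA-style matching) to catch resized or re-encoded variants.

```python
import hashlib

# Blocklist of fingerprints for content already removed as NCII.
known_hashes: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Exact cryptographic fingerprint of the uploaded bytes."""
    return hashlib.sha256(content).hexdigest()

def register_removed(content: bytes) -> None:
    """Record removed content so identical re-uploads are caught at upload time."""
    known_hashes.add(fingerprint(content))

def is_known_reupload(content: bytes) -> bool:
    return fingerprint(content) in known_hashes
```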
Evidence handling and privacy
- Restrict access to the content to the smallest trained group.
- Log every access (who, when, why).
- Minimize retention: keep only what is needed for enforcement and legal obligations.
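Access logging can be as simple as an append-only record of who touched case evidence, when, and why. A minimal sketch; the field names and file-based store are assumptions, and a real system would write to tamper-evident storage.

```python
import json
import time

def log_evidence_access(log_path: str, actor: str, case_id: str, reason: str) -> None:
    """Append one access record (who, when, why) to an append-only log file."""
    entry = {
        "ts": time.time(),   # when
        "actor": actor,      # who
        "case": case_id,     # which case's evidence
        "reason": reason,    # why access was needed
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

Reviewing this log regularly is itself a control: access without a documented reason should trigger follow-up.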
Enforcement: consistency beats severity theater
For confirmed NCII:
- Remove the content and associated mirrors
- Suspend or ban accounts involved in upload/distribution
- Block known re-upload patterns (new accounts, same fingerprints)
- Preserve limited evidence for legal requests (jurisdiction-specific)
Communication templates (keep it simple)
- Acknowledge receipt
- Confirm the priority level
- Provide an ETA range
- Explain what you need (if anything)
- Confirm removal and prevention steps (once done)
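An acknowledgment template can be kept as a single parameterized string so every responder sends the same message. A minimal sketch; the wording and placeholders are illustrative, and it bakes in the rule above: confirm actions without redistributing the content.

```python
# Hypothetical acknowledgment template; adjust wording to your platform's voice.
ACK_TEMPLATE = (
    "We received your report (case {case_id}) and classified it as {priority}. "
    "We expect to complete review within {eta}. "
    "We will confirm the actions taken, and we will not send you copies "
    "or screenshots of the reported content."
)

def render_ack(case_id: str, priority: str, eta: str) -> str:
    return ACK_TEMPLATE.format(case_id=case_id, priority=priority, eta=eta)
```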
Closing thoughts
You cannot “moderate later” on high-harm content. A clear playbook, trained responders, and privacy-first evidence handling reduce harm and improve trust.
