POST /v1/moderate

Moderation

Detect harmful, inappropriate, or policy-violating content in text. Useful for screening user-generated content and for moderating AI-generated output before it reaches users.

Request

Example Request
curl https://api.assisters.dev/v1/moderate \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "Sample text to moderate for harmful content"
  }'

Parameters

input (string | array, required)

Text to analyze for harmful content. Can be a single string or array of strings.

model (string, optional, default: text-moderation-stable)

Moderation model to use. text-moderation-stable or text-moderation-latest.
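
Because input accepts an array, a single request can moderate several strings at once, and results are returned in the same order as the inputs. A minimal sketch using the Python client shown later in this page (assuming the client mirrors the REST parameters; the variable names are illustrative):

batch_moderation.py
from assisters import Assisters

client = Assisters(api_key="YOUR_API_KEY")

# Moderate several strings in one request.
texts = [
    "First comment to check",
    "Second comment to check",
]
response = client.moderations.create(input=texts)

# results[i] corresponds to texts[i].
for text, result in zip(texts, response.results):
    print(f"flagged={result.flagged}: {text[:40]}")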

Response

Example Response
{
  "id": "modr-abc123",
  "model": "text-moderation-stable",
  "results": [
    {
      "flagged": false,
      "categories": {
        "hate": false,
        "hate/threatening": false,
        "harassment": false,
        "self-harm": false,
        "sexual": false,
        "sexual/minors": false,
        "violence": false,
        "violence/graphic": false
      },
      "category_scores": {
        "hate": 0.00012,
        "hate/threatening": 0.00001,
        "harassment": 0.00023,
        "self-harm": 0.00002,
        "sexual": 0.00015,
        "sexual/minors": 0.00001,
        "violence": 0.00008,
        "violence/graphic": 0.00003
      }
    }
  ]
}
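
Each entry in results pairs boolean categories flags, which reflect the model's default thresholds, with raw category_scores (higher means the model is more confident the category applies). If your policy needs stricter or looser enforcement than the defaults, you can ignore the booleans and threshold the scores yourself. A sketch with illustrative cutoff values, not recommendations:

custom_thresholds.py
# Illustrative per-category cutoffs; tune these to your own policy.
THRESHOLDS = {
    "hate": 0.10,
    "hate/threatening": 0.02,
    "sexual/minors": 0.01,
    "violence/graphic": 0.05,
}

def violates_policy(result) -> bool:
    """Return True if any watched category exceeds its cutoff."""
    return any(
        result.category_scores[category] >= cutoff
        for category, cutoff in THRESHOLDS.items()
    )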

Category Definitions

Category            Description
hate                Content expressing hate toward a group
hate/threatening    Hateful content that also includes violence or serious harm
harassment          Content targeting or insulting individuals
self-harm           Content promoting self-harm behaviors
sexual              Sexually explicit content
sexual/minors       Sexual content involving a minor
violence            Content promoting or depicting violence
violence/graphic    Graphic depictions of violence
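
Different categories often warrant different handling, for example hard-blocking the most severe ones while queuing others for human review. A sketch of one possible routing scheme (the category names come from the table above; the block/review split is an assumption about your application, not part of the API):

category_routing.py
# Hypothetical routing: which action each flagged category triggers.
BLOCK = {"hate/threatening", "sexual/minors", "violence/graphic"}
REVIEW = {"hate", "harassment", "self-harm", "sexual", "violence"}

def route(result) -> str:
    """Decide what to do with a moderation result."""
    triggered = {c for c, flagged in result.categories.items() if flagged}
    if triggered & BLOCK:
        return "block"   # reject the content outright
    if triggered & REVIEW:
        return "review"  # queue for human moderation
    return "allow"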

Code Examples

Python

moderation.py
from assisters import Assisters

client = Assisters(api_key="YOUR_API_KEY")

response = client.moderations.create(
    input="User submitted content to check"
)

# Each input string yields one entry in response.results.
result = response.results[0]
if result.flagged:
    print("Content flagged!")
    # Print every category that tripped, with its confidence score.
    for category, flagged in result.categories.items():
        if flagged:
            score = result.category_scores[category]
            print(f"  {category}: {score:.4f}")
else:
    print("Content is safe")
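
The same pattern works as a pre-publication gate, for example checking model output before it is shown to users. A minimal sketch; only moderations.create comes from this API, and the surrounding application logic is assumed:

output_gate.py
from assisters import Assisters

client = Assisters(api_key="YOUR_API_KEY")

def publish_if_safe(text: str) -> bool:
    """Moderate text and only publish it when nothing is flagged."""
    response = client.moderations.create(input=text)
    if response.results[0].flagged:
        # Hold flagged content back; log or route it for review.
        return False
    send_to_frontend(text)  # hypothetical application function
    return True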