POST /v1/chat/completions

Chat Completions

Create chat completions using state-of-the-art language models. This endpoint is compatible with the OpenAI API format.

Request

Example Request
curl https://api.assisters.dev/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "assisters-chat-v1",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  }'
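
Because the endpoint follows the OpenAI API format, existing OpenAI client libraries can usually be pointed at it by overriding the base URL. The sketch below uses the official openai Python package; treating the SDK's request shape as fully accepted, and the exact base_url value, are assumptions not confirmed by this reference.

openai_compat.py
from openai import OpenAI

# Reuse the OpenAI SDK by overriding the base URL (assumed compatible).
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.assisters.dev/v1",
)

response = client.chat.completions.create(
    model="assisters-chat-v1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    temperature=0.7,
    max_tokens=1024,
)

print(response.choices[0].message.content)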

Parameters

model (string, required)

ID of the model to use. See the models catalog for available options.

Available models:

  • assisters-chat-v1 - Fast and efficient
  • assisters-chat-v2 - High performance
  • assisters-vision-v1 - Multimodal support
messages (array, required)

Array of message objects representing the conversation history.

Message object:

  • role (string) - One of: system, user, assistant
  • content (string) - The message content
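
To continue a conversation, append the assistant's previous reply and the new user message to the array before the next request. A minimal sketch, using the Python SDK shown in the code examples below:

conversation.py
from assisters import Assisters

client = Assisters(api_key="YOUR_API_KEY")

# Conversation history: system prompt, prior turns, and the new user message.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "And what is its population?"},
]

response = client.chat.completions.create(
    model="assisters-chat-v1",
    messages=messages,
)

# Append the new reply so the history can be extended on the next turn.
messages.append({"role": "assistant", "content": response.choices[0].message.content})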
temperature (number, optional, default: 0.7)

Sampling temperature between 0 and 2. Higher values make output more random, lower values make it more focused and deterministic.

max_tokens (integer, optional, default: 1024)

Maximum number of tokens to generate in the completion.

top_p (number, optional, default: 1)

Nucleus sampling parameter. The model considers only the tokens comprising the top_p probability mass; for example, 0.1 restricts sampling to the tokens in the top 10% of probability mass.

stream (boolean, optional, default: false)

If set to true, partial message deltas would be sent as server-sent events. Streaming is not currently supported.

Response

Example Response
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1704067200,
  "model": "assisters-chat-v1",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 9,
    "total_tokens": 29
  }
}

Response Fields

  • id - Unique identifier for the completion
  • object - Object type, always "chat.completion"
  • created - Unix timestamp of creation time
  • model - Model used for completion
  • choices - Array of completion choices
  • usage - Token usage statistics
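
The usage block is useful for tracking spend per request. A minimal sketch, assuming only the JSON field names shown in the example response above:

usage_tracking.py
import requests

resp = requests.post(
    "https://api.assisters.dev/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "assisters-chat-v1",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
data = resp.json()

# Token accounting fields from the usage object.
usage = data["usage"]
print(f"prompt: {usage['prompt_tokens']}, "
      f"completion: {usage['completion_tokens']}, "
      f"total: {usage['total_tokens']}")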

Error Codes

Code  Description
400   Invalid request body or parameters
401   Invalid or missing API key
403   API key doesn't have access to this model
429   Rate limit exceeded or insufficient quota
500   Server error
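
A common pattern is to retry 429 responses with exponential backoff and fail fast on the other client errors. A minimal sketch using the requests library; the structure of the error response body is not specified in this reference, so only the status code is inspected:

retry.py
import time
import requests

def create_completion(payload, api_key, max_retries=3):
    url = "https://api.assisters.dev/v1/chat/completions"
    headers = {"Authorization": f"Bearer {api_key}"}
    for attempt in range(max_retries + 1):
        resp = requests.post(url, headers=headers, json=payload)
        if resp.status_code == 200:
            return resp.json()
        # 429: rate limit or quota exhausted -- back off and retry.
        if resp.status_code == 429 and attempt < max_retries:
            time.sleep(2 ** attempt)
            continue
        # 400 / 401 / 403 / 500: raise immediately; retrying won't fix the request or key.
        resp.raise_for_status()
    raise RuntimeError("Exhausted retries")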

Code Examples

Python

example.py
from assisters import Assisters

client = Assisters(api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="assisters-chat-v1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing"}
    ],
    temperature=0.7,
    max_tokens=1024
)

print(response.choices[0].message.content)

Node.js

example.js
import Assisters from 'assisters';

const client = new Assisters({
  apiKey: process.env.ASSISTERS_API_KEY
});

const response = await client.chat.completions.create({
  model: 'assisters-chat-v1',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain quantum computing' }
  ],
  temperature: 0.7,
  max_tokens: 1024
});

console.log(response.choices[0].message.content);