POST /v1/chat/completions

Chat Completions
Create chat completions using state-of-the-art language models. This endpoint is compatible with the OpenAI API format.
Request
Example Request
curl https://api.assisters.dev/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "assisters-chat-v1",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  }'

Parameters
model (string, required)
ID of the model to use. See the models catalog for available options.

Available models:
- assisters-chat-v1: fast and efficient
- assisters-chat-v2: high performance
- assisters-vision-v1: multimodal support
messages (array, required)
Array of message objects representing the conversation history.

Message object:
- role (string): one of system, user, assistant
- content (string): the message content
temperature (number, optional, default: 0.7)
Sampling temperature between 0 and 2. Higher values make output more random; lower values make it more focused and deterministic.

max_tokens (integer, optional, default: 1024)
Maximum number of tokens to generate in the completion.

top_p (number, optional, default: 1)
Nucleus sampling parameter. Only tokens within the top_p cumulative probability mass are considered.

stream (boolean, optional, default: false)
If set to true, partial message deltas will be sent as server-sent events. Currently not supported.
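Putting the parameters above together, the request body can be built and validated before sending. A minimal sketch in Python; build_chat_request is a hypothetical helper (not part of any SDK) that enforces the constraints documented above:

```python
import json

# Hypothetical helper: builds the JSON body for POST /v1/chat/completions.
# Parameter names, defaults, and constraints follow the reference above.
def build_chat_request(model, messages, temperature=0.7, max_tokens=1024, top_p=1):
    allowed_roles = {"system", "user", "assistant"}
    for msg in messages:
        if msg.get("role") not in allowed_roles:
            raise ValueError(f"invalid role: {msg.get('role')}")
        if not isinstance(msg.get("content"), str):
            raise ValueError("content must be a string")
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "top_p": top_p,
    }

payload = build_chat_request(
    "assisters-chat-v1",
    [{"role": "user", "content": "Hello!"}],
)
# Serialize and send as the request body with Content-Type: application/json.
body = json.dumps(payload)
```

Sending the resulting body with an Authorization: Bearer header reproduces the curl example above.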
Response
Example Response
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1704067200,
  "model": "assisters-chat-v1",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 9,
    "total_tokens": 29
  }
}

Response Fields
- id: unique identifier for the completion
- object: object type, always "chat.completion"
- created: Unix timestamp of creation time
- model: model used for the completion
- choices: array of completion choices
- usage: token usage statistics
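A response with this shape can be consumed as plain JSON. A minimal sketch parsing the example response above; the field paths match the Response Fields list:

```python
import json

# The example response from the reference above, as a JSON string.
response_json = """{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1704067200,
  "model": "assisters-chat-v1",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello! How can I help you today?"},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 20, "completion_tokens": 9, "total_tokens": 29}
}"""

data = json.loads(response_json)

# The assistant's reply lives at choices[0].message.content.
reply = data["choices"][0]["message"]["content"]

# usage reports token counts, useful for quota and cost tracking.
total = data["usage"]["total_tokens"]
```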
Error Codes
| Code | Description |
|---|---|
| 400 | Invalid request body or parameters |
| 401 | Invalid or missing API key |
| 403 | API key doesn't have access to this model |
| 429 | Rate limit exceeded or insufficient quota |
| 500 | Server error |
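One common way to act on these codes is to retry only the transient ones (429 and 500) with exponential backoff, while failing fast on client errors (400, 401, 403) that a retry cannot fix. A hypothetical sketch, not part of the official SDK:

```python
import time

RETRYABLE = {429, 500}   # transient: rate limit or server error
FATAL = {400, 401, 403}  # fix the request or key; retrying won't help

def call_with_retries(send, max_attempts=3, base_delay=1.0):
    """send() returns (status_code, body). Retries transient errors with
    exponential backoff; raises immediately on fatal client errors."""
    for attempt in range(max_attempts):
        status, body = send()
        if status == 200:
            return body
        if status in FATAL:
            raise RuntimeError(f"request failed with {status}")
        if status in RETRYABLE and attempt < max_attempts - 1:
            time.sleep(base_delay * 2 ** attempt)
    raise RuntimeError("exhausted retries")

# Demonstration with a fake sender that returns 429 once, then succeeds.
attempts = []
def fake_send():
    attempts.append(1)
    return (429, None) if len(attempts) == 1 else (200, "ok")

result = call_with_retries(fake_send, base_delay=0)
```

In a real client, send() would perform the HTTP POST from the Request section above.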
Code Examples
Python
example.py
from assisters import Assisters

client = Assisters(api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="assisters-chat-v1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing"}
    ],
    temperature=0.7,
    max_tokens=1024
)

print(response.choices[0].message.content)

Node.js
example.js
import Assisters from 'assisters';

const client = new Assisters({
  apiKey: process.env.ASSISTERS_API_KEY
});

const response = await client.chat.completions.create({
  model: 'assisters-chat-v1',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain quantum computing' }
  ],
  temperature: 0.7,
  max_tokens: 1024
});

console.log(response.choices[0].message.content);