# Chat Completions

Generate AI text responses using the chat completions API.

## Generate Text with AI Chat
The chat completions API allows you to have dynamic conversations with Unreal's advanced AI models. Use this endpoint to generate responses for chatbots, content creation, question answering, and more.
### Quick Example

```bash
curl -X POST https://openai.ideomind.org/v1/chat/completions \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "r1-1776",
    "messages": [
      { "role": "user", "content": "Explain quantum computing in simple terms" }
    ]
  }'
```
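The same call from Python, shown here as a minimal sketch using the third-party `requests` library (an assumption; any HTTP client works, since the API only needs the URL, headers, and JSON body shown above):

```python
import requests

API_KEY = "<YOUR_API_KEY>"  # substitute your real key

response = requests.post(
    "https://openai.ideomind.org/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "r1-1776",
        "messages": [
            {"role": "user", "content": "Explain quantum computing in simple terms"}
        ],
    },
)
response.raise_for_status()
data = response.json()  # parsed chat.completion object, in the format shown below
```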
### Response Format

```json
{
  "id": "chatcmpl-123abc456def789",
  "object": "chat.completion",
  "created": 1694268762,
  "model": "r1-1776",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Quantum computing is like having a very special deck of cards. Normal computers use regular cards that can only be face up or face down (0 or 1). But quantum computers use magical cards that can be face up, face down, or somehow both at the same time until you look at them! This ability to be in multiple states at once lets quantum computers solve certain problems much faster than regular computers."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 14,
    "completion_tokens": 185,
    "total_tokens": 199
  }
}
```
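The generated text lives in `choices[0].message.content`, and token accounting lives in `usage`. A short sketch, continuing from the hypothetical `data` dictionary in the Python example above:

```python
# Pull the assistant's reply and token usage out of the chat.completion object.
reply = data["choices"][0]["message"]["content"]
finish_reason = data["choices"][0]["finish_reason"]  # "stop" in the example above
total_tokens = data["usage"]["total_tokens"]

print(reply)
print(f"finish_reason={finish_reason}, total_tokens={total_tokens}")
```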
### Request Parameters

| Parameter | Type | Description |
|---|---|---|
| `model` | string | Required. ID of the model to use (e.g., `r1-1776`) |
| `messages` | array | Required. Array of message objects with `role` and `content` |
| `temperature` | number | Controls randomness (0-2, default: 1) |
| `max_tokens` | integer | Maximum number of tokens to generate (default varies by model) |
| `stream` | boolean | Whether to stream responses (default: `false`) |
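As an illustration, the optional parameters can be combined in a single request body; the prompt and values below are made up, and any field you omit falls back to the defaults listed in the table:

```python
# Example request body combining the optional tuning parameters above.
payload = {
    "model": "r1-1776",
    "messages": [{"role": "user", "content": "Write a haiku about the ocean"}],
    "temperature": 0.7,  # lower = more deterministic, higher = more varied
    "max_tokens": 64,    # cap on the length of the generated reply
    "stream": False,     # set to True to receive server-sent events instead
}
```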
### Advanced Usage

#### Multi-Turn Conversations
Include the complete conversation history to maintain context:
```json
{
  "model": "mixtral-8x22b-instruct",
  "messages": [
    { "role": "user", "content": "What's the weather like today?" },
    { "role": "assistant", "content": "I don't have access to real-time weather data." },
    { "role": "user", "content": "What information can you provide?" }
  ]
}
```
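In client code this amounts to keeping one growing list of messages and appending both the assistant's reply and each new user turn before the next request. A minimal sketch, assuming the `requests` library and the endpoint shown earlier:

```python
import requests

API_URL = "https://openai.ideomind.org/v1/chat/completions"
HEADERS = {"Authorization": "Bearer <YOUR_API_KEY>"}

def ask(messages):
    """Send the full conversation history and return the assistant's message."""
    resp = requests.post(
        API_URL,
        headers=HEADERS,
        json={"model": "mixtral-8x22b-instruct", "messages": messages},
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]

# Resend the complete history on every call so the model keeps context.
history = [{"role": "user", "content": "What's the weather like today?"}]
history.append(ask(history))  # keep the assistant turn in the history
history.append({"role": "user", "content": "What information can you provide?"})
print(ask(history)["content"])
```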
#### Streaming Responses

Enable streaming for real-time generation:

```bash
curl -X POST https://openai.ideomind.org/v1/chat/completions \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mixtral-8x22b-instruct",
    "messages": [{ "role": "user", "content": "Hello" }],
    "stream": true
  }'
```

Streamed responses are delivered as server-sent events:
```
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268762,"model":"mixtral-8x22b-instruct","choices":[{"index":0,"delta":{"role":"assistant","content":"Hello"},"finish_reason":null}]}
```
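A client can consume these events by reading the response line by line, stripping the `data: ` prefix, and concatenating the `delta.content` fragments. A minimal sketch with the `requests` library; the `data: [DONE]` terminator is an assumption borrowed from the common OpenAI-style SSE convention and may differ here:

```python
import json
import requests

resp = requests.post(
    "https://openai.ideomind.org/v1/chat/completions",
    headers={"Authorization": "Bearer <YOUR_API_KEY>"},
    json={
        "model": "mixtral-8x22b-instruct",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
    },
    stream=True,  # let requests yield the body incrementally
)
resp.raise_for_status()

for line in resp.iter_lines():
    if not line:
        continue  # SSE events are separated by blank lines
    payload = line.decode("utf-8").removeprefix("data: ")
    if payload == "[DONE]":  # assumed end-of-stream sentinel (OpenAI convention)
        break
    delta = json.loads(payload)["choices"][0]["delta"]
    print(delta.get("content", ""), end="", flush=True)
print()
```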