# Advanced Features
This guide covers advanced features and techniques for working with the Unreal OpenAI API.
## Streaming Responses
For applications that require real-time responses, you can use streaming:
```typescript
import { OpenAI } from 'openai';

// Point the client at the Unreal endpoint via baseURL.
const openai = new OpenAI({
  apiKey: 'your-unreal-api-key',
  baseURL: OPENAI_BASE_URL,
});

async function streamCompletion() {
  const stream = await openai.chat.completions.create({
    model: 'your-preferred-model',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Write a short story about AI.' }
    ],
    stream: true,
  });

  // Each chunk carries an incremental delta; print it as it arrives.
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }
}

streamCompletion();
```

## Function Calling
You can use function calling to enable the model to call functions you define:
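The full request below declares the function schema and lets the model decide when to call it. Once the model responds with a `function_call`, you execute the function yourself and send its result back in a follow-up message with `role: 'function'` so the model can phrase a final answer. A minimal sketch of that second round trip (`getWeather`, its return shape, and the loosely typed client parameter are illustrative assumptions, not part of the API):

```typescript
// Hypothetical local implementation of the function the model "calls".
async function getWeather(location: string, unit = 'celsius') {
  return { location, temperature: 18, unit };
}

// Second round trip: run the function, then send its result back
// so the model can produce a natural-language answer.
async function completeFunctionCall(
  openai: any, // the OpenAI client instance; typed loosely to keep the sketch self-contained
  messages: any[],
  functionCall: { name: string; arguments: string }
) {
  const args = JSON.parse(functionCall.arguments);
  const result = await getWeather(args.location, args.unit);
  return openai.chat.completions.create({
    model: 'your-preferred-model',
    messages: [
      ...messages,
      // Echo the assistant turn that requested the call,
      // then append the function result as a 'function' message.
      { role: 'assistant', content: null, function_call: functionCall },
      { role: 'function', name: functionCall.name, content: JSON.stringify(result) },
    ],
  });
}
```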
```typescript
async function functionCalling() {
  const completion = await openai.chat.completions.create({
    model: 'your-preferred-model',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'What\'s the weather like in San Francisco?' }
    ],
    functions: [
      {
        name: 'get_weather',
        description: 'Get the current weather in a given location',
        parameters: {
          type: 'object',
          properties: {
            location: {
              type: 'string',
              description: 'The city and state, e.g. San Francisco, CA',
            },
            unit: {
              type: 'string',
              enum: ['celsius', 'fahrenheit'],
              description: 'The unit of temperature to use',
            },
          },
          required: ['location'],
        },
      },
    ],
    function_call: 'auto',
  });

  const message = completion.choices[0].message;
  if (message.function_call) {
    const functionName = message.function_call.name;
    const functionArgs = JSON.parse(message.function_call.arguments);
    console.log(`Function called: ${functionName}`);
    console.log(`Arguments: ${JSON.stringify(functionArgs, null, 2)}`);

    // Here you would execute the actual function, for example:
    // if (functionName === 'get_weather') {
    //   const weather = await getWeather(functionArgs.location, functionArgs.unit);
    //   return weather;
    // }
  }
}
```

## Request Batching
For applications that need to process multiple requests efficiently:
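The example below issues every request at once with `Promise.all`, which is fastest but can trip rate limits for long prompt lists. An alternative is to process prompts in fixed-size chunks; a minimal sketch of that variant (the chunk size and the loosely typed client parameter are illustrative):

```typescript
// Split an array into fixed-size chunks.
function chunkArray<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Process prompts a few at a time instead of all at once,
// waiting for each chunk to finish before starting the next.
async function batchInChunks(
  openai: any, // the OpenAI client instance; typed loosely to keep the sketch self-contained
  prompts: string[],
  size = 2
): Promise<string[]> {
  const results: string[] = [];
  for (const chunk of chunkArray(prompts, size)) {
    const responses = await Promise.all(
      chunk.map(prompt =>
        openai.chat.completions.create({
          model: 'your-preferred-model',
          messages: [{ role: 'user', content: prompt }],
        })
      )
    );
    results.push(...responses.map(r => r.choices[0].message.content ?? ''));
  }
  return results;
}
```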
```typescript
async function batchRequests() {
  const prompts = [
    'Explain quantum computing in simple terms',
    'Write a haiku about programming',
    'List 3 tips for productivity'
  ];

  // Fire all requests concurrently and wait for every response.
  const requests = prompts.map(prompt =>
    openai.chat.completions.create({
      model: 'your-preferred-model',
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: prompt }
      ],
    })
  );

  const responses = await Promise.all(requests);

  responses.forEach((response, index) => {
    console.log(`Response for prompt "${prompts[index]}":`);
    console.log(response.choices[0].message.content);
    console.log('---');
  });
}
```

## Rate Limiting and Retries
Implement rate limiting and retries to handle API limitations:
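The wrapper class below retries on HTTP 429 with a fixed exponential backoff. Many APIs also send a `Retry-After` header on 429 responses; when it is present you can honor the server's suggested delay instead of guessing. A sketch of that variant (it assumes the caught error exposes a `status` and a plain `headers` map with a delay-in-seconds value, which is true of openai-node's `APIError` but should be verified for your client version):

```typescript
// Parse a Retry-After header value into milliseconds.
// Handles the delay-in-seconds form ("2"); anything else
// (including the HTTP-date form) falls back to a default.
function retryAfterMs(header: string | undefined, fallbackMs = 1000): number {
  const seconds = Number(header);
  return Number.isFinite(seconds) && seconds > 0 ? seconds * 1000 : fallbackMs;
}

// Retry a request on 429, waiting for the server-suggested delay.
async function withRetryAfter<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error: any) {
      if (error.status !== 429 || attempt >= maxRetries) throw error;
      const delay = retryAfterMs(error.headers?.['retry-after']);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
```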
```typescript
import { OpenAI } from 'openai';
import { setTimeout } from 'timers/promises';

class RateLimitedOpenAI {
  private openai: OpenAI;
  private maxRetries: number;
  private retryDelay: number;

  constructor(apiKey: string, baseURL: string, maxRetries = 3, retryDelay = 1000) {
    this.openai = new OpenAI({
      apiKey,
      baseURL,
    });
    this.maxRetries = maxRetries;
    this.retryDelay = retryDelay;
  }

  async createChatCompletion(params: any) {
    let retries = 0;
    // Use a local delay so the backoff resets for each call,
    // rather than growing permanently on the instance.
    let delay = this.retryDelay;
    while (true) {
      try {
        return await this.openai.chat.completions.create(params);
      } catch (error: any) {
        if (error.status === 429 && retries < this.maxRetries) {
          // Rate limited: wait, then retry with exponential backoff
          retries++;
          console.log(`Rate limited. Retrying in ${delay}ms (${retries}/${this.maxRetries})`);
          await setTimeout(delay);
          delay *= 2;
        } else {
          throw error;
        }
      }
    }
  }
}

// Usage
const client = new RateLimitedOpenAI('your-unreal-api-key', OPENAI_BASE_URL);
const response = await client.createChatCompletion({
  model: 'your-preferred-model',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' }
  ],
});
```

## Model Selection Strategy
Implement a strategy to select the best model based on the task:
```typescript
function selectModel(task: 'general' | 'code' | 'creative' | 'analysis') {
  switch (task) {
    case 'general':
      return 'general-purpose-model';
    case 'code':
      return 'code-specialized-model';
    case 'creative':
      return 'creative-writing-model';
    case 'analysis':
      return 'data-analysis-model';
    default:
      return 'default-model';
  }
}

async function smartCompletion(prompt: string, task: 'general' | 'code' | 'creative' | 'analysis') {
  const model = selectModel(task);
  const completion = await openai.chat.completions.create({
    model,
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: prompt }
    ],
  });
  return completion.choices[0].message.content;
}
```

These advanced techniques will help you build more sophisticated applications with the Unreal OpenAI API.