o1-mini Neural Network
Beep-boop, writing text for you...



bothub.chat/o1-mini
Max. Response Length: 65,536 tokens
Context Size: 128,000 tokens
Prompt Cost: 1.10 USD per 1M tokens
Response Cost: 4.40 USD per 1M tokens
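The rates above make per-request cost easy to estimate. The sketch below is a hypothetical helper (the function name and the assumption of straight per-token billing with no minimums are ours, not BotHub's) that applies the listed prompt and response rates:

```typescript
// Listed o1-mini rates on BotHub (USD per 1M tokens)
const PROMPT_USD_PER_1M = 1.1;
const RESPONSE_USD_PER_1M = 4.4;

// Hypothetical helper: estimate the cost of one request,
// assuming simple per-token billing at the listed rates.
function estimateCostUSD(promptTokens: number, responseTokens: number): number {
  return (
    (promptTokens * PROMPT_USD_PER_1M + responseTokens * RESPONSE_USD_PER_1M) /
    1_000_000
  );
}

// e.g. a 2,000-token prompt with a 10,000-token response
// (o1-mini's reasoning tokens are billed as response tokens):
console.log(estimateCostUSD(2000, 10000).toFixed(4)); // "0.0462"
```

Note how the 4x higher response rate dominates for reasoning-heavy outputs, which is typical for o1-class models.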

BotHub: Try for free
Code example and API for o1-mini

We offer full access to the OpenAI API through our service. All our endpoints are fully compatible with OpenAI's endpoints and can be used both with plugins and when developing your own software through the SDK.

Create API key
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: '<your bothub access token>',
  baseURL: 'https://bothub.chat/api/v2/openai/v1'
});

// Non-streaming text generation: wait for the full completion
async function generate() {
  const chatCompletion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'o1-mini',
  });
  console.log(chatCompletion.choices[0].message.content);
}

// Streaming text generation: consume the response chunk by chunk
async function generateStream() {
  const stream = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'o1-mini',
    stream: true
  });
  for await (const chunk of stream) {
    const part: string | null = chunk.choices[0].delta?.content ?? null;
    if (part) process.stdout.write(part);
  }
}

generate().then(generateStream);
How does o1-mini work?
BotHub gathers information...