o3-mini

Neural Network

GPT o3-mini is a neural network from OpenAI, released on January 31, 2025. It focuses on programming, mathematics, and analytics tasks. Users get fast, accurate answers even in complex areas, saving time and simplifying decision-making.

Max answer length (in tokens): 100,000
Context size (in tokens): 200,000
Prompt cost (per 1M tokens): $1.24
Answer cost (per 1M tokens): $4.95
Image prompt (per 1K tokens): $0

*Prices for using the API.
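To make the per-million-token rates above concrete, here is a small sketch that estimates the cost of a single request at those rates ($1.24 per 1M prompt tokens, $4.95 per 1M answer tokens). The function name and token counts are illustrative, not part of any API:

```javascript
// Sketch: estimating API cost at the rates listed above.
const PROMPT_RATE = 1.24 / 1_000_000; // $ per prompt token
const ANSWER_RATE = 4.95 / 1_000_000; // $ per answer token

function estimateCost(promptTokens, answerTokens) {
  return promptTokens * PROMPT_RATE + answerTokens * ANSWER_RATE;
}

// e.g. a 10,000-token prompt with a 2,000-token answer:
console.log(estimateCost(10_000, 2_000).toFixed(4)); // "0.0223"
```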
Providers for o3-mini

On BotHub, you can select your own providers for requests. If you don't make a selection, we will automatically find suitable providers that can handle the size and parameters of your request.
Code example and API for o3-mini

We offer full access to the OpenAI API through our service. All our endpoints fully match OpenAI's endpoints and can be used both with plugins and when developing your own software via the SDK. Create an API key to get started.
JavaScript
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: '<your bothub access token>',
  baseURL: 'https://openai.bothub.chat/v1'
});

// Text generation — single response
async function generate() {
  const chatCompletion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'o3-mini',
  });
  console.log(chatCompletion.choices[0].message.content);
}

// Text generation — streaming response
async function generateStream() {
  const stream = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'o3-mini',
    stream: true,
  });

  // Each chunk carries an incremental piece of the answer
  for await (const chunk of stream) {
    const part = chunk.choices[0].delta?.content ?? '';
    process.stdout.write(part);
  }
}

generate().then(generateStream);

How does o3-mini work?

The key advantages of GPT o3-mini include a 200,000-token context for in-depth dialogues, three levels of reasoning (low, medium, high), and an affordable cost compared to GPT-4o. In tests, the model is 24% faster than o1-mini and demonstrates high accuracy in mathematical and scientific tasks. Its built-in JSON handling facilitates automation, and its function-calling feature simplifies application integration. Ultimately, users save resources, solve complex tasks quickly, and enhance the quality of their projects.
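As an illustration of the reasoning levels and JSON handling mentioned above, the sketch below builds a request payload using the `reasoning_effort` and `response_format` parameters of the OpenAI Chat Completions API; the helper function and prompt text are our own illustrative assumptions, not part of the API:

```javascript
// Sketch: a request payload that selects a reasoning level and asks for JSON output.
// buildRequest is a hypothetical helper; the payload fields follow the
// OpenAI Chat Completions API used by OpenAI-compatible endpoints.
function buildRequest(prompt, effort = 'medium') {
  return {
    model: 'o3-mini',
    reasoning_effort: effort,                 // 'low' | 'medium' | 'high'
    response_format: { type: 'json_object' }, // request a valid-JSON reply
    messages: [{ role: 'user', content: prompt }],
  };
}

// The payload is passed unchanged to the client from the code example above:
//   const completion = await openai.chat.completions.create(
//     buildRequest('List three prime numbers as JSON', 'high'));
const payload = buildRequest('List three prime numbers as JSON', 'high');
console.log(payload.model, payload.reasoning_effort);
```

Higher effort levels spend more reasoning tokens per answer, trading latency and cost for accuracy on harder math and coding tasks.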