gemini-2.5-pro-preview

Neural Network

Max answer length: 65,536 tokens
Context size: 1,048,576 tokens
Prompt cost: $1.25 per 1M tokens
Answer cost: $10 per 1M tokens
Image prompt: $0.01 per 1K tokens
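The pricing above can be turned into a quick per-request cost estimate. A minimal sketch — the `estimateCost` helper is illustrative (not part of any BotHub SDK); the rates come from the table above:

```javascript
// Rates from the pricing table above (USD per 1M tokens)
const PROMPT_COST_PER_M = 1.25;
const ANSWER_COST_PER_M = 10;

// Estimate the USD cost of one request from its token counts.
function estimateCost(promptTokens, answerTokens) {
  return (promptTokens / 1e6) * PROMPT_COST_PER_M
       + (answerTokens / 1e6) * ANSWER_COST_PER_M;
}

// e.g. a 200,000-token prompt with a 10,000-token answer:
console.log(estimateCost(200000, 10000).toFixed(2)); // 0.25 + 0.10 => "0.35"
```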

Providers for gemini-2.5-pro-preview

On BotHub, you can select your own providers for requests. If you haven't made a selection, we will automatically find suitable providers that can handle the size and parameters of your request.
Code example and API for gemini-2.5-pro-preview

We offer full access to the OpenAI API through our service. All our endpoints are fully compatible with the OpenAI endpoints and can be used both with plugins and when developing your own software through the SDK.
Javascript
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: '<your bothub access token>',
  baseURL: 'https://bothub.chat/api/v2/openai/v1'
});

// Text generation (non-streaming): wait for the full response
async function main() {
  const chatCompletion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gemini-2.5-pro-preview',
  });
  console.log(chatCompletion.choices[0].message.content);
}

// Text generation (streaming): print tokens as they arrive
async function mainStream() {
  const stream = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gemini-2.5-pro-preview',
    stream: true
  });

  for await (const chunk of stream) {
    const part = chunk.choices[0].delta?.content ?? '';
    process.stdout.write(part);
  }
}

main();
