Best of the week

More neural networks on Bothub

Claude-3.7 Sonnet
A model from Anthropic offering enhanced reasoning and adaptability, with an innovative 'extended thinking' mode that lets you choose between quick responses and deep analysis. The model supports a context window of up to 200,000 tokens and a maximum output of 128,000 tokens.
o4 Mini High
The o4-mini model with a high reasoning_effort setting for more thorough reasoning. It combines speed and multimodality with accuracy on STEM and visual tasks within a 200K-token context.
Midjourney v7
An updated image generator with improved detail (especially for skin and hair) and more realistic lighting and reflections. The model creates more dynamic and diverse scenes, moving away from standard stock-photo-style images.
GPT-4.1
A model for programming and precise instruction following with a context of up to 1 million tokens. It surpasses GPT-4o in coding (54.6% on SWE-bench) and instruction following (a 10.5% improvement).
Flux-1.1 Pro Ultra
An enhanced version of the image generation model with support for 4x higher resolution (up to 4 MP) while maintaining a generation speed of 10 seconds per image. The model offers a 'raw mode' for creating more natural images.
Gemini-2.5 Pro Preview
Google's model capable of 'thinking' before answering for greater accuracy and performance. Leader on the LMArena platform with advanced capabilities in reasoning, coding, and multimodality (text, audio, images, video).

Available neural network models

Cost in Caps (ELITE tariff)
Model | Context size (in tokens) | Output size (in tokens) | Prompt (per 1 token) | Image prompt (per 1K tokens) | Response (per 1 token)
gpt-4.1-nano | 1 047 576 | 32 768 | 0.07 | 0 | 0.3
gpt-4o-mini | 128 000 | 16 384 | 0.11 | 162.75 | 0.45
gpt-4.1 | 1 047 576 | 32 768 | 1.5 | 0 | 6
gpt-4o | 128 000 | 4 096 | 1.88 | 2 709.75 | 7.5
gpt-4.1-mini | 1 047 576 | 32 768 | 0.3 | 0 | 1.2
gpt-4.5-preview | 128 000 | 16 384 | 56.25 | 81 281.25 | 112.5
o3 | 200 000 | 100 000 | 1.5 | 1 147.5 | 6
o1 | 200 000 | 100 000 | 11.25 | 16 256.25 | 45
* Our markup on these prices is 5% and is already included in the cost of all packages above Basic (Premium and higher).
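
As a rough illustration of how these per-token prices translate into a request cost, here is a minimal sketch; it covers only two models from the table, and the function and names are hypothetical, not the service's actual billing code:

```python
# Rough sketch: estimating the Caps cost of one request from the table above.
# The prices are the per-token Caps rates listed for two of the models; this is
# an illustration, not the service's actual billing code.

PRICES_CAPS = {
    # model: (prompt Caps per token, response Caps per token)
    "gpt-4o": (1.88, 7.5),
    "gpt-4.1-mini": (0.3, 1.2),
}

def request_cost_caps(model: str, prompt_tokens: int, response_tokens: int) -> float:
    """Estimate the Caps cost of a text-only request (image tokens not included)."""
    prompt_price, response_price = PRICES_CAPS[model]
    return prompt_tokens * prompt_price + response_tokens * response_price

# Example: a 1,000-token prompt with a 500-token answer on gpt-4o
print(request_cost_caps("gpt-4o", 1_000, 500))  # 1000*1.88 + 500*7.5 = 5630.0 Caps
```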

LLM Request

Cost of a single request in the dashboard
All tariffs
Used tokens + 0.01 USD per request
Note: Easy Writer is billed differently. Each text generation with Easy Writer costs an additional 0.1 USD per request on top of the token cost of a regular LLM request described above.
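
A minimal sketch of how the per-request fees described above combine with token costs; `token_cost_usd` is a placeholder for the model-dependent price of the tokens actually used:

```python
# Sketch of the per-request pricing rule described above (illustrative only).
# `token_cost_usd` stands for the USD cost of the tokens actually used, which
# depends on the chosen model's per-token rates.

LLM_REQUEST_FEE_USD = 0.01   # flat fee added to every LLM request
EASY_WRITER_FEE_USD = 0.10   # extra fee per Easy Writer generation

def llm_request_cost_usd(token_cost_usd: float, easy_writer: bool = False) -> float:
    cost = token_cost_usd + LLM_REQUEST_FEE_USD
    if easy_writer:
        cost += EASY_WRITER_FEE_USD
    return cost

print(round(llm_request_cost_usd(0.05), 2))                    # 0.06
print(round(llm_request_cost_usd(0.05, easy_writer=True), 2))  # 0.16
```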

Image Generation

Cost of a single generation by models
MidJourney — Relax
0.03 USD / 20 000 Caps per generation
MidJourney — Fast
0.06 USD / 40 000 Caps per generation
MidJourney — Turbo
0.12 USD / 80 000 Caps per generation
Dall-E
0.03 USD / 20 000 Caps per generation
Flux
0 USD / 1 666 Caps per generation
Stable Diffusion
0.04 USD / 26 250 Caps per generation
GPT Image - Square
0.01 USD / 8 160 Caps per generation
GPT Image - Portrait
0.02 USD / 12 240 Caps per generation
GPT Image - Landscape
0.02 USD / 12 000 Caps per generation

Web Search

Cost of a single web search usage
All tariffs
Used tokens + 0.01 USD per request
Link Analysis
0.01 Caps per character

Video Generation

Cost of creating one second of video
Google Veo — Veo-2
300 000 Caps / 0.45 USD per second
Runway
30 000 Caps / 0.04 USD per second

Speech Synthesis

Cost of speech synthesis per 1,000 characters
TTS
7 500 Caps / 0.01 USD per 1,000 characters
TTS HD
15 000 Caps / 0.02 USD per 1,000 characters

Transcription

Cost of transcription per minute
AssemblyAI — nano
2 000 Caps / 0.003 USD per minute
AssemblyAI — best
5 500 Caps / 0.008 USD per minute
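
A rough sketch combining the three per-unit rates above (video per second, speech synthesis per 1,000 characters, transcription per minute) into simple estimates; the defaults use the Veo-2, TTS, and AssemblyAI nano rates listed, and the functions are illustrative, not the service's billing code:

```python
# Rough, illustrative estimates built from the per-unit rates above
# (video per second, speech synthesis per 1,000 characters, transcription per minute).

def video_cost_caps(seconds: float, caps_per_second: int = 300_000) -> float:
    """Veo-2 rate by default: 300 000 Caps per second."""
    return seconds * caps_per_second

def tts_cost_caps(characters: int, caps_per_1k_chars: int = 7_500) -> float:
    """Standard TTS rate by default: 7 500 Caps per 1,000 characters."""
    return characters / 1_000 * caps_per_1k_chars

def transcription_cost_caps(minutes: float, caps_per_minute: int = 2_000) -> float:
    """AssemblyAI nano rate by default: 2 000 Caps per minute."""
    return minutes * caps_per_minute

print(video_cost_caps(5))           # 1500000 Caps for a 5-second Veo-2 clip
print(tts_cost_caps(3_000))         # 22500.0 Caps for 3,000 characters of speech
print(transcription_cost_caps(30))  # 60000 Caps for a 30-minute recording
```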

Embeddings

Embedding models available through our API.
Model | Description | Embedding dimension | Prompt cost in Caps (per 1 token) | Prompt cost in dollars (per 100,000 tokens)
text-embedding-3-large | The most efficient embedding model | 3 072 | 0.1 | 0.13
text-embedding-3-small | Increased performance compared to the 2nd-generation ada embedding model | 1 536 | 0.01 | 0.02
text-embedding-ada-002 | The most powerful 2nd-generation embedding model, replacing 16 first-generation models | 1 536 | 0.07 | 0.1
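
A minimal sketch of requesting embeddings through the API, assuming an OpenAI-compatible interface; the `base_url` shown is an assumption for illustration, so check the official documentation for the actual endpoint:

```python
# Minimal sketch of requesting embeddings, assuming an OpenAI-compatible endpoint.
# The base_url below is an assumption for illustration; check the official API
# documentation (https://bothub.chat/api/documentation/ru) for the actual endpoint.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_BOTHUB_API_KEY",       # issued in your personal account
    base_url="https://bothub.chat/api",  # assumed endpoint, verify in the docs
)

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="Caps is the internal currency of the service.",
)
vector = response.data[0].embedding
print(len(vector))  # 1 536 dimensions are listed for text-embedding-3-small
```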

What are Caps?

Caps is the internal currency of the service, used to measure the cost of neural network requests and responses. The rate is fixed for each model and depends on its complexity: number of parameters, multimodality, and overall power.

For example:
• ChatGPT-3.5 — ~1 Caps per token
• ChatGPT o1-Pro — ~400+ Caps per token
The higher your tariff, the better the price: 1 million Caps is cheaper on Elite than on Basic.
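
A quick illustration of how strongly the per-token rate affects cost, using the approximate figures from the example above:

```python
# Illustrative only: comparing the Caps cost of the same 1,000-token exchange
# at the approximate rates mentioned above.
TOKENS = 1_000
print(TOKENS * 1)    # ChatGPT-3.5 at ~1 Caps per token  -> ~1 000 Caps
print(TOKENS * 400)  # o1-Pro at ~400+ Caps per token    -> ~400 000 Caps
```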

Still have questions?

What are tokens?

Tokens are the units in which a neural network processes text: parts of words, whole words, or punctuation marks. The number of tokens determines the cost of a request.

How long will 1 million tokens last?

One million tokens of the GPT-4o model are enough to rewrite “The Brothers Karamazov” by F. M. Dostoevsky.

What to do if I run out of tokens?

Purchase additional Caps in your personal account — https://bothub.chat/profile

Why does the neural network pretend to be another?

The neural network does not know which model it is unless this is specified in the system prompt. Without such an instruction, the model's "self-identification" is influenced by many factors, one of which is its training data.

What is context in a neural network?

Context is the amount of information that the neural network retains in memory during a dialogue, affecting the coherence of responses and understanding of previous requests.

What is the context of different neural network models?

GPT o1 Pro and Claude 3.7 Sonnet support up to 200K tokens, Gemini 2.5 Pro works with 1 million tokens, and Gemini 2.0 Pro supports up to 2 million tokens.

What file formats do models read?

Neural networks process TXT, PDF, DOCX, XLSX, CSV, JSON, XML, and HTML documents, as well as JPG and PNG images and MP3 and MP4 audio files.

Can neural networks be used for free?

Yes: models with the ":free" or "-exp" suffix can be used free of charge through the mini-window on the main page, as well as on the model page.

How do neural network models differ from each other?

Models differ in the volume of training data, context size, processing speed, specialization in specific tasks, and ability to work with multimodal content.

How to use models via API?

To integrate models into your applications, you need to obtain an API key in your personal account. More details can be found here: https://bothub.chat/api/documentation/ru.
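
A minimal sketch of a chat request via the API, again assuming an OpenAI-compatible interface; the `base_url` and model id are placeholders, see the documentation linked above for the actual values:

```python
# Minimal sketch of a chat request via the API, assuming an OpenAI-compatible
# interface; the base_url and model id are placeholders, see the documentation
# linked above for the actual values.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_BOTHUB_API_KEY",       # obtained in your personal account
    base_url="https://bothub.chat/api",  # placeholder, verify in the docs
)

reply = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "What are Caps?"}],
)
print(reply.choices[0].message.content)
```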

Can neural networks be used to automate business processes?

Neural networks effectively automate routine tasks of document management, data processing, customer support, and analytics, integrating with existing business systems via API.

Chat with us on Telegram