Simple, transparent pricing
Choose the plan that's right for you
LLM Pricing Comparison
| Model Size | Price per 1M Tokens (Input) | Price per 1M Tokens (Output) | Features | Best For |
| --- | --- | --- | --- | --- |
| Small (1B–4B parameters) | $0.150 | $0.600 | Fast inference; low latency; efficient fine-tuning | Chatbots; text classification; sentiment analysis |
| Medium (4B–9B parameters) | $0.250 | $1.000 | Balanced performance; improved context understanding; enhanced language generation | Content generation; summarization; question answering |
* Prices are for illustration only. Actual pricing may vary based on usage volume, specific model configurations, and customizations.
Understanding Token-Based Pricing
Our token-based pricing model ensures you only pay for what you use. Here's how it works:
- Tokens are pieces of text; for English, one token is roughly 4 characters.
- Input tokens are counted when you send requests to the model.
- Output tokens are counted in the model's responses.
- Pricing is calculated separately for input and output tokens, allowing for more precise cost control.
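The steps above can be sketched as a small cost estimator. This is a minimal illustration using the sample prices from the table (which are hypothetical, per 1 million tokens), not a billing implementation:

```python
# Illustrative per-request cost estimator.
# Prices are the sample figures from the table above (dollars per 1M tokens)
# and are for illustration only.
PRICES = {
    "small": {"input": 0.150, "output": 0.600},
    "medium": {"input": 0.250, "output": 1.000},
}


def estimate_tokens(text: str) -> int:
    """Rough token estimate for English text: ~4 characters per token."""
    return max(1, len(text) // 4)


def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Input and output tokens are priced separately, each per 1M tokens."""
    p = PRICES[model]
    return (input_tokens / 1_000_000) * p["input"] + (
        output_tokens / 1_000_000
    ) * p["output"]


# Example: a request with 10,000 input and 2,000 output tokens on the medium model:
# (10,000 / 1M) * $0.250 + (2,000 / 1M) * $1.000 = $0.0025 + $0.0020 = $0.0045
cost = estimate_cost("medium", 10_000, 2_000)
```

Because input and output are metered separately, you can control costs independently: for example, capping the model's maximum output length bounds the (typically more expensive) output-token spend.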
Benefits of token-based pricing:
- Cost-effective: Pay only for the processing you need.
- Scalable: Easily adjust usage based on your needs.
- Transparent: Clear understanding of costs associated with each interaction.