Paste text to instantly count tokens, words, and characters for any major LLM.
Compare context windows, max output lengths, and API pricing across all major LLM providers.
| Model | Provider | Context Window | Max Output | Price (Input / Output per 1M tokens) |
|---|---|---|---|---|
| GPT-5.4 | OpenAI | 1M tokens | 32,768 tokens | See openai.com/pricing |
| GPT-5.4 mini | OpenAI | 1M tokens | 16,384 tokens | See openai.com/pricing |
| GPT-4.1 | OpenAI | 1M tokens | 32,768 tokens | See openai.com/pricing |
| GPT-4.1 mini | OpenAI | 1M tokens | 16,384 tokens | See openai.com/pricing |
| o3 | OpenAI | 200K tokens | 100,000 tokens | $0.40 / $1.60 |
| o4-mini | OpenAI | 200K tokens | 100,000 tokens | $1.10 / $4.40 |
| Claude Opus 4.6 | Anthropic | 1M tokens | 32,000 tokens | $5.00 / $25.00 |
| Claude Sonnet 4.6 | Anthropic | 1M tokens | 8,192 tokens | $3.00 / $15.00 |
| Gemini 2.5 Pro | Google | 1M tokens | 65,536 tokens | $1.25 / $10.00 |
| Gemini 2.5 Flash | Google | 1M tokens | 65,536 tokens | $0.15 / $0.60 |
| Gemini 3.1 Pro | Google | 1M tokens | 64,000 tokens | $2.00 / $12.00 |
| Gemini 3.1 Flash Lite | Google | 1M tokens | 64,000 tokens | $0.25 / $1.50 |
Prices are from official provider pricing pages as of March 2026. Some prices are shown as "See pricing page" because rates change frequently; check openai.com/pricing, docs.anthropic.com/pricing, and ai.google.dev/pricing for the latest. Note: GPT-4o was retired in February 2026; GPT-4.1 and GPT-5.4 are the current models. Token estimates use average ratios (~0.75 words per token for OpenAI/Google, ~0.80 for Anthropic).
1,000 tokens corresponds to roughly 750 words for OpenAI models (GPT-5.4, GPT-4.1, o3) and roughly 800 words for Anthropic's Claude models. The exact count varies with word length, punctuation, and special characters.
GPT-5.4, GPT-4.1, Claude Opus 4.6, and Gemini 2.5 Pro all support 1 million tokens of context. The o3 and o4-mini reasoning models support 200,000 tokens. 1 million tokens is roughly equivalent to 750,000 words or a 1,500-page book.
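The words-and-pages equivalence above follows directly from the tool's average ratios. A quick sketch of the arithmetic (the ~0.75 words-per-token figure is the tool's average, and ~500 words per printed page is an assumed convention, not an exact standard):

```python
# Convert a 1M-token context window into rough word and page counts,
# using ~0.75 words per token and an assumed ~500 words per page.
context_tokens = 1_000_000
words = int(context_tokens * 0.75)   # ~750,000 words
pages = words // 500                 # ~1,500 pages
print(words, pages)                  # prints: 750000 1500
```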
ChatGPT has a fixed context window. When your conversation exceeds the token limit, older messages are dropped. Use ChatGPT Toolbox to search and export important conversations before they're lost.
Be concise in prompts, avoid repeating context, use system instructions for persistent rules, and break long tasks into separate conversations. ChatGPT Toolbox's Prompt Library helps you save optimized prompts for reuse.
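One way to apply the "system instructions for persistent rules" tip is to put standing rules in a single system message rather than repeating them in every turn. A minimal sketch using the OpenAI chat message format (the model name is illustrative, and no API request is actually made here):

```python
# Persistent rules live in one system message; each user turn then
# stays short, which saves tokens over a long conversation.
messages = [
    {"role": "system",
     "content": "You are a concise assistant. Answer in bullet points."},
    {"role": "user", "content": "Summarize the Q3 launch plan."},
]
# With the openai Python package, this payload would be sent as, e.g.:
# client.chat.completions.create(model="gpt-4.1", messages=messages)
```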
This tool provides estimates based on average token-per-word ratios. For exact counts, use OpenAI's tiktoken library (Python) or the Anthropic tokenizer API. Our estimates are within 5-10% of actual counts for English text.
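The estimation approach described above can be sketched in a few lines. This is a heuristic using the average words-per-token ratios quoted on this page (~0.75 for OpenAI/Google, ~0.80 for Anthropic), not the providers' actual tokenizers; for exact counts, use tiktoken or the Anthropic tokenizer API as noted.

```python
# Heuristic token estimator based on average words-per-token ratios.
# These ratios are rough averages for English text; real tokenizers
# (tiktoken, Anthropic's API) should be used when exact counts matter.
import math

WORDS_PER_TOKEN = {
    "openai": 0.75,
    "google": 0.75,
    "anthropic": 0.80,
}

def estimate_tokens(text: str, provider: str = "openai") -> int:
    """Estimate token count from the whitespace-delimited word count."""
    words = len(text.split())
    return math.ceil(words / WORDS_PER_TOKEN[provider])

print(estimate_tokens("the quick brown fox"))               # 4 words -> 6 tokens
print(estimate_tokens("the quick brown fox", "anthropic"))  # 4 words -> 5 tokens
```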
Add folders, search, export, and prompt management to ChatGPT — free to start.