AI Token Counter & Tokenizer
Tokens are the units LLMs use to process text — roughly one token per 4 characters of English text, or about 0.75 words. Knowing your token count matters because API pricing is per-token, and every model has a context window limit. This tool gives you that number before you spend a cent.
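The rules of thumb above can be sketched as a quick estimator. This is a minimal illustration of the heuristics, not this tool's actual counting code, and the function names are made up:

```python
import math

def estimate_tokens(text: str) -> int:
    """Rough estimate: ~4 characters per token in English."""
    return max(1, math.ceil(len(text) / 4))

def estimate_tokens_by_words(text: str) -> int:
    """Alternative estimate: ~0.75 words per token, i.e. tokens ≈ words / 0.75."""
    words = len(text.split())
    return max(1, math.ceil(words / 0.75))
```

Real tokenizers (byte-pair encodings like o200k_base) can diverge noticeably from these heuristics, especially on code, punctuation-heavy text, or non-English languages — which is why exact tokenizer support matters.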
Supported models
- OpenAI — GPT-5.4, o3, and o4-mini. Uses the o200k_base tokenizer, so counts are highly accurate for all three.
- Anthropic — Claude Opus 4.6, Sonnet 4.6, and Haiku 4.5. Tokenized with the official Claude tokenizer for reliable counts.
- Google — Gemini 3.1 Pro (1M context) and Gemini 3 Flash. Token counts are estimates since Google doesn't publish a client-side tokenizer.
- xAI — Grok 4 and Grok 4.1 Fast (2M context). Estimated via the o200k_base encoding.
- DeepSeek — V3.2 and R1. Helpful for comparing DeepSeek's low per-token pricing against other providers.
- Meta — Llama 4 Maverick (1M context) and Scout (10M context). Estimates based on the Claude tokenizer.
What you get
Paste or type your prompt and the token count updates as you go. You also see a cost breakdown for input and output tokens at current API rates, plus a bar showing how much of the model's context window you're using.
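The cost breakdown and context bar described above come down to two small calculations. A minimal sketch — the function names are illustrative, and the rates in the example are placeholders, not actual API prices:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Dollar cost given per-million-token API rates for input and output."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

def context_usage(tokens: int, context_window: int) -> float:
    """Fraction of the model's context window in use, capped at 1.0 for the bar."""
    return min(1.0, tokens / context_window)

# e.g. 1,000 input + 500 output tokens at hypothetical $3 / $15 per million:
cost = estimate_cost(1_000, 500, 3.0, 15.0)
usage = context_usage(50_000, 1_000_000)
```

Input and output are priced separately because most providers charge several times more per output token than per input token.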
14 models across 6 providers. Free, no signup, works offline once loaded.