jsontoschema

How many tokens does your JSON cost?

Paste any JSON payload and see the token count. Then convert to a schema and see how much you save.

Example: a sample payload tokenizes to 140 tokens as raw JSON and 56 tokens as a schema — 60% smaller (GPT-4o tokenizer).

Why token count matters

Every LLM API charges by the token. Every model has a context window limit. When you include JSON in a prompt, you're spending tokens on brackets, quotes, repeated keys, and data values that the model may not need.

Knowing the token cost of your JSON helps you make informed decisions about what to include in prompts, how to structure system messages, and when to summarize vs. include raw data.
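To see why a schema is so much cheaper, here is a minimal sketch of schema inference (not the tool's actual algorithm): field names and types appear once, instead of being repeated for every element of an array.

```python
import json

def infer_schema(value):
    """Infer a compact, schema-like type description for a JSON value."""
    if isinstance(value, dict):
        return {k: infer_schema(v) for k, v in value.items()}
    if isinstance(value, list):
        # Assume homogeneous arrays: describe one element, not all of them.
        return [infer_schema(value[0])] if value else []
    return type(value).__name__  # "str", "int", "float", "bool", "NoneType"

users = [{"id": i, "name": f"user{i}", "active": True} for i in range(100)]
raw = json.dumps(users)
schema = json.dumps(infer_schema(users))
print(len(raw), len(schema))  # the schema is a small fraction of the raw payload
```

Character counts stand in for token counts here, but the shape of the savings is the same: the raw payload grows linearly with the number of records while the schema stays constant.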

Real-world examples

~150 tokens for a single user object
~15,000 tokens for 100 user objects
~80 tokens for the schema of those users
99% reduction on large arrays
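The 99% figure follows directly from the approximate counts above: the schema's cost is fixed while the raw payload grows with every object.

```python
raw_tokens = 100 * 150      # ~150 tokens per user object, 100 objects
schema_tokens = 80          # one schema describes all of them
reduction = 1 - schema_tokens / raw_tokens
print(f"{reduction:.1%}")   # → 99.5%
```

These are the rounded figures listed above, not exact tokenizer output; the larger the array, the closer the reduction gets to 100%.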

How it works

The jsontoschema tool shows a token comparison bar at the top of the output: your original JSON's token count, the schema's token count, and the percentage reduction.

Token counts are computed using the GPT-4o tokenizer (o200k_base encoding), running entirely in your browser. The numbers reflect real tokenization, not a rough estimate.

Optimizing your prompts

Use schemas in system prompts

Replace full JSON examples with schemas. The model understands the structure equally well, and your system prompt stays within context limits.
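For example, a system prompt can describe the expected records with a schema instead of pasting full sample objects. The schema text and prompt wording below are illustrative, not output from the tool:

```python
# Hypothetical schema for a user record, written once in the system prompt.
schema = """{
  "id": "number",
  "name": "string",
  "orders": [{"sku": "string", "qty": "number"}]
}"""

system_prompt = (
    "You will receive user records matching this schema:\n"
    f"{schema}\n"
    "Answer questions about the records without restating them."
)
print(system_prompt)
```

The model sees every field name and type exactly once, regardless of how many records later appear in the conversation.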

Keep data for analysis tasks

When the LLM needs to aggregate, filter, or reason about values, include the data. When it needs to generate code or docs, use the schema.
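That decision can be sketched as a small helper; the task labels and function below are hypothetical, not part of the tool:

```python
def prompt_payload(task: str, raw_json: str, schema: str) -> str:
    """Pick what to include in the prompt based on what the model must do."""
    # Value-level reasoning needs the actual data; structure-level tasks
    # (code generation, documentation) only need the shape.
    needs_values = task in {"aggregate", "filter", "analyze"}
    return raw_json if needs_values else schema

print(prompt_payload("analyze", '[{"id": 1}]', '[{"id": "number"}]'))
print(prompt_payload("generate_code", '[{"id": 1}]', '[{"id": "number"}]'))
```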

Need the full converter?

The main tool shows schema and TypeScript output alongside token counts.