OpenAI tokens and pricing | HCLTech

OpenAI tokens and pricing

This blog aims to provide insights into OpenAI tokens, including their calculation and processing charges, while using the APIs provided by OpenAI.
Naveen Kumar Jain
Group Technical Manager
3 minutes read

OpenAI is a private AI research laboratory founded in 2015 with the aim of creating, developing, directing and training AI that benefits civilization, addressing problems across many areas of everyday life.

Initially focused on developing AI and ML tools for video games and other recreational purposes, OpenAI later shifted its focus to general AI development and research, resulting in the creation and training of AI models such as GPT, DALL-E, Whisper and Codex.

Although OpenAI is a separate entity, Microsoft partnered with it in 2019, and OpenAI's models were later made available on Microsoft's cloud platform as an enterprise offering known as Azure OpenAI.

Two versions of OpenAI are available in the AI market:

  1. OpenAI Official available on
  2. Azure OpenAI available on

This blog provides insights into OpenAI tokens, including how they are calculated and charged when using the APIs provided by OpenAI. It draws on GenAI capabilities, such as generating code and images, conducting Q&A and summarizing a given context, to facilitate a better understanding of OpenAI's offerings and functionality.

Tokens in OpenAI

Tokens in OpenAI are the numerical encoding of the sequences of characters or words that the models see in user prompts and system responses. These sequences are converted using the Byte Pair Encoding (BPE) technique.

The AI models at the core of OpenAI do not perceive or understand text the way humans do. Instead, they interpret sequences of numbers, referred to as tokens, generated through BPE.

Tokens can be treated as pieces of words, with the prompt (comprising words or sentences) converted into tokens before API processing begins. Tokens differ from simple word splits, as they may include trailing spaces and sub-words. Word splitting is language-dependent, so the number of tokens for the same content may vary across languages. To understand tokens in terms of length, some basic rules of thumb apply:

  • 1 token ~= 4 characters in English
  • 1 token ~= ¾ of a word
  • 100 tokens ~= 75 words
  • 1-2 sentences ~= 30 tokens
  • 1 paragraph ~= 100 tokens
  • 1,500 words ~= 2,048 tokens
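As a quick sketch, the rules of thumb above can be turned into a rough estimator in Python. This is a heuristic only, and `estimate_tokens` is a hypothetical helper name; real counts always come from the tokeniser itself.

```python
def estimate_tokens(text: str) -> int:
    """Roughly estimate the token count of English text without a tokeniser."""
    by_chars = len(text) / 4             # 1 token ~= 4 characters
    by_words = len(text.split()) / 0.75  # 1 token ~= 3/4 of a word
    # Average the two heuristics and round to the nearest integer.
    return round((by_chars + by_words) / 2)

print(estimate_tokens("Hello, my name is Naveen Jain."))
```

For short or unusual strings the heuristic can be off by several tokens; it is only useful for ballpark sizing of prompts.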

Tiktoken is the open-source library that OpenAI uses to convert text into tokens. Some of its important characteristics include:

  1. Tokenisation is reversible and lossless, so tokens can be converted back into the original text.
  2. Tiktoken works on arbitrary text, even text that was not in the tokeniser's training data.
  3. Tiktoken compresses the text: the token sequence is shorter than the byte sequence of the original text. On average, each token corresponds to about four bytes.
  4. Tiktoken lets the model see common sub-words. For instance, "ing" is a common sub-word in English, so BPE encodings will often split "encoding" into tokens like "encod" and "ing" (instead of, e.g., "enc" and "oding"). Because the model then sees the "ing" token again and again in different contexts, this helps it generalize and better understand grammar.
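To illustrate the sub-word idea in point 4, here is a simplified greedy longest-match split over an invented vocabulary. This is not the real BPE merge algorithm, only a sketch of how "encoding" can end up as "encod" + "ing" when those sub-words are in the vocabulary.

```python
# Invented toy vocabulary for the example; real BPE vocabularies are
# learned from data and contain tens of thousands of entries.
VOCAB = {"encod", "ing", "enc", "od", "e", "n", "c", "o", "d", "i", "g"}

def greedy_tokenise(word: str, vocab: set) -> list:
    """Split a word into sub-words by always taking the longest match."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest possible sub-word starting at position i first.
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            raise ValueError(f"no sub-word covers position {i}")
    return tokens

print(greedy_tokenise("encoding", VOCAB))  # ['encod', 'ing']
```

Because "encod" is the longest vocabulary entry starting at position 0, the greedy split picks it before the shorter "enc", leaving "ing" as the second token.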

Example of token calculations (Ref:

Text: Hello, my name is Naveen Jain.
Tokens: [9906, 856, 836, 374, 452, 525, 268, 96217, 13]
Tokens: 9, Characters: 30

Text: In this paper we are discussing on OpenAI tokens and prices.
Tokens: [644, 420, 5684, 584, 527, 25394, 389, 5377, 15836, 11460, 323, 7729, 13]
Tokens: 13, Characters: 60

OpenAI pricing factor

OpenAI pricing is based on the type of model API invoked to fulfil the request. Below is the publicly available pricing information for Azure OpenAI and OpenAI; we have listed a few models, but not all:

Language models (prices per 1,000 tokens)

Model           Context   Azure OpenAI Prompt   Azure OpenAI Completion   OpenAI Prompt   OpenAI Completion
GPT-3.5-Turbo   4K        $0.0015               $0.0020                   $0.0015         $0.0020
GPT-3.5-Turbo   16K       $0.0030               $0.0040                   $0.0010         $0.0020
GPT-4           8K        $0.0300               $0.0600                   $0.0300         $0.0600
GPT-4           32K       $0.0600               $0.1200                   $0.0600         $0.1200

Image models (prices per 100 images, Standard)

Model     Azure OpenAI   OpenAI
DALL-E    $2             $2

Embedding models (prices per 1,000 tokens, Standard)

Model   Azure OpenAI   OpenAI
Ada     $0.0001        $0.0001

How to calculate pricing for OpenAI API calls

The price of an API call is driven by the model that serves the incoming request and returns the response. The price consists of the prompt (i.e., the user's request to the system) plus the completion (i.e., the system's response to the user's request).

Example to understand the prices:

  • Model: GPT-4 (8K)
  • Prices for prompt: $0.0300/1000 Tokens
  • Prices for completion: $0.0600/1000 Tokens
  • No. of tokens in prompt: 100
  • No. of tokens in completion: 450

Hence the calculation is as follows:
((token_in_prompt * RATE_PROMPT) + (token_in_completion * RATE_COMPLETION)) / 1000

After substituting the corresponding values to the above formula, the cost would be:
((100 * $0.0300) + (450 * $0.0600))/1000 = $0.0300

With this, we can easily understand the pricing of API calls in terms of tokens.
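The formula above can be sketched as a small Python helper. The function name `api_call_cost` is illustrative, not part of any OpenAI API; the rates are the GPT-4 (8K) prices per 1,000 tokens from the table above.

```python
def api_call_cost(prompt_tokens: int, completion_tokens: int,
                  rate_prompt: float, rate_completion: float) -> float:
    """Cost of one API call, with rates expressed per 1,000 tokens."""
    return (prompt_tokens * rate_prompt
            + completion_tokens * rate_completion) / 1000

# GPT-4 (8K): $0.0300 per 1K prompt tokens, $0.0600 per 1K completion tokens.
cost = api_call_cost(100, 450, 0.0300, 0.0600)
print(f"${cost:.4f}")  # $0.0300
```

Substituting the example values (100 prompt tokens, 450 completion tokens) reproduces the $0.0300 figure calculated above.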


This blog gives a high-level overview of the theory behind tokens in OpenAI and the calculation used to understand the price, or cost, per API call.

Reference: tiktoken · PyPI
