How do I get the total token usage after a chat completion? #2463

Yes, you can get detailed token usage directly from the response!

The response returned by openai.ChatCompletion.create() (the legacy openai Python SDK, pre-1.0) includes a .usage object with:

  • prompt_tokens
  • completion_tokens
  • total_tokens

Just do this:

import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me a joke"}]
)

# Token counts for this call are attached to the response
print(response.usage)

Output will look like:

{
  'prompt_tokens': 7,
  'completion_tokens': 12,
  'total_tokens': 19
}
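Since the fields are plain integers, you can accumulate them across calls to enforce a session budget. A minimal sketch, assuming a made-up `MAX_TOKENS_PER_SESSION` limit and sample usage dicts shaped like the output above (not live API calls):

```python
# Hypothetical per-session budget; tune to your own needs
MAX_TOKENS_PER_SESSION = 10_000

session_total = 0

def record_usage(usage: dict) -> int:
    """Add one response's token count to the session total and return it."""
    global session_total
    session_total += usage["total_tokens"]
    if session_total > MAX_TOKENS_PER_SESSION:
        raise RuntimeError(f"Token budget exceeded: {session_total}")
    return session_total

# Simulated usage dicts, shaped like response.usage above
record_usage({"prompt_tokens": 7, "completion_tokens": 12, "total_tokens": 19})
record_usage({"prompt_tokens": 30, "completion_tokens": 50, "total_tokens": 80})
print(session_total)  # 99
```

In real code you would pass `response.usage` (converted to a dict, or read attribute-wise) into `record_usage` after each call.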

This is super helpful for tracking costs or usage limits.
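For cost tracking specifically, multiply the prompt and completion counts by your model's per-token price. A minimal sketch with placeholder rates (the dollar figures below are made up for illustration, check the current pricing page for real numbers):

```python
def estimate_cost(usage: dict, prompt_rate: float, completion_rate: float) -> float:
    """Estimate cost in USD from a usage dict and per-1K-token rates."""
    prompt_cost = usage["prompt_tokens"] / 1000 * prompt_rate
    completion_cost = usage["completion_tokens"] / 1000 * completion_rate
    return prompt_cost + completion_cost

usage = {"prompt_tokens": 7, "completion_tokens": 12, "total_tokens": 19}
# Placeholder rates: $0.50 per 1K prompt tokens, $1.50 per 1K completion tokens
cost = estimate_cost(usage, prompt_rate=0.5, completion_rate=1.5)
print(f"${cost:.6f}")
```

Prompt and completion tokens are priced separately for most models, which is why the function takes two rates instead of charging `total_tokens` at a single price.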

✅ Let me know if it helps, and feel free to mark as Answer!

Answer selected by MatteoMgr2008