How do I get the total token usage after a chat completion? #2463
Answered by Istituto-freudinttheprodev

MatteoMgr2008 asked this question in Q&A
I'm using the `openai` Python library. It works fine, but I'd like to see how many tokens are used for:

- the prompt
- the completion
- the total
Is there a way to get that info from the response? Here's my code:

```python
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me a joke"}]
)
```

Thanks!
Answered by Istituto-freudinttheprodev, Jul 14, 2025
Yes, you can get detailed token usage directly from the response! The `openai.ChatCompletion.create()` response includes a `.usage` object with `prompt_tokens`, `completion_tokens`, and `total_tokens`.
Just do this:

```python
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me a joke"}]
)
print(response.usage)
```

Output will look like:

```python
{
  'prompt_tokens': 7,
  'completion_tokens': 12,
  'total_tokens': 19
}
```

This is super helpful for tracking costs or usage limits. ✅ Let me know if it helps, and feel free to mark as Answer!
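If the goal is cost tracking, the counts in `usage` can be turned into a dollar estimate. Here's a minimal sketch; the per-1K-token prices below are placeholders I made up for illustration, not current OpenAI pricing, so substitute the real rates for your model:

```python
# Minimal cost-tracking sketch. The prices below are PLACEHOLDER values,
# not official OpenAI pricing -- check the current pricing page before use.
PRICE_PER_1K = {
    "gpt-3.5-turbo": {"prompt": 0.0005, "completion": 0.0015},  # assumed rates
}

def estimate_cost(model: str, usage: dict) -> float:
    """Estimate the dollar cost of one completion from its usage dict."""
    rates = PRICE_PER_1K[model]
    prompt_cost = usage["prompt_tokens"] / 1000 * rates["prompt"]
    completion_cost = usage["completion_tokens"] / 1000 * rates["completion"]
    return prompt_cost + completion_cost

# Example with the usage numbers shown above:
usage = {"prompt_tokens": 7, "completion_tokens": 12, "total_tokens": 19}
print(f"${estimate_cost('gpt-3.5-turbo', usage):.6f}")
```

Prompt and completion tokens are priced separately because most models charge different rates for input and output, which is exactly why the response breaks the counts out instead of only reporting `total_tokens`.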
Answer selected by MatteoMgr2008