Confirm this is an issue with the Python library and not an underlying OpenAI API
- This is an issue with the Python library
Describe the bug
Description:
The results returned by the moderation endpoint do not align with the expected schema.

Details:
- In the `Moderation.categories` field (of type `Categories`), all fields are annotated as `bool`. However, when the moderation endpoint is called, the fields `illicit` and `illicit_violent` return `None` instead of `True` or `False`.
- The same issue occurs with the `category_scores` field (of type `CategoryScores`), where all fields are expected to be `float`. Yet, `illicit` and `illicit_violent` are also returned as `None`.
Expected Behavior:
- If the `None` values for `illicit` and `illicit_violent` are expected behavior, these fields should be annotated as `Optional` in the schema (see the sketch below).
- If this is not expected behavior, the API should be corrected to ensure that these fields return appropriate boolean or float values.
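For reference, a minimal sketch of what the optional annotation might look like, assuming the models keep their current field names in `openai/types/moderation.py` (illustrative only, not the library's actual code):

```python
from typing import Optional

from pydantic import BaseModel


class Categories(BaseModel):
    # Categories that may come back as None would be marked Optional...
    illicit: Optional[bool] = None
    illicit_violent: Optional[bool] = None
    # ...while the remaining categories keep their plain bool annotation.
    harassment: bool
    hate: bool


class CategoryScores(BaseModel):
    illicit: Optional[float] = None
    illicit_violent: Optional[float] = None
    harassment: float
    hate: float
```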
Additional Notes:
It is surprising that Pydantic does not throw an error for these mismatches and allows `None` values to be returned; I could not manually create a `Categories` object with any `None` value in it. (A possible explanation is sketched below.)
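One plausible explanation (an assumption about the SDK internals, not something verified here) is that response models are built with Pydantic's `model_construct()`, which skips validation, so a value that would fail normal construction can still end up on the object:

```python
from pydantic import BaseModel, ValidationError


class Categories(BaseModel):
    illicit: bool
    illicit_violent: bool


# Normal, validated construction rejects None for a bool field:
try:
    Categories(illicit=None, illicit_violent=None)
except ValidationError as exc:
    print(f"validated construction fails with {exc.error_count()} errors")

# model_construct() bypasses validation entirely, so None slips through
# despite the bool annotations:
c = Categories.model_construct(illicit=None, illicit_violent=None)
print(c.illicit, c.illicit_violent)  # None None
```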
To Reproduce
Call the moderation endpoint and inspect the `illicit` and `illicit_violent` fields on `response.results[0].categories` and `response.results[0].category_scores`.
Result I'm getting:
```
response.results: [
    Moderation(
        categories=Categories(
            harassment=False,
            harassment_threatening=False,
            hate=False,
            hate_threatening=False,
            illicit=None,
            illicit_violent=None,
            self_harm=False,
            self_harm_instructions=False,
            self_harm_intent=False,
            sexual=False,
            sexual_minors=False,
            violence=False,
            violence_graphic=False,
            self-harm=False,
            sexual/minors=False,
            hate/threatening=False,
            violence/graphic=False,
            self-harm/intent=False,
            self-harm/instructions=False,
            harassment/threatening=False,
        ),
        category_applied_input_types=None,
        category_scores=CategoryScores(
            harassment=0.000255020015174523,
            harassment_threatening=1.3588138244813308e-05,
            hate=2.8068381652701646e-05,
            hate_threatening=1.0663524108167621e-06,
            illicit=None,
            illicit_violent=None,
            self_harm=9.841909195529297e-05,
            self_harm_instructions=7.693658517382573e-06,
            self_harm_intent=7.031533459667116e-05,
            sexual=0.013590452261269093,
            sexual_minors=0.0031673426274210215,
            violence=0.00022930897830519825,
            violence_graphic=4.927426198264584e-05,
            self-harm=9.841909195529297e-05,
            sexual/minors=0.0031673426274210215,
            hate/threatening=1.0663524108167621e-06,
            violence/graphic=4.927426198264584e-05,
            self-harm/intent=7.031533459667116e-05,
            self-harm/instructions=7.693658517382573e-06,
            harassment/threatening=1.3588138244813308e-05,
        ),
        flagged=False,
    ),
]
```
Code snippets
```python
import openai

client = openai.OpenAI()
response = client.moderations.create(input="text")
print(response.results[0])
```
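As a caller-side workaround until the annotations (or the API response) are fixed, the `None` values can be coerced explicitly; a sketch continuing from the snippet above:

```python
result = response.results[0]

# Treat a missing category as "not flagged" and a missing score as 0.0
# instead of propagating None.
illicit_flagged = bool(result.categories.illicit)      # None -> False
illicit_score = result.category_scores.illicit or 0.0  # None -> 0.0
print(illicit_flagged, illicit_score)
```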
OS
macOS
Python version
Python 3.12.4
Library version
openai 1.51.2