Uniform improved client= (direct & global) option for Python and JS #121
Discovered that `client.openai` should have been the openai instance. Also, wrapping the client should not be automatic, because the attached methods may not be the right ones: if `client.openai` is unwrapped and we then wrap it in `prepare_openai`, the methods attached to `client.complete` would still be the unwrapped ones rather than the new wrapped ones.
Simplifies the interface and the code. Need to update the SDK to expose an `unwrap()` method to avoid the hacks.
avoids hacky lookup
```python
has_customization = self.complete is not None or self.embed is not None or self.moderation is not None

# avoid wrapping if we have custom methods (the user may intend not to wrap)
if not has_customization and not isinstance(self.openai, NamedWrapper):
```
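A minimal sketch of how that guard might behave. Everything here is a hypothetical stand-in, not the project's actual classes: `NamedWrapper`, `wrap_openai`, and this toy `LLMClient` only model the behavior described in the diff.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional


class NamedWrapper:
    """Stand-in for the SDK's tracing wrapper."""

    def __init__(self, inner: Any):
        self.inner = inner


def wrap_openai(client: Any) -> NamedWrapper:
    """Stand-in for the SDK call that wraps an openai client."""
    return NamedWrapper(client)


@dataclass
class LLMClient:
    openai: Any
    complete: Optional[Callable] = None
    embed: Optional[Callable] = None
    moderation: Optional[Callable] = None

    def __post_init__(self):
        has_customization = (
            self.complete is not None
            or self.embed is not None
            or self.moderation is not None
        )
        # Avoid wrapping when custom methods are attached: wrapping
        # self.openai would leave those methods bound to the unwrapped
        # object, producing the mismatch described in this thread.
        if not has_customization and not isinstance(self.openai, NamedWrapper):
            self.openai = wrap_openai(self.openai)
```

With no customizations the client gets wrapped; with any custom method attached, it is left alone.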
This caught me off guard when I was reproducing the Azure problems. Now we avoid mix-matching objects like:

```python
openai_obj = openai.OpenAI(...)
client = LLMClient(
    openai=openai_obj,
    complete=openai_obj.chat.completions.create,
    ...
)
```

Previously the `openai_obj` would be wrapped, and the rest of the code would assume that the methods would be wrapped too.
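The failure mode can be shown with plain Python objects. This is a sketch with toy stand-ins (neither class below is the SDK's real wrapper or client): a method reference captured before wrapping bypasses the wrapper entirely.

```python
class TracingWrapper:
    """Toy stand-in for a tracing wrapper around a client."""

    def __init__(self, inner):
        self._inner = inner
        self.calls = []

    def complete(self, prompt):
        self.calls.append(prompt)  # the tracing side effect
        return self._inner.complete(prompt)


class RawClient:
    """Toy stand-in for an unwrapped openai client."""

    def complete(self, prompt):
        return prompt.upper()


raw = RawClient()
bound = raw.complete            # method captured BEFORE wrapping
wrapped = TracingWrapper(raw)   # wrapping happens afterwards

wrapped.complete("a")  # goes through the wrapper: traced
bound("b")             # still the raw bound method: NOT traced

assert wrapped.calls == ["a"]  # only the wrapped call was recorded
```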
Braintrust eval report: Autoevals (bra-2135--azure-openai-autoevals-1742245432)
Force-pushed 02b17e4 to beb5345, then beb5345 to 24b75dc.
luckily discovered this while trying to run an Azure example
js/llm.test.ts (outdated)
```typescript
} from "./llm.fixtures";

beforeAll(() => {
  init(
```
note the use of the new `init()`
```typescript
}) => Promise<void>,
) => {
  const createSpy = jest.fn();
  const wrapperMock = (client: any) => {
```
avoids having to add braintrust as a dependency
is there a reason this is worth avoiding? seems like we would want to pull it in at some point as tests cover new surface area introduced in the sdk?
looks like you added it for python too so for the sake of consistent access to sdk might it be worth adding to the js dependencies as well?
```typescript
// (new file, 17 lines)
import { setupServer } from "msw/node";
```
TODO: go back to our previous tests and adopt msw in them (separate PR)
```python
embed=openai.embeddings.create,
moderation=openai.moderations.create,
client = openai.OpenAI()  # Configure with your settings
llm = LLMClient(openai=client)  # Methods will be auto-configured
```
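The auto-configuration presumably resolves any missing methods from the client's v1 attribute paths, roughly like this. This is a sketch: `_auto_configure` is a made-up helper name, and the `SimpleNamespace` fake stands in for a real `openai.OpenAI()` so the example runs standalone.

```python
from types import SimpleNamespace


def _auto_configure(llm):
    """Fill in any methods the user did not supply from the openai client."""
    if llm.complete is None:
        llm.complete = llm.openai.chat.completions.create
    if llm.embed is None:
        llm.embed = llm.openai.embeddings.create
    if llm.moderation is None:
        llm.moderation = llm.openai.moderations.create
    return llm


# A fake client with the same attribute layout as the v1 openai SDK.
fake = SimpleNamespace(
    chat=SimpleNamespace(completions=SimpleNamespace(create=lambda **kw: "chat")),
    embeddings=SimpleNamespace(create=lambda **kw: "embed"),
    moderations=SimpleNamespace(create=lambda **kw: "moderation"),
)

llm = _auto_configure(
    SimpleNamespace(openai=fake, complete=None, embed=None, moderation=None)
)
assert llm.complete() == "chat" and llm.embed() == "embed"
```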
The interface is similar but not the same as the one in JS. For the exact same interface we would need to support passing exact methods in JS, but I'm betting that'll be an unlikely use case. We should support this, though:

```python
init(openai.OpenAI())
```

```typescript
const [expectedEntities, contextEntities] = responses.map(mustParseArgs);

const score = await ListContains({
  ...extractOpenAIArgs(args),
```
wasn't needed (as far as I can tell)
```typescript
// (file removed; -145 lines)
import { ChatCompletionMessageParam } from "openai/resources";
```
just got moved to js/llm.test.ts (not 100% sure why it needed to be here)
setup.py (outdated)
```python
extras_require = {
    "dev": [
        "black",
        "braintrust",
```
TODO: check whether this is possible, or whether I need to work around it
choochootrain left a comment:
seems legit but will defer to experts for rubber stamp
For TS

New `client` argument to OpenAI classifiers as well as the global `init()` method. Similar to what we have in Python (but maybe a little simpler).

For Python
Previously:

Would break because `prepare_openai` would wrap `client.openai` but leave behind `complete` as the unwrapped complete.

Now if you pass customizations we will avoid automatic wrapping to avoid this problem. Users can, however, wrap like so and everything will be wrapped correctly:
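Presumably the correct ordering is wrap first, then attach, so the attached method references go through the wrapper. A sketch with toy stand-ins (`wrap_openai` and `NamedWrapper` here are not the SDK's real implementations):

```python
class NamedWrapper:
    """Toy wrapper that tags results so we can see it was used."""

    def __init__(self, inner):
        self._inner = inner

    def complete(self, prompt):
        return "traced:" + self._inner.complete(prompt)


def wrap_openai(client):
    """Stand-in: wrap once, never double-wrap."""
    return client if isinstance(client, NamedWrapper) else NamedWrapper(client)


class RawClient:
    """Toy stand-in for an unwrapped openai client."""

    def complete(self, prompt):
        return prompt.upper()


wrapped = wrap_openai(RawClient())  # wrap FIRST...
complete = wrapped.complete         # ...then attach: this reference is wrapped
assert complete("hi") == "traced:HI"
```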
Further, because we've tightened things up, we can now automatically configure the client, reducing the code supplied by the user to something like:
Before they had to:
Also, discovered that `client.openai` should have been the openai instance. The only time this is not the case is for v0 support of the OpenAI library.
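Side by side, the reduction looks roughly like this. Hedged sketch: the `LLMClient` below is a minimal stand-in mirroring the auto-configuring constructor, and `fake_openai` replaces a real `openai.OpenAI()` so the example runs standalone.

```python
from types import SimpleNamespace


class LLMClient:
    """Minimal stand-in mirroring the auto-configuring constructor."""

    def __init__(self, openai, complete=None, embed=None, moderation=None):
        self.openai = openai
        self.complete = complete or openai.chat.completions.create
        self.embed = embed or openai.embeddings.create
        self.moderation = moderation or openai.moderations.create


fake_openai = SimpleNamespace(
    chat=SimpleNamespace(completions=SimpleNamespace(create=lambda **kw: "chat")),
    embeddings=SimpleNamespace(create=lambda **kw: "embed"),
    moderations=SimpleNamespace(create=lambda **kw: "moderation"),
)

# Before: every method supplied explicitly.
before = LLMClient(
    openai=fake_openai,
    complete=fake_openai.chat.completions.create,
    embed=fake_openai.embeddings.create,
    moderation=fake_openai.moderations.create,
)

# Now: just the client; the methods are auto-configured.
after = LLMClient(openai=fake_openai)

assert before.complete() == after.complete() == "chat"
```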