
Uniform improved client= (direct & global) option for Python and JS#121

Merged
ibolmo merged 41 commits into main from bra-2135--azure-openai-autoevals on Mar 17, 2025
Conversation

ibolmo (Collaborator) commented Mar 4, 2025

For TS

A new client argument to OpenAIClassifiers, as well as to the global init() method. Similar to what we have in Python (but perhaps a little simpler).

For Python

Previously:

openai_obj = openai.OpenAI(...)
client = LLMClient(
  openai=openai_obj,
  complete=openai_obj.chat.completions.create,
  ...
)

This would break because prepare_openai would wrap client.openai but leave client.complete behind as the unwrapped method.

Now, if you pass customizations, we skip automatic wrapping to sidestep this problem. Users can, however, wrap explicitly like so, and everything will be wrapped correctly:

openai_obj = wrap_openai(openai.OpenAI())
client = LLMClient(
  openai=openai_obj,
  complete=openai_obj.chat.completions.create,
)

Further, because we've tightened things up, we can now configure the client automatically, reducing the user-supplied code to something like:

init(openai.OpenAI(...))

Before, they had to write:

openai_obj = openai.OpenAI(...)

client = LLMClient(
  openai=openai_obj,
  complete=openai_obj.chat.completions.create,
  ...
)

init(client)
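A minimal sketch of how such an auto-configuring init() could work. All names here (LLMClient, init, the _GLOBAL_CLIENT variable, the fallback logic) are illustrative assumptions, not the real autoevals implementation:

```python
# Hypothetical sketch of the auto-configuring init() described above;
# names (LLMClient, init, _GLOBAL_CLIENT) are illustrative, not the real API.

_GLOBAL_CLIENT = None


class LLMClient:
    def __init__(self, openai, complete=None, embed=None, moderation=None):
        self.openai = openai
        # When no custom methods are supplied, derive them from the client
        # so openai and the attached methods always belong together.
        self.complete = complete or openai.chat.completions.create
        self.embed = embed or openai.embeddings.create
        self.moderation = moderation or openai.moderations.create


def init(client_or_openai):
    """Accept either a pre-built LLMClient or a bare OpenAI instance."""
    global _GLOBAL_CLIENT
    if isinstance(client_or_openai, LLMClient):
        _GLOBAL_CLIENT = client_or_openai
    else:
        _GLOBAL_CLIENT = LLMClient(openai=client_or_openai)
    return _GLOBAL_CLIENT
```

The point of the design is that `init(openai.OpenAI(...))` and the long-hand LLMClient construction converge on the same state.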

Also, I discovered that client.openai should have been the openai instance all along. The only time this is not the case is for v0 support of the OpenAI library.

ibolmo added 3 commits March 3, 2025 23:58
Discovered that the client.openai should have been the openai instance.

Also, the wrapping of the client should not be automatic, since the
attached methods may not be the right ones: if client.openai is
unwrapped and we wrap it in prepare_openai, the methods attached as
client.complete would still be the unwrapped ones vs. the new wrapped one.
Simplifies the interface and the code.

Need to update the SDK to expose an `unwrap()` method to avoid the hacks.
@ibolmo ibolmo self-assigned this Mar 4, 2025
has_customization = self.complete is not None or self.embed is not None or self.moderation is not None

# avoid wrapping if we have custom methods (the user may intend not to wrap)
if not has_customization and not isinstance(self.openai, NamedWrapper):
ibolmo (Collaborator, Author):

This caught me off guard while I was reproducing the Azure problems. Now we avoid mixing mismatched objects like:

openai_obj = openai.OpenAI(...)
client = LLMClient(
  openai=openai_obj,
  complete=openai_obj.chat.completions.create,
  ...
)

Previously the openai_obj would be wrapped and the rest of the code would assume that the methods would be wrapped too.
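The guard in the diff above can be sketched in isolation. NamedWrapper stands in for the real wrapper type, and everything beyond the two quoted lines is an assumption for illustration:

```python
# Illustrative sketch of the "skip wrapping when customized" guard above.
# Names beyond the quoted diff lines are assumptions, not the real code.

class NamedWrapper:
    """Stand-in for the wrapper used to detect already-wrapped clients."""

    def __init__(self, openai):
        self.openai = openai


class LLMClient:
    def __init__(self, openai, complete=None, embed=None, moderation=None):
        self.openai = openai
        self.complete = complete
        self.embed = embed
        self.moderation = moderation

    def prepare(self, wrap):
        has_customization = (
            self.complete is not None
            or self.embed is not None
            or self.moderation is not None
        )
        # Avoid wrapping if we have custom methods: wrapping only self.openai
        # would leave self.complete pointing at the unwrapped methods.
        if not has_customization and not isinstance(self.openai, NamedWrapper):
            self.openai = wrap(self.openai)
        return self
```

With custom methods attached, the client is left untouched; a bare client still gets wrapped.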

github-actions bot commented Mar 4, 2025

Braintrust eval report

Autoevals (bra-2135--azure-openai-autoevals-1742245432)

| Score             | Average           | Improvements | Regressions |
| ----------------- | ----------------- | ------------ | ----------- |
| NumericDiff       | 75% (+0pp)        | -            | -           |
| Start             | 1742245432.23s    | -            | -           |
| End               | 1742245433.78s    | -            | -           |
| Duration          | 1.51s (-0.08s)    | 100 🟢       | -           |
| Prompt_tokens     | 279.25tok (+0tok) | -            | -           |
| Completion_tokens | 19.17tok (+0tok)  | -            | -           |
| Total_tokens      | 298.42tok (+0tok) | -            | -           |

@ibolmo ibolmo force-pushed the bra-2135--azure-openai-autoevals branch from 02b17e4 to beb5345 Compare March 4, 2025 16:44
@ibolmo ibolmo force-pushed the bra-2135--azure-openai-autoevals branch from beb5345 to 24b75dc Compare March 4, 2025 16:44
luckily discovered this while trying to run an Azure example
@ibolmo ibolmo marked this pull request as ready for review March 4, 2025 17:34
js/llm.test.ts Outdated
} from "./llm.fixtures";

beforeAll(() => {
init(
ibolmo (Collaborator, Author):

note the use of the new init()

}) => Promise<void>,
) => {
const createSpy = jest.fn();
const wrapperMock = (client: any) => {
ibolmo (Collaborator, Author):

avoids having to add braintrust as a dependency

Contributor:

is there a reason this is worth avoiding? seems like we would want to pull it in at some point as tests cover new surface area introduced in the sdk?

Contributor:

looks like you added it for python too so for the sake of consistent access to sdk might it be worth adding to the js dependencies as well?

@@ -0,0 +1,17 @@
import { setupServer } from "msw/node";
ibolmo (Collaborator, Author):

TODO: go back to our previous tests and adopt msw in them (separate PR)

embed=openai.embeddings.create,
moderation=openai.moderations.create,
client = openai.OpenAI() # Configure with your settings
llm = LLMClient(openai=client) # Methods will be auto-configured
ibolmo (Collaborator, Author):

the interface is similar but not the same as the one in JS.

for it to be exactly the same we would need to support passing the exact methods in JS, but I'm betting that'll be an unlikely use case.

We should support this, though:

init(openai.OpenAI())

const [expectedEntities, contextEntities] = responses.map(mustParseArgs);

const score = await ListContains({
...extractOpenAIArgs(args),
ibolmo (Collaborator, Author):

wasn't needed (as far as I can tell)

@@ -1,145 +0,0 @@
import { ChatCompletionMessageParam } from "openai/resources";
ibolmo (Collaborator, Author):

just got moved to js/llm.test.ts (not 100% sure why it needed to be here)

setup.py Outdated
extras_require = {
"dev": [
"black",
"braintrust",
ibolmo (Collaborator, Author):

TODO: determine whether this is possible, or whether I need to work around it

@ibolmo ibolmo changed the title Improved LLMCLient and added test coverage for prepare_openai Uniform improved client= (direct & global) option for Python and JS Mar 5, 2025
choochootrain (Contributor) left a comment:

seems legit but will defer to experts for rubber stamp


@ibolmo ibolmo merged commit 01230f9 into main Mar 17, 2025
8 checks passed
github-actions bot commented Mar 17, 2025

Braintrust eval report

Autoevals (main-1742246885)

| Score             | Average           | Improvements | Regressions |
| ----------------- | ----------------- | ------------ | ----------- |
| NumericDiff       | 75% (+0pp)        | -            | -           |
| Start             | 1742246884.95s    | -            | -           |
| End               | 1742246886.48s    | -            | -           |
| Duration          | 1.49s (-0.02s)    | 78 🟢        | 22 🔴       |
| Prompt_tokens     | 279.25tok (+0tok) | -            | -           |
| Completion_tokens | 19.17tok (+0tok)  | -            | -           |
| Total_tokens      | 298.42tok (+0tok) | -            | -           |
