
Conversation

Contributor

@jsonbailey jsonbailey commented Oct 3, 2025

Note

Switches chat to a concrete TrackedChat composed with an AIProvider, adds generic trackMetricsOf and LDAIMetrics, wires LDLogger, and updates factory/exports/tests.

  • Chat architecture:
    • Replace BaseTrackedChat/ProviderTrackedChat with concrete TrackedChat and new ChatResponse in api/chat/types.
    • Introduce AIProvider base class and update TrackedChatFactory to create providers (LangChain) and return TrackedChat.
    • Update api/chat/index and api/index exports accordingly.
  • Metrics/Tracking:
    • Add LDAIMetrics and LDAIConfigTracker.trackMetricsOf for generic success/token tracking.
    • Integrate metrics extraction in TrackedChat.invoke (uses trackMetricsOf).
  • Client/SDK wiring:
    • LDClientMin now exposes optional logger.
    • LDAIClientImpl.initChat returns TrackedChat | undefined, passes LDLogger to factory, and logs when disabled.
  • Tests:
    • Add tests for trackMetricsOf and new TrackedChat behavior (message appends, retrieval, invoke integration).

Written by Cursor Bugbot for commit 1ca87fd.
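
For orientation, here is a hypothetical usage sketch of the surface described above. The entry points (init, initAi) follow the existing SDK packages, while the initChat arguments, context shape, and invoke() input are assumptions rather than confirmed API:

```typescript
import { init } from '@launchdarkly/node-server-sdk';
import { initAi } from '@launchdarkly/server-sdk-ai';

async function main() {
  // Assumed setup: initAi() wrapping a server-side LDClient is the existing
  // AI SDK entry point; initChat()/TrackedChat.invoke() are the new surfaces
  // from this PR, and their exact signatures are assumptions here.
  const ldClient = init('sdk-key');
  await ldClient.waitForInitialization({ timeout: 10 });
  const aiClient = initAi(ldClient);

  const context = { kind: 'user', key: 'user-123' };
  const chat = await aiClient.initChat('my-chat-config', context);

  if (!chat) {
    // Per this PR, initChat returns undefined when the AI Config is disabled
    // (logged through the LDLogger now passed to the factory).
    return;
  }

  // invoke() runs the composed AIProvider (e.g. LangChain) and records
  // success/token metrics via trackMetricsOf under the hood.
  const response = await chat.invoke('Hello!');
  console.log(response);
}

main();
```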

@jsonbailey jsonbailey changed the title from "Move to concrete TrackedChat with AIProvider via composition" to "feat: Move to concrete TrackedChat with AIProvider via composition" Oct 3, 2025
@jsonbailey jsonbailey marked this pull request as ready for review October 3, 2025 15:47
@jsonbailey jsonbailey requested a review from a team as a code owner October 3, 2025 15:47

return result;
} catch (err) {
this.trackError();
Contributor

I don't think we want to track an error in this case? The way I interpret it, we track errors coming from the AI model, not errors caused by our system. If an error is thrown here, it's likely due to an issue outside of the AI model's successes/errors, and it may muddy what the error count is meant to represent.

Contributor

Another example: what if trackTokens throws an error? At that point we've already tracked a success/error, and then we're tracking another error on top of it.

Contributor Author

I could have the try/catch only wrap the trackDurationOf method. The SDK methods should not throw exceptions, so the user-provided function call (the AI model) would be the only thing capable of throwing inside the try/catch.
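
A minimal sketch of that narrower scope, assuming a tracker with the trackDurationOf/trackSuccess/trackError/trackTokens methods discussed in this thread; the function name, metrics shape, and surrounding types are illustrative, not the actual implementation:

```typescript
interface TokenUsage {
  total: number;
  input: number;
  output: number;
}

interface Tracker {
  trackDurationOf<T>(fn: () => Promise<T>): Promise<T>;
  trackSuccess(): void;
  trackError(): void;
  trackTokens(usage: TokenUsage): void;
}

async function invokeWithMetrics<TRes>(
  tracker: Tracker,
  callModel: () => Promise<TRes>,
  extractMetrics: (result: TRes) => { success: boolean; usage?: TokenUsage },
): Promise<TRes> {
  let result: TRes;
  try {
    // Only the user-provided model call can throw inside this block.
    result = await tracker.trackDurationOf(callModel);
  } catch (err) {
    tracker.trackError();
    throw err;
  }

  // SDK bookkeeping stays outside the try/catch, so a failure in
  // extractMetrics or trackTokens is not counted as an AI model error.
  const metrics = extractMetrics(result);
  if (metrics.success) {
    tracker.trackSuccess();
  } else {
    tracker.trackError();
  }
  if (metrics.usage) {
    tracker.trackTokens(metrics.usage);
  }
  return result;
}
```

Whether trackError() belongs in that catch at all is the open question in the follow-up comments.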

Contributor

I don't think we should have trackError() in the fallthrough for trackDurationOf either.

This might be my misunderstanding, but when I think of these successes/errors, I still don't think of them as successes/errors attributed to the LD SDK (including any errors that come from trackDurationOf()), but rather as explicit errors from the metrics themselves. In a way, we'd be overloading what an "error" represents in our metrics. This comes off as more of an internal error (a "failure") than an AI error.

I see it's used similarly in trackOpenAIMetrics though, so maybe there's precedent here. That said, I see we're now rethrowing the err if trackDurationOf() throws. I think if we rethrow the error, we don't need the trackError() call.

We can sync on this if needed, sorry if this seems nitpicky!

Comment on lines 43 to 44
// Try LangChain as fallback
return this._createLangChainProvider(aiConfig);
Contributor

Is falling back to LangChain the long-term vision for this? I think it makes more sense to require specifying LangChain if that's the user's intent, and otherwise fall back to an error/undefined with "no valid provider specified" or similar.

Contributor Author

@jsonbailey jsonbailey Oct 6, 2025

The approach is to target specific providers first, if they are installed, and then to try the general providers. If a user only installs the LangChain provider, that will be the only one used. Using the word "fallback" wasn't a good choice here and I have adjusted it. The goal is a hands-off approach where you can just init an AI Config and it will auto-determine the correct provider.

Andrew and I talked about having a preferred provider that you could specify in the init, but I held off for the initial version so we can get some feedback.
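
A rough sketch of that detection order under stated assumptions: the helper names (_tryCreateSpecificProvider, _tryCreateLangChainProvider), the provider type, and the return-undefined behavior are illustrative and may not match the actual TrackedChatFactory:

```typescript
type AIProviderLike = { invokeModel(prompt: string): Promise<unknown> };

class ProviderDetectionSketch {
  constructor(private _logger?: { warn(msg: string): void }) {}

  async createProvider(aiConfig: unknown): Promise<AIProviderLike | undefined> {
    // Provider-specific integrations take priority when their packages are installed.
    const specific = await this._tryCreateSpecificProvider(aiConfig);
    if (specific) {
      return specific;
    }

    // Then try the general-purpose LangChain provider.
    const langchain = await this._tryCreateLangChainProvider(aiConfig);
    if (langchain) {
      return langchain;
    }

    this._logger?.warn('No supported AI provider package is installed for this AI Config.');
    return undefined;
  }

  private async _tryCreateSpecificProvider(_aiConfig: unknown): Promise<AIProviderLike | undefined> {
    // Placeholder: a real factory would dynamically import a provider-specific
    // package (e.g. an OpenAI or Bedrock integration) and return undefined if absent.
    return undefined;
  }

  private async _tryCreateLangChainProvider(_aiConfig: unknown): Promise<AIProviderLike | undefined> {
    // Placeholder: a real factory would dynamically import the LangChain
    // provider package and return undefined if it is not installed.
    return undefined;
  }
}
```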

@abarker-launchdarkly
Contributor

@jsonbailey Is there a plan to maintain backwards compatibility for anyone using AI Configs according to the currently published documentation?


@jsonbailey
Contributor Author

This should not cause any breaking changes for current AI Config users. It adds a new initialization pathway but does not remove any existing functionality. Eventually we plan to deprecate the OpenAI / Bedrock / Vercel methods in the tracker, but they are not removed at this time. I want to get some feedback on the new work before making that decision.
