feat: Move to concrete TrackedChat with AIProvider via composition #940
base: jb/sdk-1454/ai-sdk-tracked-chat
Conversation
```ts
    return result;
  } catch (err) {
    this.trackError();
```
I don't think we want to track an error in this case? The way I interpret it, we track errors coming from the AI model, not necessarily errors caused by our system. If there's an error thrown here it's likely due to an issue outside of the AI model errors/successes, and it may crowd the error count's intention.
Another example: what if `trackTokens` throws an error? That means we've already tracked success/error, and then we're tracking another error on top of it.
I could have the try/catch wrap only the trackDurationOf method. The SDK methods should not throw exceptions, so the only thing capable of throwing inside the try/catch should be the user-provided function call (the AI model).
I don't think we should have `trackError()` in the fallthrough for `trackDurationOf` either.
This might be my misunderstanding, but when I think of these successes/errors, I still don't think of them as successes/errors attributed to the LD SDK (including any errors that come from `trackDurationOf()`), but rather as explicit errors from the metrics themselves. In some way, we may be overloading what an "error" represents in our metrics: this comes off as an internal error (a "failure") rather than an AI error.
I see it's used similarly in `trackOpenAIMetrics`, though, so maybe there's precedent here. That said, I see we're now throwing the `err` here if `trackDurationOf()` throws. I think if we throw the error, we don't need the `trackError()` call.
We can sync on this if needed; sorry if this seems nitpicky!
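A minimal sketch of the outcome this thread converges on, with hypothetical names (`Tracker`, `invokeModel`) modeled loosely on the snippet above rather than the SDK's real API: skip the try/catch entirely, let a model error propagate to the caller, and record success/tokens only on the happy path.

```typescript
// Hypothetical sketch of the narrowed error handling discussed above.
// The method names (trackDurationOf, trackSuccess, trackTokens) mirror
// the tracker methods mentioned in the thread; the shapes are assumptions.
interface Tracker {
  trackDurationOf<T>(fn: () => Promise<T>): Promise<T>;
  trackSuccess(): void;
  trackTokens(count: number): void;
}

async function invokeModel<T extends { tokens: number }>(
  tracker: Tracker,
  modelCall: () => Promise<T>,
): Promise<T> {
  // No try/catch and no trackError(): if the model call throws, the
  // error propagates to the caller instead of being double-counted
  // as an AI-model error in the metrics.
  const result = await tracker.trackDurationOf(modelCall);
  tracker.trackSuccess();
  tracker.trackTokens(result.tokens);
  return result;
}
```

This keeps "error" in the metrics meaning an AI-model error only; internal failures surface as thrown exceptions instead.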
```ts
// Try LangChain as fallback
return this._createLangChainProvider(aiConfig);
```
Is falling back to LangChain the long-term vision for this? I think it makes more sense to require specifying LangChain if that's the user's intent, and otherwise fall back to error/undefined with "no valid provider specified" or similar.
The approach was to target specific providers first, if they are installed, and then try the general providers. If a user only installs the LangChain provider, that will be the only one used. Using the word "fallback" wasn't a good choice here, and I have adjusted it. The goal is a hands-off approach where you can just init an AI Config and it will automatically determine the correct provider.
Andrew and I talked about having a preferred provider that you could specify in the init, but I held off for the initial version so we can get some feedback.
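That ordering could be sketched as follows; the registry shape and names (`createProvider`, `ProviderFactory`) are illustrative assumptions, not the SDK's actual API.

```typescript
// Hypothetical provider-selection sketch based on the ordering described
// above: specific providers first, then general ones, then undefined.
type ProviderFactory = (providerName: string) => object | undefined;

function createProvider(
  providerName: string,
  specific: Record<string, ProviderFactory>,
  general: ProviderFactory[],
): object | undefined {
  // 1. Prefer a provider-specific integration if one is installed.
  const exact = specific[providerName]?.(providerName);
  if (exact) return exact;

  // 2. Otherwise try general-purpose providers (e.g. LangChain), which
  //    can serve many model vendors through a single integration.
  for (const factory of general) {
    const provider = factory(providerName);
    if (provider) return provider;
  }

  // 3. Nothing matched: return undefined so initChat can log and bail
  //    rather than silently picking a provider the user didn't intend.
  return undefined;
}
```

Under this shape, a user who installs only the LangChain package ends up with an empty `specific` registry, so LangChain is the only candidate, matching the behavior described above.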
@jsonbailey Is there a plan to maintain backwards compatibility for anyone using AI Configs via the currently published documentation?
This should not cause any breaking changes for current AI Config users. It adds a new initialization pathway but does not remove any existing functionality. Eventually we plan to deprecate the openai/bedrock/vercel methods in the tracker, but they are not removed at this time. I want to get some feedback on the new work before making that decision.
Note
Switches chat to a concrete TrackedChat composed with an AIProvider, adds generic trackMetricsOf and LDAIMetrics, wires LDLogger, and updates factory/exports/tests.

- Replaces `BaseTrackedChat`/`ProviderTrackedChat` with concrete `TrackedChat` and new `ChatResponse` in `api/chat/types`.
- Adds `AIProvider` base class and updates `TrackedChatFactory` to create providers (LangChain) and return `TrackedChat`.
- Updates `api/chat/index` and `api/index` exports accordingly.
- Adds `LDAIMetrics` and `LDAIConfigTracker.trackMetricsOf` for generic success/token tracking; `TrackedChat.invoke` uses `trackMetricsOf`.
- `LDClientMin` now exposes an optional `logger`.
- `LDAIClientImpl.initChat` returns `TrackedChat | undefined`, passes `LDLogger` to the factory, and logs when disabled.
- Adds tests covering `trackMetricsOf` and the new `TrackedChat` behavior (message appends, retrieval, invoke integration).

Written by Cursor Bugbot for commit 1ca87fd. This will update automatically on new commits.
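For context, a generic success/token hook like the `trackMetricsOf` named above might look roughly like this; the `LDAIMetrics` fields and the `MetricsTracker` class are assumptions beyond what the summary states.

```typescript
// Hypothetical sketch of a generic metrics hook: run an operation, then
// map its result to an LDAIMetrics value so success and token usage are
// recorded without provider-specific tracking code.
interface TokenUsage { total: number; input: number; output: number; }
interface LDAIMetrics { success: boolean; usage?: TokenUsage; }

class MetricsTracker {
  successes = 0;
  errors = 0;
  tokens = 0;

  async trackMetricsOf<T>(
    metricsOf: (result: T) => LDAIMetrics,
    operation: () => Promise<T>,
  ): Promise<T> {
    const result = await operation(); // errors propagate to the caller
    const metrics = metricsOf(result);
    if (metrics.success) {
      this.successes += 1;
    } else {
      this.errors += 1;
    }
    if (metrics.usage) {
      this.tokens += metrics.usage.total;
    }
    return result;
  }
}
```

The caller supplies the `metricsOf` mapper, so each provider integration (or a chat implementation like `TrackedChat.invoke`) can decide how to extract success and usage from its own response shape.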