tweak: add chatMaxRetries to config #2116
base: dev
Conversation
Can you put this flag under
Changed. Here's a log from a session where I changed my config.
And here's a log from a session where I used the default config.
Why has this become stale? We cannot use OpenAI models, and we need a solution.
Seems like the team chose to just increase maxRetries and not expose the config to users. Let me know if you want me to update this PR with the conflict resolution; if not, you can close it.
I'd be interested nonetheless, because a part of me is curious to see how things would go if I left the agent overnight. The caveat is, if I leave it overnight and it gets rate limited, or the servers are overloaded, etc., I wouldn't want it to stop; I'd rather have it keep trying as long as it can. So I could set the limit to something like 999 to never stop trying.
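For the "never stop trying" behavior described above, the config change would look roughly like this. (This is a sketch: the key name comes from the PR title, but whether it sits at the top level or under a nested section, and where the config file lives, depends on the project.)

```json
{
  "chatMaxRetries": 999
}
```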
Ideally, exponential backoff would pair perfectly with this, but I'm not sure the dev team is interested in that for the time being.
Exponential backoff and Retry-After behavior are already implemented in ai.util.retry-with-exponential-backoff, and that is used in streamText. So this PR would only let users increase the number of retries attempted with exponential backoff.
Might be worth closing the exponential backoff issue then. I still think it'd be useful nonetheless.
Resolved the conflicts just in case.
Force-pushed from f16de3d to cc0d460
This PR tries to give users some power to work around AI_RetryError.
Some related issues this aims to address are:
This PR is also related to "feat: Add retry mechanism for Anthropic overloaded errors"; however, in #1027 the retry backoff is re-implemented, and I'm not sure that is the right approach.
Exponential backoff, and Retry-After behavior is already implemented at ai.util.retry-with-exponential-backoff and it is used in the streamText function here.
I could not find handles in the ai package where we could expose the backoff parameters (delay, maximumDelay, factor, etc.). If someone can point me to where I should update the documentation, I can do that.
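To make the backoff discussion concrete, here is a small sketch of how a retry-with-exponential-backoff helper typically computes its wait times. The parameter names and default values (initialDelayInMs, backoffFactor, maxDelayInMs) are illustrative assumptions, not the ai package's exact API; only the number of attempts (maxRetries) is what this PR would expose.

```typescript
// Sketch: the sequence of wait times an exponential-backoff retry loop
// would use. Each retry waits initialDelayInMs * backoffFactor^attempt,
// capped at maxDelayInMs. Defaults here are assumptions for illustration.
function backoffDelays(
  maxRetries: number,
  initialDelayInMs = 2000,
  backoffFactor = 2,
  maxDelayInMs = 60000,
): number[] {
  const delays: number[] = [];
  let delay = initialDelayInMs;
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    delays.push(Math.min(delay, maxDelayInMs));
    delay *= backoffFactor;
  }
  return delays;
}

// With these assumed defaults, raising maxRetries from 2 to 5 extends the
// waits from [2s, 4s] to [2s, 4s, 8s, 16s, 32s] before giving up.
```

This is why exposing only maxRetries is enough for the overnight use case: the per-attempt delays grow on their own, so a large retry count mostly just extends how long the agent keeps waiting at the capped delay.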
Here's a log from a session where I changed my config.chatMaxRetries to 5.