Improving resilience to inaccurate code generation #42
Comments
Agree there is room for improvement in retrying/feeding error messages back into the model. Inviting the community to contribute PRs – it’s out of scope for what I wanted to build personally. Maybe GPT-5 is good enough not to hallucinate variable names? 😄
Although OpenAI has now made Code Interpreter available to all Plus users, the project is still very cool. I have a question: is it as powerful as the official plugin?
What kind of work would need to be done to run this on, say, a local LLM with ooba (which has an OpenAI-compatible API)?
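For illustration only, a minimal sketch of the kind of change that question implies, assuming the local server exposes a standard OpenAI-compatible chat endpoint; the URL, port, and model name below are placeholders, not values taken from gpt-code-ui:

```python
from openai import OpenAI

# Point the standard OpenAI client at a local OpenAI-compatible server
# (e.g. text-generation-webui / ooba). The base_url and model name are
# assumptions for illustration, not gpt-code-ui configuration.
client = OpenAI(
    base_url="http://localhost:5000/v1",
    api_key="not-needed-for-local",  # most local servers ignore the key
)

response = client.chat.completions.create(
    model="local-model",  # whatever model the local server is serving
    messages=[{"role": "user", "content": "Write Python to load data.csv with pandas."}],
)
print(response.choices[0].message.content)
```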
Working on this one.
@dasmy I have two ideas in mind:
🔖 Feature description
First of all, really cool project! I found gpt-code-ui when looking for an alternative to Code Interpreter/Noteable that I could run locally.
I have noticed that gpt-code-ui is not quite as resilient to the mistakes it makes when generating code, especially compared to something like ChatGPT + the Noteable plugin. For example, if gpt-code-ui makes a mistaken assumption about the name of a dataframe row in the code it generates, execution will fail and it will give up, whereas in the Noteable scenario ChatGPT is more likely to proactively inspect the results and attempt a fix.
✔️ Solution
Instead of just outputting the errors associated with a failed execution, proactively inspect the error and attempt a fix/re-run.
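A rough sketch of what such a retry loop could look like, not the project's implementation: `generate_code`, the message format, and the retry cap are all hypothetical placeholders. The point is only that the traceback from a failed run is fed back to the model before re-running.

```python
import traceback

MAX_RETRIES = 3  # hypothetical cap on automatic fix attempts

def run_with_retries(generate_code, user_request, max_retries=MAX_RETRIES):
    """Generate code for `user_request`, execute it, and on failure feed the
    traceback back to the model and ask for a corrected version.

    `generate_code(messages)` is a stand-in for whatever call produces code
    from a list of chat messages; it is not part of gpt-code-ui's API.
    """
    messages = [{"role": "user", "content": user_request}]
    for _ in range(1 + max_retries):
        code = generate_code(messages)
        try:
            namespace = {}
            exec(code, namespace)  # run in an isolated namespace
            return namespace       # success: hand the results back
        except Exception:
            tb = traceback.format_exc()
            # Feed the failing code and its traceback back to the model so the
            # next attempt can correct wrong assumptions (e.g. a hallucinated
            # dataframe column name).
            messages.append({"role": "assistant", "content": code})
            messages.append({
                "role": "user",
                "content": f"The code failed with this traceback:\n{tb}\n"
                           "Please inspect the error and return a corrected "
                           "version of the code only.",
            })
    raise RuntimeError(f"Still failing after {max_retries} retries")
```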
❓ Alternatives
No response
📝 Additional Context
No response