Prepare containers to run with ai-lab-recipes #803
Conversation
Reviewer's Guide by Sourcery: This pull request adds scripts to run llama and whisper servers, configurable via environment variables, and modifies the Containerfiles to copy the scripts in. No diagrams were generated, as the changes look simple and do not need a visual representation.
File-Level Changes
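As a rough illustration of the pattern the PR introduces, here is a hypothetical sketch of how a server script might derive its command-line arguments from ai-lab-recipes-style environment variables. The variable names (`MODEL_PATH`, `MODEL_CHAT_FORMAT`, `PORT`) mirror the diff below, but the exact set and the final `exec` line are assumptions, not the PR's actual code:

```shell
#!/bin/bash
# Hypothetical sketch: derive llama-server arguments from environment
# variables in the style ai-lab-recipes uses. The variable names are
# illustrative; the PR's real scripts may differ.

build_llama_args() {
    local -a args=()
    # Only pass options whose environment variable is non-empty.
    [ -n "${MODEL_PATH}" ] && args+=(--model "${MODEL_PATH}")
    [ -n "${MODEL_CHAT_FORMAT}" ] && args+=(--chat_template "${MODEL_CHAT_FORMAT}")
    # Fall back to a default port when PORT is unset.
    args+=(--port "${PORT:-8080}")
    echo "${args[@]}"
}

# A real script would end with something like:
#   exec llama-server $(build_llama_args)
```

This keeps all configuration in the environment, so the same container image can be launched with different models or ports without rebuilding.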
Hey @rhatdan - I've reviewed your changes - here's some feedback:
Overall Comments:
- Consider using `COPY --from=builder` to copy scripts from a build stage instead of the host.
- It looks like the `RUN` command in `vulkan/Containerfile` should execute `vulkan/build_llama_and_whisper.sh`.
Here's what I looked at during the review
- 🟡 General issues: 1 issue found
- 🟢 Security: all looks good
- 🟢 Testing: all looks good
- 🟢 Complexity: all looks good
- 🟢 Documentation: all looks good
```shell
    CHAT_FORMAT="--chat_template ${MODEL_CHAT_FORMAT}"
fi

if [ ${MODEL_PATH} ]; then
```
suggestion: Quote the variable in the if condition.
To avoid potential word-splitting or globbing issues when MODEL_PATH is empty or contains spaces, consider using quotes, e.g., if [ -n "${MODEL_PATH}" ]; then.
```shell
- if [ ${MODEL_PATH} ]; then
+ if [ -n "${MODEL_PATH}" ]; then
```
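To see why the reviewer's suggestion matters, here is a small self-contained demonstration of the two failure modes of the unquoted test — an unset variable and a value containing a space:

```shell
#!/bin/bash
# With MODEL_PATH unset, the unquoted test [ ${MODEL_PATH} ] expands to
# [ ], which evaluates to false by accident rather than by design.
# The quoted -n form makes the intent explicit:
unset MODEL_PATH
if [ -n "${MODEL_PATH}" ]; then result1=set; else result1=empty; fi

# With a value containing a space, the unquoted form word-splits:
# [ ${MODEL_PATH} ] becomes [ /models/my model.gguf ] and fails with
# "[: too many arguments". The quoted form handles it correctly:
MODEL_PATH="/models/my model.gguf"
if [ -n "${MODEL_PATH}" ]; then result2=set; else result2=empty; fi

echo "$result1 $result2"   # prints: empty set
```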
These scripts look OK, but my question is: who will call these scripts? It does not seem that there is an entrypoint calling them automatically.

I considered doing the entrypoint, but I don't want to have separate scripts for llama-server and whisper-server, so the idea would be to change the quadlets used to launch the model server.
Add two new scripts llama-server.sh and whisper-server.sh which can handle environment variables from the ai-lab-recipes. Signed-off-by: Daniel J Walsh <[email protected]>
I would expect to have a default mode, since I'm using ramalama. Today it defaults to bash, so at the least I would expect a list of the scripts I could call. For example, if I provide an env variable like RAMALA_something="serve", it would run the script to serve a model.
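The dispatch idea suggested in this comment could be sketched as a single entrypoint that selects a script based on an environment variable. The variable name `RAMALAMA_MODE` and the script paths below are assumptions for illustration, not anything this PR implements:

```shell
#!/bin/bash
# Hypothetical entrypoint sketch: pick a server script from an environment
# variable, falling back to bash (the current default) when it is unset.
# RAMALAMA_MODE and the paths are illustrative assumptions.

select_script() {
    case "${RAMALAMA_MODE:-}" in
        serve)   echo "/usr/bin/llama-server.sh" ;;
        whisper) echo "/usr/bin/whisper-server.sh" ;;
        *)       echo "/bin/bash" ;;   # default: drop into a shell, as today
    esac
}

# A real entrypoint would end with: exec "$(select_script)"
echo "Would run: $(select_script)"   # prints: Would run: /bin/bash
```

This keeps a single entrypoint while remaining backward compatible, since an image started without the variable behaves exactly as it does today.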
Let's merge this and we can continue the conversation, since this is not a breaking change for RamaLama.
Add two new scripts llama-server.sh and whisper-server.sh which can handle environment variables from the ai-lab-recipes.
Summary by Sourcery
New Features:
- Add `llama-server.sh` and `whisper-server.sh` to support running the containers with ai-lab-recipes.