
On macOS this was returning an incorrect path #741

Merged
rhatdan merged 1 commit into main from dryrun-fix on Feb 5, 2025
Conversation

@ericcurtin ericcurtin (Collaborator) commented Feb 5, 2025

```console
$ ramalama --dryrun run granite-code
llama-run -c 2048 --temp 0.8 /path/to/model
```

At least try to show the correct path.

Summary by Sourcery

Bug Fixes:

  • Fix an issue where the model path was incorrect on macOS when running with --dryrun.
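The change itself is small. Here is a sketch of the deleted branch, reconstructed from the reviewer's guide and the snippet quoted later in this thread (surrounding code elided; the enclosing get_model_path method name comes from the guide's diagram):

```python
# ramalama/model.py -- branch removed by this PR:
if args.dryrun:
    return "/path/to/model"
```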

sourcery-ai bot (Contributor) commented Feb 5, 2025

Reviewer's Guide by Sourcery

The pull request fixes an issue where the model path was hardcoded to /path/to/model when the --dryrun flag was enabled. The change removes this hardcoded path, allowing the program to correctly resolve the model path.

Sequence diagram for model path resolution

```mermaid
sequenceDiagram
    participant Client
    participant ModelManager

    Client->>ModelManager: get_model_path(args)
    alt Before Fix - with --dryrun
        ModelManager-->>Client: Return '/path/to/model'
    else After Fix - with --dryrun
        ModelManager->>ModelManager: exists(args)
        alt Model exists
            ModelManager-->>Client: Return actual model path
        else Model doesn't exist
            ModelManager->>ModelManager: pull(args)
            ModelManager-->>Client: Return downloaded model path
        end
    end
```
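In code form, a minimal Python sketch of the post-fix flow the diagram describes (the enclosing class is assumed; exists and pull are the names used in the diagram and in the snippet quoted later in the thread):

```python
class Model:  # assumed enclosing class; the real one lives in ramalama/model.py
    def get_model_path(self, args):
        # Resolve the model from the local store first.
        model_path = self.exists(args)
        if model_path:
            return model_path

        # Before this PR, a --dryrun branch here returned the placeholder
        # "/path/to/model"; with it removed, the path is resolved the same
        # way as in a normal run (per the diagram, by pulling the model).
        return self.pull(args)
```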

File-Level Changes

| Change | Details | Files |
|---|---|---|
| Removed hardcoded model path when using the --dryrun flag. | Removed the conditional that returned /path/to/model when args.dryrun was true. | ramalama/model.py |

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!
  • Generate a plan of action for an issue: Comment @sourcery-ai plan on
    an issue to generate a plan of action for it.

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.


@sourcery-ai sourcery-ai bot (Contributor) left a comment


Hey @ericcurtin - I've reviewed your changes - here's some feedback:

Overall Comments:

  • Since you've removed the dryrun override, please verify that dry-run behavior is still correct on macOS, and double-check that tests cover this scenario (adding them if needed) so the new logic preserves the expected dry-run behavior.
Here's what I looked at during the review
  • 🟢 General issues: all looks good
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

@ericcurtin ericcurtin force-pushed the dryrun-fix branch 4 times, most recently from 1a68aa1 to 61cfff1 on February 5, 2025 at 14:43
@ericcurtin ericcurtin changed the title from "On native macOS this was returning an incorrect path" to "On macOS this was returning an incorrect path" on Feb 5, 2025
```console
$ ramalama --dryrun run granite-code
llama-run -c 2048 --temp 0.8 /path/to/model
```

At least try to show the correct path.

Signed-off-by: Eric Curtin <[email protected]>
@ericcurtin ericcurtin (Collaborator, Author)

Very tempting to turn off Fedora 42 packit build for a while...

```python
model_path = self.exists(args)
if model_path:
    return model_path

if args.dryrun:
    return "/path/to/model"
```
Collaborator

In this case, should we return something that indicates the model doesn't exist in the store already and we'd try to pull it?
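A purely hypothetical sketch of that suggestion, which was never merged (the marker string and the args.model attribute are invented for illustration):

```python
if args.dryrun and not model_path:
    # Hypothetical: signal that the model is not in the store yet and
    # would be pulled on a real run, without actually pulling it here.
    return f"<would-pull:{args.model}>"
```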

@ericcurtin ericcurtin (Collaborator, Author) commented Feb 5, 2025

I don't think we should pull for --dryrun. It's really just "please print an approximate podman run command"; --dryrun implies we don't do things like pull.

Member

Agreed, nice fix though.

You can actually see the real command by executing `ramalama --debug run MODEL`.

Member

I am not sure if we should print an error in the case that the model does not exist, but I could be swayed either way.

@rhatdan rhatdan (Member) commented Feb 5, 2025

LGTM

@rhatdan rhatdan merged commit 965bdf2 into main Feb 5, 2025
14 of 16 checks passed
@ericcurtin ericcurtin deleted the dryrun-fix branch February 5, 2025 20:03