What's Changed
- chore: use absolute link for the RamaLama logo by @benoitf in #781
- Reuse Ollama cached image when available by @kush-gupt in #782
- Add env var RAMALAMA_GPU_DEVICE to allow for explicit declaration of the GPU device to use by @cgruver in #773
- Change RAMALAMA_GPU_DEVICE to RAMALAMA_DEVICE for AI accelerator device override by @cgruver in #786
- Add Security information to README.md by @rhatdan in #787
- Fix exiting llama-serve when the user hits ^C by @rhatdan in #785
- Check if files exist before sorting them into a list by @kush-gupt in #784
- Add ramalama run --keepalive option by @rhatdan in #789
- Stash output from container_manager by @rhatdan in #790
- Install llama.cpp for macOS and nocontainer tests by @rhatdan in #792
- Ensure _engine is set to None or has a value by @ericcurtin in #793
- Only run dnf commands on platforms that have dnf by @ericcurtin in #794
- Add ramalama rag command by @rhatdan in #501
- Attempt to use build_llama_and_whisper.sh by @rhatdan in #795
- Change --network-mode to --network by @ericcurtin in #800
- Add some more gfx values to the default list by @ericcurtin in #806
- Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.5-1739449058 by @renovate in #808
- Prepare containers to run with ai-lab-recipes by @rhatdan in #803
- If ngl is not specified by @ericcurtin in #802
- feat: add ramalama labels about the execution on top of container by @benoitf in #810
- Add run and serve arguments for --device and --privileged by @cgruver in #809
- chore: rewrite readarray function to make it portable by @benoitf in #815
- chore: replace the RAMALAMA label with ai.ramalama by @benoitf in #814
- Upgrade from 6.3.1 to 6.3.2 by @ericcurtin in #816
- Removed error wrapping in urlopen by @engelmi in #818
- Fix a bug where a function was returning -1 by @ericcurtin in #817
- Align runtime arguments with run, serve, bench, and perplexity by @cgruver in #820
- README: fix inspect command description by @kush-gupt in #826
- Pin dev dependencies to major version and improve formatting + linting by @engelmi in #824
- README: Fix typo by @bupd in #827
- Switch apt-get to apt by @ericcurtin in #832
- Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.5-1739751568 by @renovate in #834
- Add entrypoint container images by @rhatdan in #819
- HuggingFace Cache Implementation by @kush-gupt in #833
- Make serve expose the network by default by @ericcurtin in #830
- Fix up man page help verification by @rhatdan in #835
- Fix handling of --privileged flag by @rhatdan in #821
- chore: fix links of llama.cpp repository by @benoitf in #841
- Unify CLI options (verbosity, version) by @mkesper in #685
- Add system tests to pull from the Hugging Face cache by @kush-gupt in #846
- Just one add_argument call for --dryrun/--dry-run by @ericcurtin in #847
- Fix ramalama info to display NVIDIA and AMD GPU information by @rhatdan in #848
- Remove LICENSE header from gpu_detector.py by @ericcurtin in #850
- Allow modification of the pull policy by @rhatdan in #843
- Include instructions for installing on Fedora 42+ by @stefwalter in #849
- Bump to 0.6.1 by @rhatdan in #851
New Contributors
- @benoitf made their first contribution in #781
- @bupd made their first contribution in #827
- @mkesper made their first contribution in #685
- @stefwalter made their first contribution in #849
Full Changelog: v0.6.0...v0.6.1