Merge pull request #657 from cgruver/compact-build
Update intel-gpu Containerfile to reduce the size of the builder image
ericcurtin authored Jan 30, 2025
2 parents 4851d60 + d654122 commit adc3bbe
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions container-images/intel-gpu/Containerfile
@@ -2,13 +2,13 @@ FROM quay.io/fedora/fedora:41 as builder

COPY intel-gpu/oneAPI.repo /etc/yum.repos.d/

-RUN dnf install -y lspci clinfo intel-opencl g++ cmake git libcurl-devel intel-oneapi-base-toolkit ; \
+RUN dnf install -y intel-opencl g++ cmake git tar libcurl-devel intel-oneapi-mkl-sycl-devel intel-oneapi-dnnl-devel intel-oneapi-compiler-dpcpp-cpp ; \
git clone https://github.com/ggerganov/llama.cpp.git -b b4523 ; \
cd llama.cpp ; \
mkdir -p build ; \
cd build ; \
source /opt/intel/oneapi/setvars.sh ; \
-cmake .. -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DLLAMA_CURL=ON -DGGML_CCACHE=OFF -DGGML_NATIVE=ON ; \
+cmake .. -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DLLAMA_CURL=ON -DGGML_CCACHE=OFF -DGGML_NATIVE=OFF ; \
cmake --build . --config Release -j -v ; \
cmake --install . --prefix /llama-cpp
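
The net effect is a builder stage that installs only the oneAPI pieces the SYCL build of llama.cpp actually needs (MKL SYCL, oneDNN, and the DPC++ compiler) instead of the full intel-oneapi-base-toolkit, drops lspci/clinfo from the builder, and switches GGML_NATIVE off so the binary is not tuned to the build host. As a rough local check of the builder-image size under this change (a sketch only: it assumes the build context is the repository's container-images directory, as the COPY intel-gpu/oneAPI.repo path suggests, and the image tag is hypothetical):

    # Build just the builder stage, then list its size
    podman build --target builder -t intel-gpu-builder-test \
        -f container-images/intel-gpu/Containerfile container-images
    podman images intel-gpu-builder-test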
