Blackwell Support Codegen #3103

Open — wants to merge 3 commits into base: main
2 changes: 1 addition & 1 deletion .github/workflows/documentation.yaml
@@ -11,7 +11,7 @@ jobs:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
with:
submodules: recursive

2 changes: 1 addition & 1 deletion .github/workflows/update-relax.yaml
@@ -14,7 +14,7 @@ jobs:

steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
submodules: true

2 changes: 1 addition & 1 deletion .github/workflows/windows-build.yaml
@@ -25,7 +25,7 @@ jobs:
- uses: actions/checkout@v3
with:
submodules: 'recursive'
- uses: conda-incubator/setup-miniconda@v2
- uses: conda-incubator/setup-miniconda@v3
with:
activate-environment: mlc-llm-build
channel-priority: strict
10 changes: 5 additions & 5 deletions ci/jenkinsfile.groovy
@@ -18,12 +18,12 @@
import org.jenkinsci.plugins.pipeline.modeldefinition.Utils

run_cpu = "bash ci/bash.sh mlcaidev/ci-cpu:4d61e5d -e GPU cpu -e MLC_CI_SETUP_DEPS 1"
run_cuda = "bash ci/bash.sh mlcaidev/ci-cu121:4d61e5d -e GPU cuda-12.1 -e MLC_CI_SETUP_DEPS 1"
run_rocm = "bash ci/bash.sh mlcaidev/ci-rocm57:4d61e5d -e GPU rocm-5.7 -e MLC_CI_SETUP_DEPS 1"
run_cuda = "bash ci/bash.sh mlcaidev/ci-cu128:4d61e5d -e GPU cuda-12.8 -e MLC_CI_SETUP_DEPS 1"
run_rocm = "bash ci/bash.sh mlcaidev/ci-rocm63:4d61e5d -e GPU rocm-6.3 -e MLC_CI_SETUP_DEPS 1"

pkg_cpu = "bash ci/bash.sh mlcaidev/package-rocm61:254d630 -e GPU cpu -e MLC_CI_SETUP_DEPS 1"
pkg_cuda = "bash ci/bash.sh mlcaidev/package-cu122:254d630 -e GPU cuda-12.2 -e MLC_CI_SETUP_DEPS 1"
pkg_rocm = "bash ci/bash.sh mlcaidev/package-rocm61:254d630 -e GPU rocm-6.1 -e MLC_CI_SETUP_DEPS 1"
pkg_cpu = "bash ci/bash.sh mlcaidev/package-rocm62:254d630 -e GPU cpu -e MLC_CI_SETUP_DEPS 1"
pkg_cuda = "bash ci/bash.sh mlcaidev/package-cu128:254d630 -e GPU cuda-12.8 -e MLC_CI_SETUP_DEPS 1"
pkg_rocm = "bash ci/bash.sh mlcaidev/package-rocm63:254d630 -e GPU rocm-6.3 -e MLC_CI_SETUP_DEPS 1"


def per_exec_ws(folder) {
2 changes: 1 addition & 1 deletion ci/task/build_lib.sh
@@ -24,7 +24,7 @@ if [[ ${GPU} == rocm* ]]; then
elif [[ ${GPU} == cuda* ]]; then
echo set\(USE_VULKAN ON\) >>config.cmake
echo set\(CMAKE_CUDA_COMPILER_LAUNCHER ccache\) >>config.cmake
echo set\(CMAKE_CUDA_ARCHITECTURES "80;90"\) >>config.cmake
echo set\(CMAKE_CUDA_ARCHITECTURES "80;90;100;120"\) >>config.cmake
echo set\(CMAKE_CUDA_FLAGS \"\$\{CMAKE_CUDA_FLAGS\} -t $NUM_THREADS\"\) >>config.cmake
echo set\(USE_CUDA ON\) >>config.cmake
echo set\(USE_CUBLAS ON\) >>config.cmake
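The widened `CMAKE_CUDA_ARCHITECTURES` list is the core of the Blackwell enablement. As a hypothetical reference (the generation names below are general CUDA knowledge, not stated in this PR), the entries map to GPU generations roughly as follows:

```python
# Hypothetical reference table (not part of the PR): compute-capability
# values accepted by CMAKE_CUDA_ARCHITECTURES and the GPU generations
# they correspond to. "100" and "120" are the newly added Blackwell entries.
ARCH_GENERATION = {
    "80": "Ampere (A100)",
    "90": "Hopper (H100)",
    "100": "Blackwell (B100/B200)",
    "120": "Blackwell (RTX 50-series)",
}

def cmake_arch_list(selected):
    """Join selected compute capabilities into a CMake-style arch string."""
    return ";".join(a for a in ARCH_GENERATION if a in selected)

print(cmake_arch_list(["80", "90", "100", "120"]))  # -> 80;90;100;120
```

Building for all four values keeps a single fat binary that covers data-center and consumer Blackwell parts alongside the existing Ampere and Hopper targets, at the cost of longer compile times.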
4 changes: 2 additions & 2 deletions ci/task/test_model_compile.sh
@@ -9,11 +9,11 @@ pip install --force-reinstall wheels/*.whl

if [[ ${GPU} == cuda* ]]; then
TARGET=cuda
pip install --pre -U --no-index -f https://mlc.ai/wheels mlc-ai-nightly-cu123
pip install --pre -U --no-index -f https://mlc.ai/wheels mlc-ai-nightly-cu128
export LD_LIBRARY_PATH=/usr/local/cuda/compat/:$LD_LIBRARY_PATH
elif [[ ${GPU} == rocm* ]]; then
TARGET=rocm
pip install --pre -U --no-index -f https://mlc.ai/wheels mlc-ai-nightly-rocm57
pip install --pre -U --no-index -f https://mlc.ai/wheels mlc-ai-nightly-rocm63
elif [[ ${GPU} == metal ]]; then
TARGET=metal
pip install --pre -U --force-reinstall -f https://mlc.ai/wheels mlc-ai-nightly-cpu
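The branching above picks a nightly wheel per backend. A minimal sketch of that selection logic, assuming a helper name `wheel_suffix` that is not part of the actual CI scripts:

```shell
# Hypothetical helper (illustrative only): map the CI ${GPU} value to the
# mlc-ai nightly wheel suffix, mirroring the if/elif chain above.
wheel_suffix() {
  case "$1" in
    cuda*) echo "cu128" ;;   # CUDA builds now track the cu128 nightly
    rocm*) echo "rocm63" ;;  # ROCm builds now track the rocm63 nightly
    *)     echo "cpu" ;;     # metal (and anything else) uses the cpu wheel
  esac
}

wheel_suffix cuda-12.8   # prints: cu128
```

Centralizing the suffix like this would keep the CUDA/ROCm version bumps in one place, though the scripts in this PR update each `pip install` line directly.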
2 changes: 1 addition & 1 deletion ci/task/test_unittest.sh
@@ -8,7 +8,7 @@ if [[ -n ${MLC_CI_SETUP_DEPS:-} ]]; then
# Install dependency
pip install --force-reinstall wheels/*.whl
pip install --quiet pytest
pip install --pre -U --no-index -f https://mlc.ai/wheels mlc-ai-nightly-cu123
pip install --pre -U --no-index -f https://mlc.ai/wheels mlc-ai-nightly-cu128
export LD_LIBRARY_PATH=/usr/local/cuda/compat/:$LD_LIBRARY_PATH
fi

4 changes: 2 additions & 2 deletions cmake/gen_cmake_config.py
@@ -83,12 +83,12 @@
if use_flashInfer:
while True:
user_input = input("Enter your CUDA compute capability: ")
if user_input in ["80", "86", "89", "90"]:
if user_input in ["80", "86", "89", "90", "100", "120"]:
cmake_config_str += f"set(FLASHINFER_CUDA_ARCHITECTURES {user_input})\n"
cmake_config_str += f"set(CMAKE_CUDA_ARCHITECTURES {user_input})\n"
break
else:
print(f"Invalid input: {user_input}. FlashInfer requires 80, 86, 89, or 90.")
print(f"Invalid input: {user_input}. FlashInfer requires 80, 86, 89, 90, 100, or 120.")

print("\nWriting the following configuration to config.cmake...")
print(cmake_config_str)
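The interactive loop above accepts a compute capability and emits two `config.cmake` lines. A self-contained sketch of that validation step — `flashinfer_config_lines` and `SUPPORTED_CC` are illustrative names, not the script's actual API:

```python
# Sketch of the compute-capability check added in gen_cmake_config.py.
# The accepted set matches the PR's updated whitelist, including the
# new Blackwell values "100" and "120".
SUPPORTED_CC = {"80", "86", "89", "90", "100", "120"}

def flashinfer_config_lines(cc):
    """Return the config.cmake lines for a valid compute capability,
    or raise ValueError for an unsupported one."""
    if cc not in SUPPORTED_CC:
        raise ValueError(
            f"Invalid input: {cc}. FlashInfer requires one of "
            f"{sorted(SUPPORTED_CC, key=int)}."
        )
    return [
        f"set(FLASHINFER_CUDA_ARCHITECTURES {cc})",
        f"set(CMAKE_CUDA_ARCHITECTURES {cc})",
    ]
```

Note that unlike `ci/task/build_lib.sh`, which builds a multi-arch binary (`80;90;100;120`), this path pins both variables to the single capability the user enters.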
14 changes: 14 additions & 0 deletions docs/install/tvm.rst
@@ -53,6 +53,13 @@ A nightly prebuilt Python package of Apache TVM Unity is provided.
conda activate your-environment
python -m pip install --pre -U -f https://mlc.ai/wheels mlc-ai-nightly-cu123

.. tab:: CUDA 12.8

.. code-block:: bash

conda activate your-environment
python -m pip install --pre -U -f https://mlc.ai/wheels mlc-ai-nightly-cu128

.. tab:: ROCm 6.1

.. code-block:: bash
@@ -67,6 +74,13 @@ A nightly prebuilt Python package of Apache TVM Unity is provided.
conda activate your-environment
python -m pip install --pre -U -f https://mlc.ai/wheels mlc-ai-nightly-rocm62

.. tab:: ROCm 6.3

.. code-block:: bash

conda activate your-environment
python -m pip install --pre -U -f https://mlc.ai/wheels mlc-ai-nightly-rocm63

.. tab:: Vulkan

Supported in all Linux packages.