CI: Add custom GitHub Actions job to run clang-tidy (#5235)
Follow-up from #4983; should be merged after that.

- [x] Adjust the clang-tidy script to only run CMake to generate the compile-commands database, not to actually build.
- [x] #5496

Authors:
  - Simon Adorf (https://github.com/csadorf)
  - Bradley Dice (https://github.com/bdice)
  - Dante Gama Dessavre (https://github.com/dantegd)

Approvers:
  - AJ Schmidt (https://github.com/ajschmidt8)
  - Dante Gama Dessavre (https://github.com/dantegd)

URL: #5235
csadorf authored Jul 13, 2023
1 parent c3d4dbb commit 3e59b65
Showing 14 changed files with 280 additions and 69 deletions.
11 changes: 11 additions & 0 deletions .github/workflows/pr.yaml
@@ -13,6 +13,7 @@ jobs:
pr-builder:
needs:
- checks
- clang-tidy
- conda-cpp-build
- conda-cpp-tests
- conda-python-build
@@ -29,6 +30,16 @@ jobs:
uses: rapidsai/shared-action-workflows/.github/workflows/checks.yaml@cuda-120
with:
enable_check_generated_files: false
clang-tidy:
needs: checks
secrets: inherit
uses: rapidsai/shared-action-workflows/.github/workflows/[email protected]
with:
build_type: pull-request
node_type: "cpu8"
arch: "amd64"
container_image: "rapidsai/ci:latest"
run_script: "ci/run_clang_tidy.sh"
conda-cpp-build:
needs: checks
secrets: inherit
41 changes: 41 additions & 0 deletions CONTRIBUTING.md
@@ -115,6 +115,47 @@ please see the `.pre-commit-config.yaml` file.
of files are up-to-date and in the correct format.
- `codespell`: Checks for spelling mistakes

### Clang-tidy

To maintain high-quality code, cuML uses not only pre-commit hooks with various
formatters and linters but also the clang-tidy tool. Clang-tidy detects
potential issues in C and C++ code and typically runs as part of our continuous
integration (CI) process.

While it's generally unnecessary for contributors to run clang-tidy locally,
there might be cases where you would want to do so. There are two primary
methods to run clang-tidy on your local machine: using Docker or Conda.

* **Docker**

1. Navigate to the repository root directory.
2. Run the following Docker command:

```bash
docker run --rm --pull always \
--mount type=bind,source="$(pwd)",target=/opt/repo --workdir /opt/repo \
-e SCCACHE_S3_NO_CREDENTIALS=1 \
rapidsai/ci:latest /opt/repo/ci/run_clang_tidy.sh
```


* **Conda**

1. Navigate to the repository root directory.
2. Create and activate the needed conda environment:
```bash
conda env create --force -n cuml-clang-tidy -f conda/environments/clang_tidy_cuda-118_arch-x86_64.yaml
conda activate cuml-clang-tidy
```
3. Generate the compile command database with
```bash
./build.sh --configure-only libcuml
```
4. Run clang-tidy with the following command:
```bash
python cpp/scripts/run-clang-tidy.py --config pyproject.toml
```
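Drivers in the style of `run-clang-tidy.py` work off the `compile_commands.json` that the configure step writes: they read the database and issue one clang-tidy invocation per translation unit. A minimal sketch of that idea (a hypothetical helper for illustration, not the actual script):

```python
import json

def tidy_commands(compile_db, checks="modernize-*"):
    """Build one clang-tidy invocation per compile_commands.json entry.

    compile_db is the parsed JSON array that CMake writes when
    CMAKE_EXPORT_COMPILE_COMMANDS is enabled: each entry names a source
    file and the directory it is compiled from, which clang-tidy's -p
    flag uses to look up the right compile flags.
    """
    return [
        ["clang-tidy", f"-checks={checks}", "-p", entry["directory"], entry["file"]]
        for entry in compile_db
    ]

# Tiny stand-in database (real ones are generated by CMake):
db = json.loads("""
[{"directory": "/opt/repo/cpp/build",
  "file": "/opt/repo/cpp/src/foo.cu",
  "command": "nvcc ..."}]
""")
print(tidy_commands(db)[0][:2])  # ['clang-tidy', '-checks=modernize-*']
```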

### Managing PR labels

Each PR must be labeled according to whether it is a "breaking" or "non-breaking" change (using Github labels). This is used to highlight changes that users should know about when upgrading.
10 changes: 6 additions & 4 deletions build.sh
@@ -19,7 +19,7 @@ ARGS=$*
REPODIR=$(cd $(dirname $0); pwd)

VALIDTARGETS="clean libcuml cuml cpp-mgtests prims bench prims-bench cppdocs pydocs"
VALIDFLAGS="-v -g -n --allgpuarch --singlegpu --nolibcumltest --nvtx --show_depr_warn --codecov --ccache -h --help "
VALIDFLAGS="-v -g -n --allgpuarch --singlegpu --nolibcumltest --nvtx --show_depr_warn --codecov --ccache --configure-only -h --help "
VALIDARGS="${VALIDTARGETS} ${VALIDFLAGS}"
HELP="$0 [<target> ...] [<flag> ...]
where <target> is:
@@ -46,6 +46,7 @@ HELP="$0 [<target> ...] [<flag> ...]
--codecov - Enable code coverage support by compiling with Cython linetracing
and profiling enabled (WARNING: Impacts performance)
--ccache - Use ccache to cache previous compilations
--configure-only - Invoke CMake without actually building
--nocloneraft - CMake will clone RAFT even if it is in the environment, use this flag to disable that behavior
--static-treelite - Force CMake to use the Treelite static libs, cloning and building them if necessary
@@ -133,6 +134,7 @@ LONG_ARGUMENT_LIST=(
"ccache"
"nolibcumltest"
"nocloneraft"
"configure-only"
)

# Short arguments
@@ -260,7 +262,7 @@ if completeBuild || hasArg libcuml || hasArg prims || hasArg bench || hasArg pri
fi

# If `./build.sh cuml` is called, don't build C/C++ components
if completeBuild || hasArg libcuml || hasArg prims || hasArg bench || hasArg cpp-mgtests; then
if (! hasArg --configure-only) && (completeBuild || hasArg libcuml || hasArg prims || hasArg bench || hasArg cpp-mgtests); then
cd ${LIBCUML_BUILD_DIR}
if [ -n "${INSTALL_TARGET}" ]; then
cmake --build ${LIBCUML_BUILD_DIR} -j${PARALLEL_LEVEL} ${build_args} --target ${INSTALL_TARGET} ${VERBOSE_FLAG}
@@ -269,14 +271,14 @@ if completeBuild || hasArg libcuml || hasArg prims || hasArg bench || hasArg cpp
fi
fi
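The new `--configure-only` guard leans on build.sh's `hasArg` helper, whose definition is not part of this diff. A sketch of the pattern, assuming `hasArg` mirrors the common RAPIDS build-script helper (treat the definition below as an assumption, with a fixed argument list for demonstration):

```shell
#!/bin/bash
# Assumed stand-ins for the state build.sh derives from its CLI arguments:
ARGS="--configure-only libcuml"
NUMARGS=2

# Assumed definition: checks whether a word appears in the argument list.
hasArg() {
    (( NUMARGS != 0 )) && (echo " ${ARGS} " | grep -q " $1 ")
}

# Mirrors the guard added in the diff:
# if (! hasArg --configure-only) && (completeBuild || ...); then
if (! hasArg --configure-only); then
    echo "building targets"
else
    echo "configure-only: skipping the build step"
fi
```

With `--configure-only` present, the else branch prints `configure-only: skipping the build step`; the subshell negation leaves the guard readable while short-circuiting the expensive build commands.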

if hasArg cppdocs; then
if (! hasArg --configure-only) && hasArg cppdocs; then
cd ${LIBCUML_BUILD_DIR}
cmake --build ${LIBCUML_BUILD_DIR} --target docs_cuml
fi


# Build and (optionally) install the cuml Python package
if completeBuild || hasArg cuml || hasArg pydocs; then
if (! hasArg --configure-only) && (completeBuild || hasArg cuml || hasArg pydocs); then
# Append `-DFIND_CUML_CPP=ON` to CUML_EXTRA_CMAKE_ARGS unless a user specified the option.
SKBUILD_EXTRA_CMAKE_ARGS="${CUML_EXTRA_CMAKE_ARGS}"
if [[ "${CUML_EXTRA_CMAKE_ARGS}" != *"DFIND_CUML_CPP"* ]]; then
22 changes: 22 additions & 0 deletions ci/run_clang_tidy.sh
@@ -0,0 +1,22 @@
#!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.

set -euo pipefail

rapids-logger "Create clang_tidy conda environment"
. /opt/conda/etc/profile.d/conda.sh

rapids-dependency-file-generator \
--output conda \
--file_key clang_tidy \
--matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION}" | tee env.yaml

rapids-mamba-retry env create --force -f env.yaml -n clang_tidy
# Temporarily allow unbound variables for conda activation.
set +u && conda activate clang_tidy && set -u

./build.sh --configure-only libcuml

rapids-logger "Run clang-tidy"

python cpp/scripts/run-clang-tidy.py --config pyproject.toml
38 changes: 38 additions & 0 deletions conda/environments/clang_tidy_cuda-118_arch-x86_64.yaml
@@ -0,0 +1,38 @@
# This file is generated by `rapids-dependency-file-generator`.
# To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
channels:
- rapidsai
- rapidsai-nightly
- dask/label/dev
- conda-forge
- nvidia
dependencies:
- c-compiler
- clang-tools==15.0.7
- clang==15.0.7
- cmake>=3.26.4
- cuda-version=11.8
- cudatoolkit
- cxx-compiler
- gcc_linux-64=11.*
- gmock>=1.13.0
- gtest>=1.13.0
- libcublas-dev=11.11.3.6
- libcublas=11.11.3.6
- libcufft-dev=10.9.0.58
- libcufft=10.9.0.58
- libcumlprims==23.8.*
- libcurand-dev=10.3.0.86
- libcurand=10.3.0.86
- libcusolver-dev=11.4.1.48
- libcusolver=11.4.1.48
- libcusparse-dev=11.7.5.86
- libcusparse=11.7.5.86
- libraft-headers==23.8.*
- libraft==23.8.*
- librmm==23.8.*
- ninja
- nvcc_linux-64=11.8
- sysroot_linux-64==2.17
- tomli
name: clang_tidy_cuda-118_arch-x86_64
@@ -94,7 +94,7 @@ struct buffer {
}
return result;
}()},
data_{[this, input_data, mem_type]() {
data_{[input_data, mem_type]() {
auto result = data_store{};
switch (mem_type) {
case device_type::cpu: result = non_owning_buffer<device_type::cpu, T>{input_data}; break;
@@ -134,7 +134,7 @@ struct buffer {
}
return result;
}()},
data_{[this, &other, mem_type, device, stream]() {
data_{[this, &other, mem_type, stream]() {
auto result = data_store{};
auto result_data = static_cast<T*>(nullptr);
if (mem_type == device_type::cpu) {
