
[AutoBump] Merge with fixes of 13ee7c21 (Dec 19) (143) #534


Merged on Feb 14, 2025 (53 commits)

Commits (changes from all commits):
13ee7c2
[TOSA] Add legalization for torch.aten.unfold (#3922)
justin-ngo-arm Dec 19, 2024
02fa411
[torch-mlir][doc] remove MPACT as example (#3930)
aartbik Dec 20, 2024
a6179c0
build: manually update PyTorch version (#3919)
vivekkhandelwal1 Dec 20, 2024
38a0a5a
Fix output size computation for MaxPool2D for ceil_mode = true. (#3890)
sahas3 Dec 23, 2024
fee88fd
[ONNX] clarifies error message for upsample interpolation mode (#3940)
bjacobgordon Jan 6, 2025
356540a
[ONNX] Delete redundant dynamic dim check for result types (#3942)
AmosLewis Jan 7, 2025
bf594b0
[TOSA] Add reflection_pad3d lowering (#3933)
sahas3 Jan 7, 2025
f92c587
[docs] Refresh add_ops.md (#3939)
bjacobgordon Jan 8, 2025
a45356e
[TMTensor] Cast i1 to i32 by extsi instead of trunci for aten scatter_…
AmosLewis Jan 10, 2025
98e4eb2
[TOSA] Add lowering for aten.expm1 (#3949)
justin-ngo-arm Jan 10, 2025
9a167e2
[TOSA] Update tosa.cast check according to TOSA v1.0 spec (#3948)
justin-ngo-arm Jan 10, 2025
4a2cbb9
Remove TOSA make_fx configuration (#3951)
mgehre-amd Jan 13, 2025
11efcda
[python] Make module imports relative in `fx.py` and `compiler_utils.…
shelkesagar29 Jan 13, 2025
62eb38b
[ONNX] improve regex matching in onnx-importer name sanitization (#3955)
zjgarvey Jan 14, 2025
040aec9
[lib/conversion] Create seed only if needed in `convert-torch-convers…
shelkesagar29 Jan 14, 2025
4f9f82d
[cmake] Enable accepting external stablehlo project (#3927)
shelkesagar29 Jan 14, 2025
09af3b6
Clarify `min_val` semantics for `torch.symbolic_int` op (#3959)
sjain-stanford Jan 14, 2025
33337fc
Migrate ci.yml to AKS cpubuilder cluster. (#3967)
saienduri Jan 17, 2025
db82bb9
Include `torch-mlir-opt` in Python wheels (#3964)
marbre Jan 17, 2025
8fa3bd9
Update GH actions with Dependabot (#3966)
marbre Jan 17, 2025
a6f452e
[AutoBump] Merge with fixes of 13ee7c21 (Dec 19)
mgehre-amd Feb 11, 2025
c88523f
[AutoBump] Merge with 02fa4118 (Dec 20)
mgehre-amd Feb 11, 2025
f0ebaac
[AutoBump] Merge with fixes of a6179c07 (Dec 20)
mgehre-amd Feb 12, 2025
30b2e76
Update xfail
mgehre-amd Feb 13, 2025
ec59592
Update xfail
mgehre-amd Feb 13, 2025
af54cba
[AutoBump] Merge with 356540af (Jan 07)
mgehre-amd Feb 13, 2025
b822be4
[AutoBump] Merge with fixes of bf594b03 (Jan 07)
mgehre-amd Feb 13, 2025
63e7a8b
[AutoBump] Merge with fixes of f92c587c (Jan 08)
mgehre-amd Feb 13, 2025
832aa86
[AutoBump] Merge with fixes of a45356e4 (Jan 10)
mgehre-amd Feb 13, 2025
d10b4d4
xfail
mgehre-amd Feb 13, 2025
daafcb1
[AutoBump] Merge with 98e4eb28 (Jan 10)
mgehre-amd Feb 13, 2025
8b59c18
[AutoBump] Merge with fixes of 9a167e2d (Jan 10)
mgehre-amd Feb 13, 2025
b8d39ab
[AutoBump] Merge with 62eb38bc (Jan 14)
mgehre-amd Feb 13, 2025
51bc032
[AutoBump] Merge with fixes of 040aec90 (Jan 14)
mgehre-amd Feb 13, 2025
c654228
Fix compiler error
mgehre-amd Feb 13, 2025
c6322e0
Merge pull request #535 from Xilinx/bump_to_02fa4118
mgehre-amd Feb 13, 2025
0240980
Merge pull request #536 from Xilinx/bump_to_a6179c07
mgehre-amd Feb 13, 2025
d787ddd
Merge pull request #539 from Xilinx/bump_to_356540af
mgehre-amd Feb 13, 2025
551df07
Merge pull request #540 from Xilinx/bump_to_bf594b03
mgehre-amd Feb 13, 2025
365684d
Merge pull request #541 from Xilinx/bump_to_f92c587c
mgehre-amd Feb 13, 2025
2168e8e
Merge pull request #542 from Xilinx/bump_to_a45356e4
mgehre-amd Feb 13, 2025
d426c0a
Merge pull request #543 from Xilinx/bump_to_98e4eb28
mgehre-amd Feb 13, 2025
54b73be
[AutoBump] Merge with 09af3b60 (Jan 14)
mgehre-amd Feb 13, 2025
979a59b
[AutoBump] Merge with fixes of 33337fc6 (Jan 17)
mgehre-amd Feb 13, 2025
ac041b4
[AutoBump] Merge with db82bb97 (Jan 17)
mgehre-amd Feb 13, 2025
6108668
[AutoBump] Merge with fixes of 8fa3bd9a (Jan 17)
mgehre-amd Feb 13, 2025
8b465e3
Merge pull request #544 from Xilinx/bump_to_9a167e2d
mgehre-amd Feb 14, 2025
7dbb08a
Merge pull request #545 from Xilinx/bump_to_62eb38bc
mgehre-amd Feb 14, 2025
9f5cca3
Merge pull request #546 from Xilinx/bump_to_040aec90
mgehre-amd Feb 14, 2025
6ae7af5
Merge pull request #547 from Xilinx/bump_to_09af3b60
mgehre-amd Feb 14, 2025
b088da1
Merge pull request #548 from Xilinx/bump_to_33337fc6
mgehre-amd Feb 14, 2025
d82abea
Merge pull request #549 from Xilinx/bump_to_db82bb97
mgehre-amd Feb 14, 2025
04d4d5f
Merge pull request #550 from Xilinx/bump_to_8fa3bd9a
mgehre-amd Feb 14, 2025
Files changed:
24 changes: 14 additions & 10 deletions .github/workflows/ci.yml
@@ -27,16 +27,6 @@ jobs:
env:
CACHE_DIR: ${{ github.workspace }}/.container-cache
steps:
- name: Configure local git mirrors
run: |
# Our stock runners have access to certain local git caches. If these
# files are available, it will prime the cache and configure git to
# use them. Practically, this eliminates network/latency for cloning
# llvm.
if [[ -x /gitmirror/scripts/trigger_update_mirrors.sh ]]; then
/gitmirror/scripts/trigger_update_mirrors.sh
/gitmirror/scripts/git_config.sh
fi
- name: "Checking out repository"
uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3 # v3.5.0
with:
@@ -55,11 +45,25 @@ jobs:
restore-keys: |
build-test-cpp-asserts-manylinux-${{ matrix.torch-version }}-v2-

- name: "Setting up Python"
run: |
sudo apt update
sudo apt install software-properties-common -y
sudo add-apt-repository ppa:deadsnakes/ppa -y
sudo apt install python3.11 python3-pip -y
sudo apt-get install python3.11-dev python3.11-venv build-essential -y

- name: Install python deps (torch-${{ matrix.torch-version }})
run: |
export cache_dir="${{ env.CACHE_DIR }}"
bash build_tools/ci/install_python_deps.sh ${{ matrix.torch-version }}

- name: ccache
uses: hendrikmuhs/ccache-action@53911442209d5c18de8a31615e0923161e435875 # v1.2.16
with:
key: ${{ github.job }}-${{ matrix.torch-version }}
save: ${{ needs.setup.outputs.write-caches == 1 }}

- name: Build project
run: |
export cache_dir="${{ env.CACHE_DIR }}"
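
The new "Setting up Python" step pins the job to Python 3.11 from the deadsnakes PPA. A minimal local sanity check of the same setup, assuming an Ubuntu host with `apt` and sudo access:

```shell
# Mirror the CI "Setting up Python" step, then confirm the interpreter.
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt install -y python3.11 python3.11-dev python3.11-venv build-essential
python3.11 --version   # expect "Python 3.11.x"
```
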
12 changes: 10 additions & 2 deletions CMakeLists.txt
@@ -43,6 +43,12 @@ option(TORCH_MLIR_ENABLE_STABLEHLO "Add stablehlo dialect" ON)
if(TORCH_MLIR_ENABLE_STABLEHLO)
add_definitions(-DTORCH_MLIR_ENABLE_STABLEHLO)
endif()
# It is possible that both the stablehlo and torch_mlir projects are used in some larger compiler project.
# In that case, we don't want to use the stablehlo downloaded by torch_mlir (into the externals/stablehlo
# folder) but instead the stablehlo that is part of the top-level compiler project.
# When TORCH_MLIR_USE_EXTERNAL_STABLEHLO is enabled, the top-level compiler project is assumed to make
# stablehlo targets AND includes available (for example with `add_subdirectory` and `include_directories`).
option(TORCH_MLIR_USE_EXTERNAL_STABLEHLO "Use stablehlo from top level project" OFF)

option(TORCH_MLIR_OUT_OF_TREE_BUILD "Specifies an out of tree build" OFF)

@@ -142,7 +148,8 @@ include_directories(${CMAKE_CURRENT_BINARY_DIR}/include)

function(torch_mlir_target_includes target)
set(_dirs
$<BUILD_INTERFACE:${MLIR_INCLUDE_DIRS}>
$<BUILD_INTERFACE:${MLIR_INCLUDE_DIR}>
$<BUILD_INTERFACE:${MLIR_GENERATED_INCLUDE_DIR}>
$<BUILD_INTERFACE:${TORCH_MLIR_SOURCE_DIR}/include>
$<BUILD_INTERFACE:${TORCH_MLIR_BINARY_DIR}/include>
)
@@ -232,7 +239,8 @@ endif()
# Getting this wrong results in building large parts of the stablehlo
# project that we don't actually depend on. Further some of those parts
# do not even compile on all platforms.
if (TORCH_MLIR_ENABLE_STABLEHLO)
# Only configure StableHLO if it isn't provided from a top-level project
if (TORCH_MLIR_ENABLE_STABLEHLO AND NOT TORCH_MLIR_USE_EXTERNAL_STABLEHLO)
set(STABLEHLO_BUILD_EMBEDDED ON)
set(STABLEHLO_ENABLE_BINDINGS_PYTHON ON)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/externals/stablehlo
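
For downstream builds, a configure sketch using the new option (paths and project layout are hypothetical; the top-level project must already make stablehlo targets and includes available):

```shell
# Hypothetical top-level compiler project that vendors its own stablehlo:
# tell torch-mlir to reuse it instead of configuring externals/stablehlo.
cmake -S . -B build -GNinja \
  -DTORCH_MLIR_ENABLE_STABLEHLO=ON \
  -DTORCH_MLIR_USE_EXTERNAL_STABLEHLO=ON
```
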
1 change: 0 additions & 1 deletion README.md
@@ -60,7 +60,6 @@ Torch-MLIR is primarily a project that is integrated into compilers to bridge th

* [IREE](https://github.com/iree-org/iree.git)
* [Blade](https://github.com/alibaba/BladeDISC)
* [MPACT](https://github.com/MPACT-ORG/mpact-compiler)

While most of the project is exercised via testing paths, there are some ways that an end user can directly use the APIs without further integration:

4 changes: 2 additions & 2 deletions build_tools/ci/build_posix.sh
@@ -20,7 +20,7 @@ echo "Caching to ${cache_dir}"
mkdir -p "${cache_dir}/ccache"
mkdir -p "${cache_dir}/pip"

python="$(which python)"
python="$(which python3)"
echo "Using python: $python"

export CMAKE_TOOLCHAIN_FILE="$this_dir/linux_default_toolchain.cmake"
@@ -40,7 +40,7 @@ echo "::group::CMake configure"
cmake -S "$repo_root/externals/llvm-project/llvm" -B "$build_dir" \
-GNinja \
-DCMAKE_BUILD_TYPE=Release \
-DPython3_EXECUTABLE="$(which python)" \
-DPython3_EXECUTABLE="$(which python3)" \
-DLLVM_ENABLE_ASSERTIONS=ON \
-DTORCH_MLIR_ENABLE_WERROR_FLAG=ON \
-DCMAKE_INSTALL_PREFIX="$install_dir" \
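
Because the script now resolves `python3` rather than `python`, a quick pre-flight check (a sketch, run from the repo root in a POSIX shell):

```shell
# Verify which interpreter build_posix.sh will pick up, then run the build.
which python3 && python3 --version
bash build_tools/ci/build_posix.sh
```
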
4 changes: 2 additions & 2 deletions build_tools/ci/install_python_deps.sh
@@ -7,7 +7,7 @@ repo_root="$(cd $this_dir/../.. && pwd)"
torch_version="${1:-unknown}"

echo "::group::installing llvm python deps"
python -m pip install --no-cache-dir -r $repo_root/externals/llvm-project/mlir/python/requirements.txt
python3 -m pip install --no-cache-dir -r $repo_root/externals/llvm-project/mlir/python/requirements.txt
echo "::endgroup::"

case $torch_version in
@@ -30,5 +30,5 @@ case $torch_version in
esac

echo "::group::installing test requirements"
python -m pip install --no-cache-dir -r $repo_root/test-requirements.txt
python3 -m pip install --no-cache-dir -r $repo_root/test-requirements.txt
echo "::endgroup::"
6 changes: 3 additions & 3 deletions build_tools/ci/test_posix.sh
@@ -13,7 +13,7 @@ python -m e2e_testing.main --config=fx_importer_tosa -v
echo "::endgroup::"

echo "::group::Run ONNX e2e integration tests"
python -m e2e_testing.main --config=onnx -v
python3 -m e2e_testing.main --config=onnx -v
echo "::endgroup::"

case $torch_version in
@@ -27,13 +27,13 @@ case $torch_version in

# TODO: Need to verify in the stable version
echo "::group::Run FxImporter e2e integration tests"
python -m e2e_testing.main --config=fx_importer -v
python3 -m e2e_testing.main --config=fx_importer -v
echo "::endgroup::"

# AMD: Disabled stablehlo.
# TODO: Need to verify in the stable version
# echo "::group::Run FxImporter2Stablehlo e2e integration tests"
# python -m e2e_testing.main --config=fx_importer_stablehlo -v
# python3 -m e2e_testing.main --config=fx_importer_stablehlo -v
# echo "::endgroup::"
;;
stable)
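
Each `--config` selects an independent e2e suite, so a single suite can be rerun in isolation while debugging. A minimal sketch, assuming the `e2e_testing` package is importable (e.g. from `projects/pt1` with the build's PYTHONPATH):

```shell
# Rerun just the ONNX suite, matching the invocation in the script above.
python3 -m e2e_testing.main --config=onnx -v
```
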
3 changes: 0 additions & 3 deletions build_tools/python_deploy/build_linux_packages.sh
@@ -324,9 +324,6 @@ function test_in_tree() {
;;
esac

echo ":::: Run make_fx + TOSA e2e integration tests"
python -m e2e_testing.main --config=make_fx_tosa -v

echo ":::: Run TOSA e2e integration tests"
python -m e2e_testing.main --config=tosa -v
}
2 changes: 1 addition & 1 deletion build_tools/update_abstract_interp_lib.sh
@@ -44,7 +44,7 @@ fi
# To enable this python package, manually build torch_mlir with:
# -DTORCH_MLIR_ENABLE_JIT_IR_IMPORTER=ON
# TODO: move this package out of JIT_IR_IMPORTER.
PYTHONPATH="${pypath}" python \
PYTHONPATH="${pypath}" python3 \
-m torch_mlir.jit_ir_importer.build_tools.abstract_interp_lib_gen \
--pytorch_op_extensions=${ext_module:-""} \
--torch_transforms_cpp_dir="${torch_transforms_cpp_dir}"
2 changes: 1 addition & 1 deletion build_tools/update_torch_ods.sh
@@ -45,7 +45,7 @@ set +u
# To enable this python package, manually build torch_mlir with:
# -DTORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS=ON
# TODO: move this package out of JIT_IR_IMPORTER.
PYTHONPATH="${PYTHONPATH}:${pypath}" python \
PYTHONPATH="${PYTHONPATH}:${pypath}" python3 \
-m torch_mlir.jit_ir_importer.build_tools.torch_ods_gen \
--torch_ir_include_dir="${torch_ir_include_dir}" \
--pytorch_op_extensions="${ext_module}" \
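
The two generator scripts are typically rerun together after editing op definitions, and both now invoke `python3` explicitly. A sketch of the regeneration flow (assumes torch-mlir was built with the flags named in the comments above; `build/` is a hypothetical build directory):

```shell
# Regenerate GeneratedTorchOps.td and the abstract interpretation library,
# then rebuild so the regenerated sources are picked up.
bash build_tools/update_torch_ods.sh
bash build_tools/update_abstract_interp_lib.sh
cmake --build build
```
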
75 changes: 24 additions & 51 deletions docs/add_ops.md
@@ -2,72 +2,49 @@

Collected links and contacts for how to add ops to torch-mlir.

<details>
<summary>Turbine Camp: Start Here</summary>
This document was previously known as `turbine-camp.md` at Nod.ai, where "Turbine Camp" is part of the onboarding process: new Nod.ai folks learn the architecture of our work by adding support for 2 ops to torch-mlir. It was moved into torch-mlir because most of its content is about torch-mlir.
## [How to Add a Torch Operator](https://github.com/llvm/torch-mlir/blob/main/docs/Torch-ops-E2E-implementation.md)

Written & maintained by @renxida

Guides by other folks that were used during the creation of this document:
- [Chi Liu](https://gist.github.com/AmosLewis/dd31ab37517977b1c499d06495b4adc2)
- [Sunsoon](https://docs.google.com/document/d/1H79DwW_wnVzUU81EogwY5ueXgnl-QzKet1p2lnqPar4/edit?pli=1)

## Before you begin...

Nod-ai maintains the pipeline below, which allows us to take an ML model from e.g. Hugging Face and compile it for a variety of devices, including llvm-cpu, rocm, cuda, and more, as an optimized `vmfb` binary.

1. The pipeline begins with a huggingface model, or some other supported source like llama.cpp.
2. [nod-ai/SHARK-Turbine](https://github.com/nod-ai/SHARK-Turbine) takes a huggingface model and exports a `.mlir` file.
3. **[llvm/torch-mlir](https://github.com/llvm/torch-mlir)**, which you will be working on in turbine-camp, lowers TorchScript, torch dialect, and torch aten ops further into a mixture of the `linalg` and `math` MLIR dialects (occasionally with other dialects in the mix)
4. [IREE](https://github.com/openxla/iree) converts the final `.mlir` file into a binary (typically `.vmfb`) for running on a device (llvm-cpu, rocm, vulkan, cuda, etc.)

The details of how we do it, and helpful commands for setting up each repo, are in [Sungsoon's Shark Getting Started Google Doc](https://docs.google.com/document/d/1H79DwW_wnVzUU81EogwY5ueXgnl-QzKet1p2lnqPar4/edit?pli=1)

PS: IREE is pronounced Eerie, and hence the ghost icon.

## How to begin
0. Set up torch-mlir according to the instructions here: https://github.com/llvm/torch-mlir/blob/main/docs/development.md
1. You will start by adding support for 2 ops in torch-mlir, to get familiar with the center of our pipeline. Begin by reading [torch-mlir's documentation on how to implement a new torch op](https://github.com/llvm/torch-mlir/blob/main/docs/Torch-ops-E2E-implementation.md), and set up `llvm/torch-mlir` using https://github.com/llvm/torch-mlir/blob/main/docs/development.md
2. Pick one of the yet-unimplemented ops from the following; choose something that looks easy to you. **Make sure you create an issue by clicking the little "target" icon to the right of the op, thereby marking the op as yours**
   - [TorchToLinalg ops tracking issue](https://github.com/nod-ai/SHARK-Turbine/issues/347)
   - [TorchOnnxToTorch ops tracking issue](https://github.com/nod-ai/SHARK-Turbine/issues/215)
3. Implement it. For torch -> linalg, see the How to TorchToLinalg section below. For ONNX ops, see How to TorchOnnxToTorch below.
4. Make a pull request and reference your issue. When the pull request is closed, also close your issue to mark the op as done.

</details>
## How to Add a Conversion for an Operator

### How to TorchToLinalg

You will need to do 5 things (a sketch of the edit-regenerate-test loop follows this list):

- make sure `-DTORCH_MLIR_ENABLE_JIT_IR_IMPORTER=ON` is added during build. This enables the Python tooling used by `build_tools/update_torch_ods.sh` and `build_tools/update_abstract_interp_lib.sh`
- make sure the op exists in `torch_ods_gen.py`, then run `build_tools/update_torch_ods.sh`, and then build. This generates `GeneratedTorchOps.td`, which is used to generate the cpp and h files where op function signatures are defined.
  - Reference [torch op registry](https://github.com/pytorch/pytorch/blob/7451dd058564b5398af79bfc1e2669d75f9ecfa2/torch/csrc/jit/passes/utils/op_registry.cpp#L21)
- make sure the op exists in `abstract_interp_lib_gen.py`, then run `build_tools/update_abstract_interp_lib.sh`, and then build. This generates `AbstractInterpLib.cpp`, which provides the op's shape and dtype functions.
  - Reference [torch shape functions](https://github.com/pytorch/pytorch/blob/7451dd058564b5398af79bfc1e2669d75f9ecfa2/torch/jit/_shape_functions.py#L1311)
- write test cases. They live in `projects/pt1`. See the [Dec 2023 example](https://github.com/llvm/torch-mlir/pull/2640/files).
- implement the op in one of the `lib/Conversion/TorchToLinalg/*.cpp` files
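
Putting those steps together, a hedged sketch of running only your new op's e2e tests; the `--filter` flag and the `MyNewOpModule` name are illustrative assumptions:

```shell
# Run only the e2e tests whose names match the regex; run this where the
# e2e_testing package is importable (e.g. projects/pt1 with PYTHONPATH set).
python -m e2e_testing.main --config=linalg -v --filter MyNewOpModule
```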

Reference Examples

- [A Dec 2023 example with the most up to date lowering](https://github.com/llvm/torch-mlir/pull/2640/files)
- [Chi's simple example of adding op lowering](https://github.com/llvm/torch-mlir/pull/1454), with useful instructions and reference links in the comments for understanding the op-lowering pipeline in torch-mlir

Resources:

- [how to set up torch-mlir](https://github.com/llvm/torch-mlir/blob/main/docs/development.md)
- [torch-mlir doc on how to debug and test](https://github.com/llvm/torch-mlir/blob/main/docs/development.md#testing)
- [torch op registry](https://github.com/pytorch/pytorch/blob/7451dd058564b5398af79bfc1e2669d75f9ecfa2/torch/csrc/jit/passes/utils/op_registry.cpp#L21)
- [torch shape functions](https://github.com/pytorch/pytorch/blob/7451dd058564b5398af79bfc1e2669d75f9ecfa2/torch/jit/_shape_functions.py#L1311)

### How to TorchOnnxToTorch
1. Generate the big folder of ONNX IR. Use [this Python script](https://github.com/llvm/torch-mlir/blob/main/test/python/onnx_importer/import_smoke_test.py). Alternatively, if you're trying to support a certain model, convert that model to onnx IR with

```shell
optimum-cli export onnx --model facebook/opt-125M fb-opt
python -m torch_mlir.tools.import_onnx fb-opt/model.onnx -o fb-opt-125m.onnx.mlir
```
1. Find an instance of the Op that you're trying to implement inside the smoke tests folder or the generated model IR, and write a test case. Later you will save it to one of the files in `torch-mlir/test/Conversion/TorchOnnxToTorch`, but for now feel free to put it anywhere.
1. Implement the op in `lib/Conversion/TorchOnnxToTorch/something.cpp`.
1. Test the conversion by running `./build/bin/torch-mlir-opt -split-input-file -verify-diagnostics -convert-torch-onnx-to-torch your_mlir_file.mlir` (a sketch follows this list). For more details, see [the testing section of the doc on development](https://github.com/llvm/torch-mlir/blob/main/docs/development.md#testing). Xida usually creates a separate MLIR file to test it to his satisfaction before integrating it into one of the files at `torch-mlir/test/Conversion/TorchOnnxToTorch`.
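
For the final testing step, the usual LLVM pattern is to pipe the lowering output through `FileCheck` against the same file. A sketch, assuming an in-tree `build/` with `FileCheck` available:

```shell
# Lower the ONNX-flavored torch IR and verify the file's CHECK lines.
./build/bin/torch-mlir-opt -split-input-file -convert-torch-onnx-to-torch \
    your_mlir_file.mlir | FileCheck your_mlir_file.mlir
```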

Helpful examples:

- [A Dec 2023 example where an ONNX op is implemented](https://github.com/llvm/torch-mlir/pull/2641/files#diff-b584b152020af6d2e5dbf62a08b2f25ed5afc2c299228383b9651d22d44b5af4R493)
- [Vivek's example of ONNX op lowering](https://github.com/llvm/torch-mlir/commit/dc9ea08db5ac295b4b3f91fc776fef6a702900b9)

@@ -77,7 +54,8 @@ Helpful examples:
`. Please don't just paste the generated tests - reference them to write your own

## Links

- IMPORTANT: read [the LLVM style guide](https://llvm.org/docs/CodingStandards.html#style-issues)
- Tutorials
- [Sungsoon's Shark Getting Started Google Doc](https://docs.google.com/document/d/1H79DwW_wnVzUU81EogwY5ueXgnl-QzKet1p2lnqPar4/edit?pli=1)
- This document contains commands that would help you set up shark and run demos
@@ -96,18 +74,12 @@ Helpful examples:
- [Model and Op Support](https://github.com/nod-ai/SHARK-Turbine/issues/119)
- [ONNX op support](https://github.com/nod-ai/SHARK-Turbine/issues/215)

## [Chi's useful commands for debugging torch mlir](https://gist.github.com/AmosLewis/dd31ab37517977b1c499d06495b4adc2)

## [How to write test cases and test your new op](https://github.com/llvm/torch-mlir/blob/main/docs/development.md#testing)

## How to set up VS Code and IntelliSense for torch-mlir

Xida: This is optional. If you're using VS Code like me, you might want to set it up so you can use jump to definition / references, auto-fix, and other features.

Feel free to contact me on discord if you have trouble figuring this out.
@@ -153,4 +125,5 @@ under `torch-mlir`
"cmake.cmakePath": "/home/xida/miniconda/envs/torch-mlir/bin/cmake", // make sure this is a cmake that knows where your python is
}
```

The important things to note are `cmake.configureArgs`, which specifies the location of your torch-mlir, and `cmake.sourceDirectory`, which indicates that CMake should not build from the current directory but instead from `externals/llvm-project/llvm`.