This repository is based on the TensorFlow custom-op template repository; for a detailed guide on how to add an op, refer to that template.
The C++ toolchain depends on the latest tensorflow-custom-op Docker container (tested with `tensorflow/tensorflow:2.2.0-custom-op-gpu-ubuntu16`). To set it up, run:

```bash
docker pull tensorflow/tensorflow:custom-op-gpu-ubuntu16
sudo docker run --gpus all --privileged -it -v ${PWD}:/working_dir -w /working_dir tensorflow/tensorflow:custom-op-gpu-ubuntu16
```

Inside the container, start the configuration script:

```bash
./configure.sh
```
Answer the first question with `y` (yes) and specify the TensorFlow version you want to build for (tested with TF versions 2.1, 2.2, and 2.3).
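Before building, you may want to confirm which TensorFlow version you are building against and that the container actually sees the GPU. A minimal sketch, assuming only the TensorFlow 2.x that ships with the container:

```python
import tensorflow as tf

# TensorFlow version the custom op will be compiled against.
print("TF version:", tf.__version__)

# GPUs visible inside the container; requires `--gpus all` on `docker run`.
print("GPUs:", tf.config.list_physical_devices("GPU"))
```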
To compile the pip package, run:

```bash
bazel build build_pip_pkg
bazel-bin/build_pip_pkg artifacts
```

The resulting `.whl` package is placed in `artifacts/`; by default it is a Python 3 package (tested with Python 3.6).
To install the package via pip, run:

```bash
pip3 install artifacts/*.whl
```
To test the installation, first move out of the repository root (so Python imports the installed package rather than the local source tree) and import the package:

```bash
cd ..
python3 -c 'import tfg_custom_ops'
```
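For a slightly more informative smoke test, the sketch below imports the package and prints where it was loaded from. It assumes nothing beyond the `tfg_custom_ops` package name used above; the op-level symbols the package exports are not assumed here.

```python
import tfg_custom_ops

# The path should point into site-packages (the pip-installed wheel),
# not into the source checkout.
print(tfg_custom_ops.__file__)
```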
You may use this software under the Apache 2.0 License.