Collective Knowledge repository for collaboratively benchmarking and optimising the embedded deep vision runtime library for the Jetson TX1
CK-TensorRT is an open framework for collaborative and reproducible optimisation of convolutional neural networks on the Jetson TX1. It is based on the Jetson Inference framework from Dustin Franklin (Jetson Developer @ NVIDIA) and the Collective Knowledge framework from the cTuning Foundation. In essence, CK-TensorRT is a suite of convenient wrappers for building, evaluating and optimising the performance of the Jetson Inference runtime library on the Jetson TX1.
TBD
$ sudo apt install coreutils \
build-essential \
make \
cmake \
wget \
git \
python \
python-pip
TBD
$ sudo pip install ck
$ ck version
$ ck pull repo:ck-tensorrt --url=https://github.com/dividiti/ck-tensorrt
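To confirm that the repository has been registered with CK, you can list the repositories known to your local CK installation (a minimal sanity check; the exact output depends on your setup):

$ ck list repo | grep ck-tensorrt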
The first time you run a TensorRT program (e.g. tensorrt-test), CK will build and install all missing dependencies on your machine, download the required data sets and start the benchmark:
$ ck run program:tensorrt-test
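If you prefer to separate the build and benchmarking steps, CK's generic program workflow also exposes an explicit compile stage (a sketch using standard CK commands; the program name is taken from this repository):

$ ck compile program:tensorrt-test
$ ck run program:tensorrt-test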
We are working with the community to unify and crowdsource performance analysis and tuning of various DNN frameworks (and other realistic workloads) using Collective Knowledge technology:
- CK-Caffe
- CK-TinyDNN
- Android app for DNN crowd-benchmarking and crowd-tuning
- CK-powered ARM workload automation
We use crowd-benchmarking and crowd-tuning of such realistic workloads across diverse hardware for open academic and industrial R&D challenges. Join this community effort!