repo refactoring (#61)
See the branch "2024_Oct" for a backup of the old master branch

Co-authored-by: Chengzhe Xu <[email protected]>
chengzhe-xu and Chengzhe Xu authored Oct 23, 2024
1 parent 2bf9d5b commit 8ebb475
Showing 220 changed files with 18 additions and 51,977 deletions.
60 changes: 0 additions & 60 deletions .bazelrc

This file was deleted.

107 changes: 0 additions & 107 deletions .clang-format

This file was deleted.

31 changes: 0 additions & 31 deletions .clang-tidy

This file was deleted.

127 changes: 0 additions & 127 deletions .dazelrc

This file was deleted.

9 changes: 0 additions & 9 deletions .gitignore
@@ -1,14 +1,5 @@
-bazel*
-docker/DRIVE/pdk_files/*
-docker/DRIVE/qnx_toolchain/*
-docker/Jetson/jetpack_files/*
 .vscode/
-.dazel_run
-.nvvp
 __pycache__/
-# The experiments space can be used to keep things you
-# don't want to push up but want to use the build system
-experiments/
 .DS_Store
 ._.DS_Store
 ._*
27 changes: 0 additions & 27 deletions .style.yapf

This file was deleted.

3 changes: 3 additions & 0 deletions AV-Solutions/README.md
@@ -1,6 +1,9 @@
# Autonomous Vehicle Solutions
This folder contains samples for autonomous vehicles on the NVIDIA DRIVE platform, including deployment of SOTA methods with TensorRT and inference application design. More is on the way, so please stay tuned.

## Sparsity in INT8
[Sparsity in INT8](./SparsityINT8/) contains the PyTorch codebase for sparse INT8 training and TensorRT inference, demonstrating the workflow for leveraging both structured sparsity and quantization for more efficient deployment. Please refer to ["Sparsity in INT8: Training Workflow and Best Practices for NVIDIA TensorRT Acceleration"](https://developer.nvidia.com/blog/sparsity-in-int8-training-workflow-and-best-practices-for-tensorrt-acceleration/) for more details.
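
In outline, the workflow prunes a trained network to 2:4 structured sparsity, adds fake quantization for INT8 QAT, fine-tunes to recover accuracy, and then builds a sparsity-enabled TensorRT engine. Below is a minimal sketch of that flow, assuming NVIDIA's `apex` ASP and `pytorch-quantization` toolkits are installed; the ResNet-50 model, optimizer, and omitted calibration/fine-tuning steps are placeholders, not the sample's code.

```python
import torch
import torchvision.models as models
from apex.contrib.sparsity import ASP                      # 2:4 structured sparsity
from pytorch_quantization import quant_modules, quant_nn   # INT8 QAT

# Swap nn layers for quantized equivalents before the model is built.
quant_modules.initialize()
model = models.resnet50(pretrained=True).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# Compute 2:4 sparsity masks on the already-trained weights.
ASP.prune_trained_model(model, optimizer)

# ... calibrate quantizer ranges and fine-tune here to recover accuracy ...

# Export Q/DQ ONNX for TensorRT engine building
# (e.g. trtexec --onnx=resnet50_sparse_int8.onnx --int8 --sparsity=enable).
quant_nn.TensorQuantizer.use_fb_fake_quant = True
dummy = torch.randn(1, 3, 224, 224, device="cuda")
torch.onnx.export(model, dummy, "resnet50_sparse_int8.onnx", opset_version=13)
```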

## Multi-task model inference on multiple devices
[Multi-task model inference on multiple devices](./mtmi/) demonstrates the deployment of a multi-task network on the NVIDIA DRIVE Orin platform using both the GPU and the DLA. Please refer to our webinar on [Optimizing Multi-task Model Inference for Autonomous Vehicles](https://www.nvidia.com/en-us/on-demand/session/other2024-inferenceauto/).
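
At a high level, targeting the DLA means building a TensorRT engine whose default device type is DLA, with GPU fallback for unsupported layers; the sample's multi-device inference application is more involved. Here is a hedged sketch using the TensorRT 8.x Python API, where the ONNX file name and DLA core index are placeholder assumptions.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    assert parser.parse(f.read()), "failed to parse ONNX model"

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)          # DLA requires FP16 or INT8
config.default_device_type = trt.DeviceType.DLA
config.DLA_core = 0                            # Orin exposes DLA cores 0 and 1
config.set_flag(trt.BuilderFlag.GPU_FALLBACK)  # unsupported layers run on GPU

engine = builder.build_serialized_network(network, config)
with open("model_dla0.engine", "wb") as f:
    f.write(engine)
```

The equivalent engine can also be built from the command line with `trtexec --onnx=model.onnx --fp16 --useDLACore=0 --allowGPUFallback`.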

14 files renamed without changes.