Release v0.0.4.
alexmillane committed Apr 6, 2023
1 parent 33c0089 commit 7f72f2f
Showing 351 changed files with 17,912 additions and 7,735 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -43,6 +43,8 @@

# Python
*.pyc
**/*.egg-info
**/__pycache__

# Reconstructions
*.ply
14 changes: 12 additions & 2 deletions Jenkinsfile
@@ -23,6 +23,11 @@ pipeline {
sh '''cd nvblox/build && cmake .. -DCMAKE_INSTALL_PREFIX=../install && make clean && make -j8 && make install'''
}
}
stage('Lint') {
steps {
sh '''bash nvblox/lint/lint_nvblox_h.sh'''
}
}
stage('Test x86') {
steps {
sh '''cd nvblox/build/tests && ctest -T test --no-compress-output'''
@@ -69,10 +74,10 @@ pipeline {
}
}
}
stage("Jetson 5.0.2") {
stage("Jetson 5.1.1") {
agent {
dockerfile {
label 'jetson-5.0.2'
label 'jp-5.1.1'
reuseNode true
filename 'docker/Dockerfile.jetson_deps'
args '-u root --runtime nvidia --gpus all -v /var/run/docker.sock:/var/run/docker.sock:rw'
@@ -86,6 +91,11 @@
sh '''cd nvblox/build && cmake .. -DCMAKE_INSTALL_PREFIX=../install && make clean && make -j8 && make install'''
}
}
stage('Lint') {
steps {
sh '''bash nvblox/lint/lint_nvblox_h.sh'''
}
}
stage('Test Jetson') {
steps {
sh '''cd nvblox/build/tests && ctest -T test --no-compress-output'''
122 changes: 92 additions & 30 deletions README.md
@@ -1,30 +1,75 @@
# nvblox
# nvblox ![nvblox_logo](docs/images/nvblox_logo_64.png)

Signed Distance Functions (SDFs) on NVIDIA GPUs.

<div align="left"><img src="docs/images/nvblox_logo.png" width=256px/></div>
<div align="center"><img src="docs/images/3dmatch.gif" width=600px/></div>


An SDF library which offers
* Support for storage of various voxel types
* GPU accelerated agorithms such as:
A GPU SDF library which offers
* GPU accelerated algorithms such as:
* TSDF construction
* Occupancy mapping
* ESDF construction
* Meshing
* ROS2 interface (see [isaac_ros_nvblox](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox))
* ~~Python bindings~~ (coming soon)
* ROS 2 interface (see [isaac_ros_nvblox](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox))
* Support for storage of various voxel types, and easily extended to custom voxel types.

Above we show reconstruction using data from the [3DMatch dataset](https://3dmatch.cs.princeton.edu/), specifically the [Sun3D](http://sun3d.cs.princeton.edu/) `mit_76_studyroom` scene.

## Table of Contents

- [nvblox ](#nvblox-)
- [Table of Contents](#table-of-contents)
- [Why nvblox?](#why-nvblox)
- [How to use nvblox](#how-to-use-nvblox)
- [Out-of-the-box Reconstruction/ROS 2 Interface](#out-of-the-box-reconstructionros-2-interface)
- [Public Datasets](#public-datasets)
- [C++ Interface](#c-interface)
- [Native Installation](#native-installation)
- [Install dependencies](#install-dependencies)
- [Build and run tests](#build-and-run-tests)
- [Run an example](#run-an-example)
- [Docker](#docker)
- [Additional instructions for Jetson Xavier](#additional-instructions-for-jetson-xavier)
- [Open3D on Jetson](#open3d-on-jetson)
- [Building for multiple GPU architectures](#building-for-multiple-gpu-architectures)
- [Building redistributable binaries, with static dependencies](#building-redistributable-binaries-with-static-dependencies)
- [License](#license)

# Why nvblox?

Do we need another SDF library? That depends on your use case. If you're interested in:
* **Path planning**: We provide GPU accelerated, incremental algorithms for calculating the Euclidian Signed Distance Field (ESDF) which is useful for colision checking and therefore robotic pathplanning. In contrast, existing GPU-accelerated libraries target reconstruction only, and are therefore generally not useful in a robotics context.
* **GPU acceleration**: Our previous works [voxblox](https://github.com/ethz-asl/voxblox) and [voxgraph](https://github.com/ethz-asl/voxgraph) are used for path planning, however utilize CPU compute only, which limits the speed of these toolboxes (and therefore the resolution of the maps they can build in real-time).
* **Path planning**: We provide GPU accelerated, incremental algorithms for calculating the Euclidean Signed Distance Field (ESDF) which is useful for collision checking for robotic path-planning.
* **GPU acceleration**: Our previous works [voxblox](https://github.com/ethz-asl/voxblox) and [voxgraph](https://github.com/ethz-asl/voxgraph) are used for path-planning, however utilize CPU compute only, which limits the speed of these toolboxes, and therefore the resolution of the maps they can build in real-time. nvblox is *much* faster.
* **Jetson Platform**: nvblox is written with the [NVIDIA jetson](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/) in mind. If you want to run reconstruction on an embedded GPU, you're in the right place.

Here we show slices through a distance function generated from *nvblox* using data from the [3DMatch dataset](https://3dmatch.cs.princeton.edu/), specifically the [Sun3D](http://sun3d.cs.princeton.edu/) `mit_76_studyroom` scene:
Below we visualize slices through a distance function (ESDF):

<div align="center"><img src="docs/images/nvblox_slice.gif" width=600px/></div>
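For reference, the quantity sliced above has a compact definition. A Euclidean SDF stores, for each query point, the distance to the closest surface point, with the sign indicating free versus occupied space (a textbook definition given here for orientation, not code from this repository):

```latex
d(\mathbf{x}) \;=\; s(\mathbf{x}) \, \min_{\mathbf{p} \in \partial\Omega} \lVert \mathbf{x} - \mathbf{p} \rVert_2,
\qquad
s(\mathbf{x}) =
\begin{cases}
+1 & \mathbf{x}\ \text{in free space} \\
-1 & \mathbf{x}\ \text{in occupied space}
\end{cases}
```

where \(\partial\Omega\) is the reconstructed surface. nvblox's incremental ESDF algorithms approximate this field while only updating voxels affected by new sensor data.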

# Note from the authors
This package is under active development. Feel free to make an issue for bugs or feature requests, and we always welcome pull requests!

# ROS2 Interface
This repo contains the core library which can be linked into users' projects. If you want to use nvblox on a robot out-of-the-box, please see our [ROS2 interface](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox), which downloads and builds the core library during installation.
# How to use nvblox
How you use nvblox depends on what you want to do.

## Out-of-the-box Reconstruction/ROS 2 Interface

For users who would like to use nvblox in a robotic system or connect easily to a sensor, we suggest using our [ROS 2 interface](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox).

The ROS 2 interface includes examples which allow you to:
* Build a reconstruction from a realsense camera using nvblox and NVIDIA VSLAM [here](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox/blob/main/docs/tutorial-nvblox-vslam-realsense.md).
* Navigate a robot in Isaac Sim [here](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox/blob/main/docs/tutorial-isaac-sim.md).
* Combine 3D reconstruction with image segmentation with [realsense data](https://gitlab-master.nvidia.com/isaac_ros/isaac_ros_nvblox/-/blob/envoy-dev/docs/tutorial-human-reconstruction-realsense.md) and in [simulation](https://gitlab-master.nvidia.com/isaac_ros/isaac_ros_nvblox/-/blob/envoy-dev/docs/tutorial-human-reconstruction-isaac-sim.md).

The ROS 2 interface downloads and builds the library contained in this repository during installation, so you don't need to clone and build this repository at all.

## Public Datasets

If you would like to run nvblox on public datasets, we include some executables for running reconstructions on the [3DMatch](https://3dmatch.cs.princeton.edu/), [Replica](https://github.com/facebookresearch/Replica-Dataset), and [Redwood](http://redwood-data.org/indoor_lidar_rgbd/index.html) datasets. Please see our [tutorial](./docs/pages/tutorial_public_datasets.md) on running these.

## C++ Interface

If you want to build nvblox into a larger project without ROS, or you would like to modify nvblox's core reconstruction features, this repository contains the code you need. Our [tutorial](./docs/pages/tutorial_library_interface.md) briefly describes how to interact with the reconstruction in C++.


# Native Installation
If you want to build natively, please follow these instructions. Instructions for docker are [further below](#docker).
@@ -33,10 +78,11 @@ If you want to build natively, please follow these instructions. Instructions fo
We depend on:
- gtest
- glog
- gflags (to run experiments)
- CUDA 11.0 - 11.6 (others might work but are untested)
- gflags
- SQLite 3
- CUDA 11.0 - 11.8 (others might work but are untested)
- Eigen (no need to explicitly install, a recent version is built into the library)
- SQLite 3 (for serialization)
- stdgpu (downloaded during compilation)
Please run
```
sudo apt-get install -y libgoogle-glog-dev libgtest-dev libgflags-dev python3-dev libsqlite3-dev
@@ -60,12 +106,11 @@ unzip ~/datasets/3dmatch/sun3d-mit_76_studyroom-76-1studyroom2.zip -d ~/datasets
Navigate to and run the `fuse_3dmatch` binary. From the nvblox base folder run
```
cd nvblox/build/executables
./fuse_3dmatch ~/datasets/3dmatch/sun3d-mit_76_studyroom-76-1studyroom2/ --esdf_frame_subsampling 3000 --mesh_output_path mesh.ply
./fuse_3dmatch ~/datasets/3dmatch/sun3d-mit_76_studyroom-76-1studyroom2/ mesh.ply
```
Once it's done we can view the output mesh using the Open3D viewer.
Once it's done we can view the output mesh using the Open3D viewer. Instructions for installing the Open3D viewer can be found below.
```
pip3 install open3d
python3 ../../visualization/visualize_mesh.py mesh.ply
Open3D mesh.ply
```
you should see a mesh of a room:
<div align="center"><img src="docs/images/reconstruction_in_docker_trim.png" width=600px/></div>
@@ -88,7 +133,7 @@ We have several dockerfiles (in the `docker` subfolder) which layer on top of on
* * Runs our tests.
* * Useful for checking if things are likely to pass the tests in CI.

We are reliant on nvidia docker. Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) following the instructions on that website.
We rely on nvidia docker. Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) following the instructions on that website.

We use the GPU during build, not only at run time. In docker's default configuration the GPU is only available at runtime. One must therefore set the default runtime. Add `"default-runtime": "nvidia"` to `/etc/docker/daemon.json` such that it looks like:
```
@@ -106,10 +151,12 @@ Restart docker
```
sudo systemctl restart docker
```
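The collapsed hunk above hides the full contents of `daemon.json`. For reference, a minimal config with the NVIDIA runtime as default looks like the sketch below (an assumption based on the standard NVIDIA Container Toolkit setup — if your `/etc/docker/daemon.json` already contains other options, merge this in rather than overwriting):

```shell
# Generate the config locally and validate it before installing it by hand.
cat > daemon.json <<'EOF'
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
EOF
# json.tool exits non-zero (and prints an error) if the JSON is malformed.
python3 -m json.tool daemon.json > /dev/null && echo "daemon.json OK"
# Then: sudo cp daemon.json /etc/docker/daemon.json && sudo systemctl restart docker
```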
Now Let's build Dockerfile.deps docker image. This image install contains our dependencies. (In case you are running this on the Jetson, simply substitute docker/`Dockerfile.jetson_deps` below and the rest of the instructions remain the same.
Now let's build the Dockerfile.deps docker image. This image contains our dependencies.
```
docker build -t nvblox_deps -f docker/Dockerfile.deps .
```
> In case you are running this on the Jetson, substitute the dockerfile: `docker/Dockerfile.jetson_deps`
Now let's build the Dockerfile.build. This image layers on the last, and actually builds the nvblox library.
```
docker build -t nvblox -f docker/Dockerfile.build .
@@ -125,15 +172,17 @@ apt-get update
apt-get install unzip
wget http://vision.princeton.edu/projects/2016/3DMatch/downloads/rgbd-datasets/sun3d-mit_76_studyroom-76-1studyroom2.zip -P ~/datasets/3dmatch
unzip ~/datasets/3dmatch/sun3d-mit_76_studyroom-76-1studyroom2.zip -d ~/datasets/3dmatch
cd nvblox/nvblox/build/executables
./fuse_3dmatch ~/datasets/3dmatch/sun3d-mit_76_studyroom-76-1studyroom2/ --esdf_frame_subsampling 3000 --mesh_output_path mesh.ply
cd nvblox/nvblox/build/executables/
./fuse_3dmatch ~/datasets/3dmatch/sun3d-mit_76_studyroom-76-1studyroom2/ mesh.ply
```
Now let's visualize. From the same executable folder run:
```
apt-get install python3-pip libgl1-mesa-glx
pip3 install open3d
python3 ../../visualization/visualize_mesh.py mesh.ply
apt-get install libgl1-mesa-glx libc++1 libc++1-10 libc++abi1-10 libglfw3 libpng16-16
wget https://github.com/isl-org/Open3D/releases/download/v0.13.0/open3d-app-0.13.0-Ubuntu_20.04.deb
dpkg -i open3d-app-0.13.0-Ubuntu_20.04.deb
Open3D mesh.ply
```
To visualize on the Jetson, see [below](#open3d-on-jetson).

# Additional instructions for Jetson Xavier
These instructions are for a native build on the Jetson Xavier. You can see the instructions above for running in docker.
@@ -149,7 +198,7 @@ wget -qO - https://apt.kitware.com/keys/kitware-archive-latest.asc |
```
2. Add the repository to your sources list and update.
```
sudo apt-add-repository 'deb https://apt.kitware.com/ubuntu/ bionic main'
sudo apt-add-repository 'deb https://apt.kitware.com/ubuntu/ focal main'
sudo apt-get update
```
3. Update!
@@ -161,14 +210,27 @@ sudo apt-get install cmake
export OPENBLAS_CORETYPE=ARMV8
```

## Open3D on Jetson
Open3D is available pre-compiled for the Jetson ([details here](http://www.open3d.org/docs/release/arm.html)). Install via pip:
```
apt-get install python3-pip
pip3 install open3d==0.16.0
```
> If version `0.16.0` is not available you need to upgrade your pip with `pip3 install -U pip`. You may additionally need to add the upgraded pip version to your path.
View the mesh via:
```
open3d draw mesh.ply
```

# Building for multiple GPU architectures
By default, the library builds ONLY for the compute capability (CC) of the machine it's being built on. To build binaries that can be used across multiple machines (i.e., pre-built binaries for CI, for example), you can use the `BUILD_FOR_ALL_ARCHS` flag and set it to true. Example:
```
cmake .. -DBUILD_FOR_ALL_ARCHS=True -DCMAKE_INSTALL_PREFIX=../install/ && make -j8 && make install
```

# Building redistributable binaries, with static dependencies
If you want to include nvblox in another CMake project, simply `find_package(nvblox)` should bring in the correct libraries and headers. However, if you want to include it in a different build system such as Bazel, you can see the instructions here: [docs/redistibutable.md].
If you want to include nvblox in another CMake project, simply `find_package(nvblox)` should bring in the correct libraries and headers. However, if you want to include it in a different build system such as Bazel, you can see the instructions [here](./docs/pages/redistibutable.md).
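As a sketch of the CMake route, a hypothetical consumer project could look like the following. The exported target name (`nvblox::nvblox_lib` here) and the minimum CMake version are assumptions — check the installed `nvbloxTargets.cmake` for the exact names:

```shell
# Write a minimal CMakeLists.txt for a hypothetical project linking nvblox.
# When configuring, point CMAKE_PREFIX_PATH at nvblox's install/ directory.
cat > CMakeLists.txt <<'EOF'
cmake_minimum_required(VERSION 3.16)
project(nvblox_consumer LANGUAGES CXX CUDA)

find_package(nvblox REQUIRED)

add_executable(my_app main.cpp)
target_link_libraries(my_app nvblox::nvblox_lib)
EOF
echo "wrote CMakeLists.txt"
```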

# License
This code is under an [open-source license](LICENSE) (Apache 2.0). :)
23 changes: 2 additions & 21 deletions docs/conf.py
@@ -19,34 +19,15 @@
}

extensions = [
'sphinx.ext.autosectionlabel', 'myst_parser', #'breathe', 'exhale',
'sphinx.ext.autosectionlabel'
]

project = name
master_doc = 'root'

# html_theme_options = {'logo_only': True}
html_extra_path = ['doxyoutput/html']


# # Setup the breathe extension
# breathe_projects = {"project": "./doxyoutput/xml"}
# breathe_default_project = "project"

# # Setup the exhale extension
# exhale_args = {
# "verboseBuild": False,
# "containmentFolder": "./api",
# "rootFileName": "library_root.rst",
# "rootFileTitle": "Library API",
# "doxygenStripFromPath": "..",
# "createTreeView": True,
# "exhaleExecutesDoxygen": True, # SWITCH TO TRUE
# "exhaleUseDoxyfile": True, # SWITCH TO TRUE
# "pageLevelConfigMeta": ":github_url: https://github.com/nvidia-isaac/" + name
# }

source_suffix = ['.rst', '.md']
source_suffix = ['.md']

# Tell sphinx what the primary language being documented is.
primary_domain = 'cpp'
Binary file added docs/images/3dmatch.gif
Binary file added docs/images/redwood_apartment.png
Binary file added docs/images/replica_office0.png
2 changes: 1 addition & 1 deletion docs/pages/technical.md
@@ -2,7 +2,7 @@

## Input/Outputs

Here we discuss the inputs you have to provide to nvblox, and the outputs it produces for downstream tasks. This is the default setup within ROS2 for 2D navigation, but note that other outputs are possible (such as the full 3D distance map).
Here we discuss the inputs you have to provide to nvblox, and the outputs it produces for downstream tasks. This is the default setup within ROS 2 for 2D navigation, but note that other outputs are possible (such as the full 3D distance map).

_Inputs_:
* **Depth Images**: (@ref nvblox::Image) We require input from a sensor supplying depth per pixel. Examples of such sensors are the Intel Realsense series and Kinect cameras.