diff --git a/.gitignore b/.gitignore index eff634ad..28e00602 100644 --- a/.gitignore +++ b/.gitignore @@ -43,6 +43,8 @@ # Python *.pyc +**/*.egg-info +**/__pycache__ # Reconstructions *.ply diff --git a/Jenkinsfile b/Jenkinsfile index b764229b..9402f779 100644 --- a/Jenkinsfile +++ b/Jenkinsfile @@ -23,6 +23,11 @@ pipeline { sh '''cd nvblox/build && cmake .. -DCMAKE_INSTALL_PREFIX=../install && make clean && make -j8 && make install''' } } + stage('Lint') { + steps { + sh '''bash nvblox/lint/lint_nvblox_h.sh''' + } + } stage('Test x86') { steps { sh '''cd nvblox/build/tests && ctest -T test --no-compress-output''' @@ -69,10 +74,10 @@ pipeline { } } } - stage("Jetson 5.0.2") { + stage("Jetson 5.1.1") { agent { dockerfile { - label 'jetson-5.0.2' + label 'jp-5.1.1' reuseNode true filename 'docker/Dockerfile.jetson_deps' args '-u root --runtime nvidia --gpus all -v /var/run/docker.sock:/var/run/docker.sock:rw' @@ -86,6 +91,11 @@ pipeline { sh '''cd nvblox/build && cmake .. -DCMAKE_INSTALL_PREFIX=../install && make clean && make -j8 && make install''' } } + stage('Lint') { + steps { + sh '''bash nvblox/lint/lint_nvblox_h.sh''' + } + } stage('Test Jetson') { steps { sh '''cd nvblox/build/tests && ctest -T test --no-compress-output''' diff --git a/README.md b/README.md index 3fee1530..59200b77 100644 --- a/README.md +++ b/README.md @@ -1,30 +1,75 @@ -# nvblox +# nvblox ![nvblox_logo](docs/images/nvblox_logo_64.png) + Signed Distance Functions (SDFs) on NVIDIA GPUs. -
+
+ -An SDF library which offers -* Support for storage of various voxel types -* GPU accelerated agorithms such as: +A GPU SDF library which offers +* GPU accelerated algorithms such as: * TSDF construction + * Occupancy mapping * ESDF construction * Meshing -* ROS2 interface (see [isaac_ros_nvblox](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox)) -* ~~Python bindings~~ (coming soon) +* ROS 2 interface (see [isaac_ros_nvblox](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox)) +* Support for storage of various voxel types, and easy extension to custom voxel types. + +Above we show reconstruction using data from the [3DMatch dataset](https://3dmatch.cs.princeton.edu/), specifically the [Sun3D](http://sun3d.cs.princeton.edu/) `mit_76_studyroom` scene. + +## Table of Contents + +- [nvblox ](#nvblox-) - [Table of Contents](#table-of-contents) +- [Why nvblox?](#why-nvblox) +- [How to use nvblox](#how-to-use-nvblox) - [Out-of-the-box Reconstruction/ROS 2 Interface](#out-of-the-box-reconstructionros-2-interface) - [Public Datasets](#public-datasets) - [C++ Interface](#c-interface) +- [Native Installation](#native-installation) - [Install dependencies](#install-dependencies) - [Build and run tests](#build-and-run-tests) - [Run an example](#run-an-example) +- [Docker](#docker) +- [Additional instructions for Jetson Xavier](#additional-instructions-for-jetson-xavier) - [Open3D on Jetson](#open3d-on-jetson) +- [Building for multiple GPU architectures](#building-for-multiple-gpu-architectures) +- [Building redistributable binaries, with static dependencies](#building-redistributable-binaries-with-static-dependencies) +- [License](#license) + +# Why nvblox? Do we need another SDF library? That depends on your use case. If you're interested in: -* **Path planning**: We provide GPU accelerated, incremental algorithms for calculating the Euclidian Signed Distance Field (ESDF) which is useful for colision checking and therefore robotic pathplanning. In contrast, existing GPU-accelerated libraries target reconstruction only, and are therefore generally not useful in a robotics context. -* **GPU acceleration**: Our previous works [voxblox](https://github.com/ethz-asl/voxblox) and [voxgraph](https://github.com/ethz-asl/voxgraph) are used for path planning, however utilize CPU compute only, which limits the speed of these toolboxes (and therefore the resolution of the maps they can build in real-time). +* **Path planning**: We provide GPU accelerated, incremental algorithms for calculating the Euclidean Signed Distance Field (ESDF) which is useful for collision checking in robotic path-planning. +* **GPU acceleration**: Our previous works [voxblox](https://github.com/ethz-asl/voxblox) and [voxgraph](https://github.com/ethz-asl/voxgraph) are used for path-planning, however they utilize CPU compute only, which limits the speed of these toolboxes, and therefore the resolution of the maps they can build in real-time. nvblox is *much* faster. +* **Jetson Platform**: nvblox is written with the [NVIDIA Jetson](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/) in mind. If you want to run reconstruction on an embedded GPU, you're in the right place. -Here we show slices through a distance function generated from *nvblox* using data from the [3DMatch dataset](https://3dmatch.cs.princeton.edu/), specifically the [Sun3D](http://sun3d.cs.princeton.edu/) `mit_76_studyroom` scene: +Below we visualize slices through a distance function (ESDF):
-# Note from the authors -This package is under active development. Feel free to make an issue for bugs or feature requests, and we always welcome pull requests! -# ROS2 Interface -This repo contains the core library which can be linked into users' projects. If you want to use nvblox on a robot out-of-the-box, please see our [ROS2 interface](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox), which downloads and builds the core library during installation. +# How to use nvblox +How you use nvblox depends on what you want to do. + +## Out-of-the-box Reconstruction/ROS 2 Interface + +For users who would like to use nvblox in a robotic system or connect easily to a sensor, we suggest using our [ROS 2 interface](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox). + +The ROS 2 interface includes examples which allow you to: +* Build a reconstruction from a realsense camera using nvblox and NVIDIA VSLAM [here](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox/blob/main/docs/tutorial-nvblox-vslam-realsense.md). +* Navigate a robot in Isaac Sim [here](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox/blob/main/docs/tutorial-isaac-sim.md). +* Combine 3D reconstruction with image segmentation with [realsense data](https://gitlab-master.nvidia.com/isaac_ros/isaac_ros_nvblox/-/blob/envoy-dev/docs/tutorial-human-reconstruction-realsense.md) and in [simulation](https://gitlab-master.nvidia.com/isaac_ros/isaac_ros_nvblox/-/blob/envoy-dev/docs/tutorial-human-reconstruction-isaac-sim.md). + +The ROS 2 interface downloads and builds the library contained in this repository during installation, so you don't need to clone and build this repository at all. + +## Public Datasets + +If you would like to run nvblox on public datasets, we include some executables for running reconstructions on the [3DMatch](https://3dmatch.cs.princeton.edu/), [Replica](https://github.com/facebookresearch/Replica-Dataset), and [Redwood](http://redwood-data.org/indoor_lidar_rgbd/index.html) datasets. Please see our [tutorial](./docs/pages/tutorial_public_datasets.md) on running these. + +## C++ Interface + +If you want to build nvblox into a larger project, without ROS, or you would like to make modifications to nvblox's core reconstruction features, this repository contains the code you need. Our [tutorial](./docs/pages/tutorial_library_interface.md) provides some brief details of how to interact with the reconstruction in C++. + # Native Installation If you want to build natively, please follow these instructions. Instructions for docker are [further below](#docker). @@ -33,10 +78,11 @@ If you want to build natively, please follow these instructions. Instructions fo We depend on: - gtest - glog -- gflags (to run experiments) -- CUDA 11.0 - 11.6 (others might work but are untested) +- gflags +- SQLite 3 +- CUDA 11.0 - 11.8 (others might work but are untested) - Eigen (no need to explicitly install, a recent version is built into the library) -- SQLite 3 (for serialization) +- stdgpu (downloaded during compilation) Please run ``` sudo apt-get install -y libgoogle-glog-dev libgtest-dev libgflags-dev python3-dev libsqlite3-dev @@ -60,12 +106,11 @@ unzip ~/datasets/3dmatch/sun3d-mit_76_studyroom-76-1studyroom2.zip -d ~/datasets Navigate to and run the `fuse_3dmatch` binary.
From the nvblox base folder run ``` cd nvblox/build/executables -./fuse_3dmatch ~/datasets/3dmatch/sun3d-mit_76_studyroom-76-1studyroom2/ --esdf_frame_subsampling 3000 --mesh_output_path mesh.ply +./fuse_3dmatch ~/datasets/3dmatch/sun3d-mit_76_studyroom-76-1studyroom2/ mesh.ply ``` -Once it's done we can view the output mesh using the Open3D viewer. +Once it's done we can view the output mesh using the Open3D viewer. Instructions for installing the Open3D viewer can be found below. ``` -pip3 install open3d -python3 ../../visualization/visualize_mesh.py mesh.ply +Open3D mesh.ply ``` you should see a mesh of a room:
@@ -88,7 +133,7 @@ We have several dockerfiles (in the `docker` subfolder) which layer on top of on * * Runs ours tests. * * Useful for checking if things are likely to pass the tests in CI. -We are reliant on nvidia docker. Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) following the instructions on that website. +We rely on nvidia docker. Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) following the instructions on that website. We use the GPU during build, not only at run time. In the default configuration the GPU is only used at at runtime. One must therefore set the default runtime. Add `"default-runtime": "nvidia"` to `/etc/docker/daemon.json` such that it looks like: ``` @@ -106,10 +151,12 @@ Restart docker ``` sudo systemctl restart docker ``` -Now Let's build Dockerfile.deps docker image. This image install contains our dependencies. (In case you are running this on the Jetson, simply substitute docker/`Dockerfile.jetson_deps` below and the rest of the instructions remain the same. +Now let's build the Dockerfile.deps docker image. This image contains our dependencies. ``` docker build -t nvblox_deps -f docker/Dockerfile.deps . ``` +> In case you are running this on the Jetson, substitute the dockerfile: `docker/Dockerfile.jetson_deps` + Now let's build the Dockerfile.build. This image layers on the last, and actually builds the nvblox library. ``` docker build -t nvblox -f docker/Dockerfile.build . ``` @@ -125,15 +172,17 @@ apt-get update apt-get install unzip wget http://vision.princeton.edu/projects/2016/3DMatch/downloads/rgbd-datasets/sun3d-mit_76_studyroom-76-1studyroom2.zip -P ~/datasets/3dmatch unzip ~/datasets/3dmatch/sun3d-mit_76_studyroom-76-1studyroom2.zip -d ~/datasets/3dmatch -cd nvblox/nvblox/build/executables -./fuse_3dmatch ~/datasets/3dmatch/sun3d-mit_76_studyroom-76-1studyroom2/ --esdf_frame_subsampling 3000 --mesh_output_path mesh.ply +cd nvblox/nvblox/build/executables/ +./fuse_3dmatch ~/datasets/3dmatch/sun3d-mit_76_studyroom-76-1studyroom2/ mesh.ply ``` Now let's visualize. From the same executable folder run: ``` -apt-get install python3-pip libgl1-mesa-glx -pip3 install open3d -python3 ../../visualization/visualize_mesh.py mesh.ply +apt-get install libgl1-mesa-glx libc++1 libc++1-10 libc++abi1-10 libglfw3 libpng16-16 +wget https://github.com/isl-org/Open3D/releases/download/v0.13.0/open3d-app-0.13.0-Ubuntu_20.04.deb +dpkg -i open3d-app-0.13.0-Ubuntu_20.04.deb +Open3D mesh.ply ``` +To visualize on the Jetson, see [below](#open3d-on-jetson). # Additional instructions for Jetson Xavier These instructions are for a native build on the Jetson Xavier. You can see the instructions above for running in docker. @@ -149,7 +198,7 @@ wget -qO - https://apt.kitware.com/keys/kitware-archive-latest.asc | ``` 2. Add the repository to your sources list and update. ``` -sudo apt-add-repository 'deb https://apt.kitware.com/ubuntu/ bionic main' +sudo apt-add-repository 'deb https://apt.kitware.com/ubuntu/ focal main' sudo apt-get update ``` 3. Update! ``` sudo apt-get install cmake ``` @@ -161,6 +210,19 @@ export OPENBLAS_CORETYPE=ARMV8 ``` +## Open3D on Jetson +Open3D is available pre-compiled for the Jetson ([details here](http://www.open3d.org/docs/release/arm.html)).
Install via pip: +``` +apt-get install python3-pip +pip3 install open3d==0.16.0 +``` +> If version `0.16.0` is not available you need to upgrade your pip with `pip3 install -U pip`. You may additionally need to add the upgraded pip version to your path. + +View the mesh via: +``` +open3d draw mesh.ply +``` + # Building for multiple GPU architectures By default, the library builds ONLY for the compute capability (CC) of the machine it's being built on. To build binaries that can be used across multiple machines (i.e., pre-built binaries for CI, for example), you can use the `BUILD_FOR_ALL_ARCHS` flag and set it to true. Example: ``` @@ -168,7 +230,7 @@ cmake .. -DBUILD_FOR_ALL_ARCHS=True -DCMAKE_INSTALL_PREFIX=../install/ && make - ``` # Building redistributable binaries, with static dependencies -If you want to include nvblox in another CMake project, simply `find_package(nvblox)` should bring in the correct libraries and headers. However, if you want to include it in a different build system such as Bazel, you can see the instructions here: [docs/redistibutable.md]. +If you want to include nvblox in another CMake project, simply `find_package(nvblox)` should bring in the correct libraries and headers. However, if you want to include it in a different build system such as Bazel, you can see the instructions [here](./docs/pages/redistibutable.md). # License This code is under an [open-source license](LICENSE) (Apache 2.0). :) diff --git a/docs/conf.py b/docs/conf.py index e8a29822..af2a69e2 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -19,34 +19,15 @@ } extensions = [ - 'sphinx.ext.autosectionlabel', 'myst_parser', #'breathe', 'exhale', + 'sphinx.ext.autosectionlabel' ] project = name master_doc = 'root' -# html_theme_options = {'logo_only': True} html_extra_path = ['doxyoutput/html'] - -# # Setup the breathe extension -# breathe_projects = {"project": "./doxyoutput/xml"} -# breathe_default_project = "project" - -# # Setup the exhale extension -# exhale_args = { -# "verboseBuild": False, -# "containmentFolder": "./api", -# "rootFileName": "library_root.rst", -# "rootFileTitle": "Library API", -# "doxygenStripFromPath": "..", -# "createTreeView": True, -# "exhaleExecutesDoxygen": True, # SWITCH TO TRUE -# "exhaleUseDoxyfile": True, # SWITCH TO TRUE -# "pageLevelConfigMeta": ":github_url: https://github.com/nvidia-isaac/" + name -# } - -source_suffix = ['.rst', '.md'] +source_suffix = ['.md'] # Tell sphinx what the primary language being documented is. primary_domain = 'cpp' diff --git a/docs/images/3dmatch.gif b/docs/images/3dmatch.gif new file mode 100644 index 00000000..3efce544 Binary files /dev/null and b/docs/images/3dmatch.gif differ diff --git a/docs/images/redwood_apartment.png b/docs/images/redwood_apartment.png new file mode 100644 index 00000000..746560cd Binary files /dev/null and b/docs/images/redwood_apartment.png differ diff --git a/docs/images/replica_office0.png b/docs/images/replica_office0.png new file mode 100644 index 00000000..425fff45 Binary files /dev/null and b/docs/images/replica_office0.png differ diff --git a/docs/pages/technical.md b/docs/pages/technical.md index c918f260..ef0d3d84 100644 --- a/docs/pages/technical.md +++ b/docs/pages/technical.md @@ -2,7 +2,7 @@ ## Input/Outputs -Here we discuss the inputs you have to provide to nvblox, and the outputs it produces for downstream tasks. This is the default setup within ROS2 for 2D navigation, but note that other outputs are possible (such as the full 3D distance map). 
+Here we discuss the inputs you have to provide to nvblox, and the outputs it produces for downstream tasks. This is the default setup within ROS 2 for 2D navigation, but note that other outputs are possible (such as the full 3D distance map). _Inputs_: * **Depth Images**: (@ref nvblox::Image) We require input from a sensor supplying depth per pixel. Examples of such sensors are the Intel Realsense series and Kinect cameras. diff --git a/docs/pages/tutorial_library_interface.md b/docs/pages/tutorial_library_interface.md new file mode 100644 index 00000000..88b149d9 --- /dev/null +++ b/docs/pages/tutorial_library_interface.md @@ -0,0 +1,142 @@ +# Library interface + +This page gives some brief details of how to interact with nvblox at the library level. For doxygen generated API docs see [our readthedocs page](https://nvblox.readthedocs.io/en/latest/index.html). + +## High-level Interface + +The top level interface is the `Mapper` class. + +```cpp +const float voxel_size_m = 0.05; +const MemoryType memory_type = MemoryType::kDevice; +Mapper mapper(voxel_size_m, memory_type); +``` + +This creates a mapper, which also allocates an empty map. Here we specify that voxels will be 5 cm in size, and will be stored on the GPU (device). + +The mapper has methods for adding depth and color images to the reconstruction. + +```cpp +mapper.integrateDepth(depth_image, T_L_C, camera); +``` + +The input image `depth_image`, the camera pose `T_L_C`, and the camera intrinsic model `camera` need to be supplied by the user of nvblox. + +The function call above integrates the observations into a 3D TSDF voxel grid. +The TSDF is rarely the final desired output; usually we would like to generate a Euclidean Signed Distance Function (ESDF) for path planning, or a mesh to view the reconstruction, from the TSDF. +Mapper includes methods for doing this: + +```cpp +mapper.updateEsdf(); +mapper.updateMesh(); +``` + +The word "update" here indicates that these functions don't generate the mesh or ESDF from scratch, but only update what's needed. + +We could then save the mesh to disk as a `.ply` file. + +```cpp +io::outputMeshLayerToPly(mapper.mesh_layer(), "/path/to/my/cool/mesh.ply"); +``` + +## Accessing Voxels + +If you're using nvblox as a library you likely want to work with voxels directly. + +Voxels are stored in the class "Layer". A map is composed of multiple layers, which are co-located voxel grids which store voxels of different types. +A typical map has, for example, TSDF and Color layers. + +Layer provides voxel accessor methods. + +```cpp +void getVoxels(const std::vector<Vector3f>& positions_L, + std::vector<VoxelType>* voxels_ptr, + std::vector<bool>* success_flags_ptr) const; + +void getVoxelsGPU(const device_vector<Vector3f>& positions_L, + device_vector<VoxelType>* voxels_ptr, + device_vector<bool>* success_flags_ptr) const; +``` +These return to the caller a vector of voxels on either the GPU or CPU. +The flags indicate whether the relevant voxel could be found (we only allocate voxels in memory when that area of space is observed). +If you request a voxel in unobserved space the lookup will fail and write a `false` to that entry in the `success_flags` vector.
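+
+For illustration, a host-side lookup could look like the sketch below. This is a minimal, hypothetical usage example rather than code taken from nvblox; in particular, we assume the mapper from the section above exposes its TSDF layer through a `tsdf_layer()` accessor.
+
+```cpp
+// Sketch: query the TSDF at two 3D positions (expressed in the layer frame L).
+std::vector<Vector3f> positions_L = {Vector3f(0.0f, 0.0f, 0.0f),
+                                     Vector3f(1.0f, 0.5f, 0.25f)};
+std::vector<TsdfVoxel> voxels;
+std::vector<bool> success_flags;
+mapper.tsdf_layer().getVoxels(positions_L, &voxels, &success_flags);
+for (size_t i = 0; i < positions_L.size(); ++i) {
+  if (success_flags[i]) {
+    // The voxel was observed: its distance (in meters) and weight are valid.
+    std::cout << "distance: " << voxels[i].distance
+              << ", weight: " << voxels[i].weight << std::endl;
+  }
+}
+```
+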
+Calling these functions requires the GPU to run a kernel to retrieve voxels from the voxel grid and copy their values into the output vector. +In `getVoxels` we additionally copy the voxels back from the GPU to host (CPU) memory. + +Getting voxels using the functions above is a multistep process internally. +The function has to: +* call a kernel which translates query positions to voxel memory locations, +* copy the voxels into an output vector, and +* optionally copy the output vector from device to host memory. + +Therefore, advanced users who want maximum query speed should access voxels directly inside a GPU kernel. +The next section discusses this process. + +## Accessing Voxels on GPU + +If you want to write high performance code which uses voxel values directly, you'll likely want to access voxels in GPU kernels. + +We illustrate how this is done by a slightly simplified version of the `getVoxelsGPU` function described in the last section. + +```cpp +__global__ void queryVoxelsKernel( + int num_queries, Index3DDeviceHashMapType<TsdfBlock> block_hash, + float block_size, const Vector3f* query_locations_ptr, + TsdfVoxel* voxels_ptr, bool* success_flags_ptr) { + const int idx = threadIdx.x + blockIdx.x * blockDim.x; + if (idx >= num_queries) { + return; + } + const Vector3f query_location = query_locations_ptr[idx]; + + TsdfVoxel* voxel; + if (!getVoxelAtPosition<TsdfVoxel>(block_hash, query_location, block_size, + &voxel)) { + success_flags_ptr[idx] = false; + } else { + success_flags_ptr[idx] = true; + voxels_ptr[idx] = *voxel; + } +} + +void getVoxelsGPU( + const TsdfLayer& layer, + const device_vector<Vector3f>& positions_L, + device_vector<TsdfVoxel>* voxels_ptr, + device_vector<bool>* success_flags_ptr) { + + const int num_queries = positions_L.size(); + + voxels_ptr->resize(num_queries); + success_flags_ptr->resize(num_queries); + + constexpr int kNumThreads = 512; + const int num_blocks = num_queries / kNumThreads + 1; + + GPULayerView<TsdfBlock> gpu_layer_view = layer.getGpuLayerView(); + + queryVoxelsKernel<<<num_blocks, kNumThreads>>>( + num_queries, gpu_layer_view.getHash().impl_, layer.block_size(), + positions_L.data(), voxels_ptr->data(), success_flags_ptr->data()); + checkCudaErrors(cudaDeviceSynchronize()); + checkCudaErrors(cudaPeekAtLastError()); +} +``` + +The first critical thing that happens in the code above is that we get a GPU view of the hash table representing the map. + +```cpp +GPULayerView<TsdfBlock> gpu_layer_view = layer.getGpuLayerView(); +``` +The hash table is used in the kernel to transform 3D query locations into memory locations for voxels. + +Inside the kernel we have +```cpp +TsdfVoxel* voxel; +getVoxelAtPosition<TsdfVoxel>(block_hash, query_location, block_size, &voxel); +``` +which places a pointer to the voxel in `voxel` and returns true if the voxel has been allocated. + +For a small example application which queries voxels on the GPU see `/nvblox/examples/src/esdf_query.cu`.
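+
+## Putting it together
+
+As a recap of the high-level interface, the sketch below strings the `Mapper` calls from this page into a minimal reconstruction loop. It is illustrative only: `getNextDepthFrame()` is a hypothetical placeholder for your own data source, which must supply the `depth_image`, `T_L_C`, and `camera` inputs described above.
+
+```cpp
+// Sketch: a minimal depth-fusion pipeline using the high-level interface.
+Mapper mapper(0.05f, MemoryType::kDevice);  // 5 cm voxels, stored on the GPU.
+
+DepthImage depth_image;
+Transform T_L_C;
+Camera camera;
+// getNextDepthFrame() stands in for the user's sensor or dataset input.
+while (getNextDepthFrame(&depth_image, &T_L_C, &camera)) {
+  mapper.integrateDepth(depth_image, T_L_C, camera);
+}
+
+mapper.updateEsdf();  // Incremental ESDF update, e.g. for path planning.
+mapper.updateMesh();  // Incremental mesh update, e.g. for visualization.
+io::outputMeshLayerToPly(mapper.mesh_layer(), "mesh.ply");
+```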
diff --git a/docs/pages/tutorial_public_datasets.md b/docs/pages/tutorial_public_datasets.md new file mode 100644 index 00000000..b910b4ec --- /dev/null +++ b/docs/pages/tutorial_public_datasets.md @@ -0,0 +1,61 @@ +# Public Datasets Tutorial + +If you would like to run nvblox on public datasets, we include some executables for fusing the [3DMatch](https://3dmatch.cs.princeton.edu/), [Replica](https://github.com/facebookresearch/Replica-Dataset), and [Redwood](http://redwood-data.org/indoor_lidar_rgbd/index.html) datasets. + +The executables are run by pointing the respective binary to a folder containing the dataset. We give details for each dataset below. + +## 3DMatch + +Instructions to run 3DMatch are given on the front page of the README [here](https://github.com/nvidia-isaac/nvblox#run-an-example). + +## Replica + +We use [Replica](https://github.com/facebookresearch/Replica-Dataset) sequences from the [NICE-SLAM](https://github.com/cvg/nice-slam) project. +First download the dataset: + +```bash +cd ~/datasets +wget https://cvg-data.inf.ethz.ch/nice-slam/data/Replica.zip +unzip Replica.zip +``` + +Now run nvblox and output a mesh. + +```bash +cd nvblox/build/executables +./fuse_replica ~/datasets/Replica/office0 --voxel_size=0.02 --color_frame_subsampling=20 mesh.ply +``` +Note that here we specify via command line flags to run the reconstruction with 2 cm voxels, and to integrate only 1 in 20 color frames. + +View the reconstruction in Open3D: +```bash +Open3D mesh.ply +``` +
+ +## Redwood + +The Redwood RGB-D datasets are available [here](http://redwood-data.org/indoor_lidar_rgbd/download.html). + +Download the "RGB-D sequence" and "Our camera poses" at the link above. + +Extract the data into a common folder. For example, for the apartment sequence the resultant folder structure looks like: +```bash +~/datasets/redwood/apartment +~/datasets/redwood/apartment/pose_apartment/... +~/datasets/redwood/apartment/rgbd_apartment/... +``` + +Now we run the reconstruction: +```bash +cd nvblox/build/executables +./fuse_redwood ~/datasets/redwood/apartment --voxel_size=0.02 --color_frame_subsampling=20 mesh.ply +``` +Note that this dataset is large (~30000 images) so the reconstruction can take a couple of minutes. + +View the reconstruction in Open3D: +```bash +Open3D mesh.ply +``` +
diff --git a/nvblox/evaluation/replica/evaluation_utils/__init__.py b/docs/root.md similarity index 100% rename from nvblox/evaluation/replica/evaluation_utils/__init__.py rename to docs/root.md diff --git a/docs/root.rst b/docs/root.rst deleted file mode 100644 index 50cba87b..00000000 --- a/docs/root.rst +++ /dev/null @@ -1,8 +0,0 @@ -======= -Table of Contents -======= - -.. toctree:: - :maxdepth: 1 - :glob: - root diff --git a/docs/rst/examples/core_example.rst b/docs/rst/examples/core_example.rst deleted file mode 100644 index 49eb237e..00000000 --- a/docs/rst/examples/core_example.rst +++ /dev/null @@ -1,63 +0,0 @@ -==================== -Core Library Example -==================== - -In this example we fuse data from the `3DMatch dataset `_. The commands to run the example are slightly different depending on if you've installed :ref:`natively ` or in a :ref:`docker container `. - -Core Library Example - Native -============================= - -In this example we fuse data from the `3DMatch dataset `_. First let's grab the dataset. Here I'm downloading it to my dataset folder ``~/dataset/3dmatch``. :: - - wget http://vision.princeton.edu/projects/2016/3DMatch/downloads/rgbd-datasets//datasets/3dmatch/sun3d-mit_76_studyroom-76-1studyroom2.zip -P ~/datasets/3dmatch - unzip ~/datasets/3dmatch//datasets/3dmatch/sun3d-mit_76_studyroom-76-1studyroom2.zip -d ~/datasets/3dmatch - -Navigate to and run the ``fuse_3dmatch`` binary. From the nvblox base folder run:: - - cd nvblox/build/experiments - ./fuse_3dmatch ~/datasets/3dmatch//datasets/3dmatch/sun3d-mit_76_studyroom-76-1studyroom2/ --esdf_frame_subsampling 3000 --mesh_output_path mesh.ply - -Once it's done we can view the output mesh using the Open3D viewer. :: - - pip3 install open3d - python3 ../../visualization/visualize_mesh.py mesh.ply - -you should see a mesh of a room: - -.. _example result: -.. figure:: ../../images/reconstruction_in_docker_trim.png - :align: center - - The result of running the core library example. - - - - -Core Library Example - Docker -============================= - -Now let's run the 3DMatch example inside the docker. Note there's some additional complexity in the ``docker run`` command such that we can forward X11 to the host (we're going to be viewing a reconstruction in a GUI). Run the container using:: - - xhost local:docker - docker run -it --net=host --env="DISPLAY" -v $HOME/.Xauthority:/root/.Xauthority:rw -v /tmp/.X11-unix:/tmp/.X11-unix:rw nvblox - -Let's download a dataset and run the example:: - - apt-get update - apt-get install unzip - wget http://vision.princeton.edu/projects/2016/3DMatch/downloads/rgbd-datasets/sun3d-mit_76_studyroom-76-1studyroom2.zip -P ~/datasets/3dmatch - unzip ~/datasets/3dmatch//datasets/3dmatch/sun3d-mit_76_studyroom-76-1studyroom2.zip -d ~/datasets/3dmatch - cd nvblox/nvblox/build/experiments/ - ./fuse_3dmatch ~/datasets/3dmatch//datasets/3dmatch/sun3d-mit_76_studyroom-76-1studyroom2/ --esdf_frame_subsampling 3000 --mesh_output_path mesh.ply - -Now let's visualize. From the same experiments folder run:: - - apt-get install python3-pip libgl1-mesa-glx - pip3 install open3d - python3 ../../visualization/visualize_mesh.py mesh.ply - -You should see the :ref:`image above `. - - - - diff --git a/docs/rst/examples/index.rst b/docs/rst/examples/index.rst deleted file mode 100644 index 4ceec22b..00000000 --- a/docs/rst/examples/index.rst +++ /dev/null @@ -1,9 +0,0 @@ -======== -Examples -======== - -.. 
toctree:: - :maxdepth: 1 - - core_example - ros_example diff --git a/docs/rst/examples/ros_example.rst b/docs/rst/examples/ros_example.rst deleted file mode 100644 index ff7a36dd..00000000 --- a/docs/rst/examples/ros_example.rst +++ /dev/null @@ -1,86 +0,0 @@ -============ -ROS2 Example -============ - -In this example, we will use nvblox to build a reconstruction from simulation data streamed from `Isaac Sim `_. Data will flow from the simulator to nvblox using ROS2 and the `isaac_ros_nvblox `_ interface. - -.. _example result: -.. figure:: ../../images/nvblox_navigation_trim.gif - :align: center - -There are two ways to run nvblox in this example: - -* Inside a Docker container -* In a ROS2 workspace installed directly on your machine - -This example treats running docker as the default choice. - -Example Description -=================== - -In this example, Isaac Sim will run natively on your machine and communicate with nvblox running inside a Docker container. Running in Isaac Sim is referred to as running on the *host* machine, differentiating it from running inside the *Docker*. If using the native setup, both will run on the host machine. - -Isaac Sim Setup (Host Machine) -============================== - -Follow the standard instructions to install `Isaac Sim `_ -on the host machine. - -As part of the set-up, make sure to install a local Nucleus server (Nucleus manages simulation assets such as maps and objects), following the instructions `here `_. Mounting the Isaac share will give you access to the latest Isaac Sim samples, which these instructions use. Please also use the `latest URL for the mount `_ (rather than what's listed in the linked tutorial):: - - Name: Isaac - Type: Amazon S3 - Host: d28dzv1nop4bat.cloudfront.net - Service: s3 - Redirection: https://d28dzv1nop4bat.cloudfront.net - -You will launch Isaac Sim from Python scripts that automate the setup of the robot and environment. Isaac Sim uses its own python binary, -which pulls in the modules that are dependencies. To run the Isaac Sim simulation launch scripts, you will use the Isaac Sim Python binary, -which is located at ``~/.local/share/ov/pkg/{YOUR_ISAAC_SIM_VERSION}/python.sh`` - -For convenience, you can create an alias to this Python binary in your ``~/.bashrc``. Using the Isaac Sim version ``isaac_sim-2021.2.1-release.1`` -as an example, add the following line to ``~/.bashrc``:: - - alias omni_python='~/.local/share/ov/pkg/isaac_sim-2021.2.1-release.1/python.sh' - -.. note:: - Ensure ``isaac_sim-2021.2.1-release.1`` is the name of the Isaac Sim version installed on your system :: - -Now ``source`` the ``.bashrc`` to have access to this alias. :: - - source ~/.bashrc - -Running the Simulation (on the Host) and the Reconstruction (in the Docker) -=========================================================================== - -For this example, you will need two terminals. In the first terminal, you will run Isaac Sim. - -**Terminal 1**: Start up Isaac Sim with the correct sensors on the host machine:: - - omni_python ~/workspaces/isaac_ros-dev/ros_ws/src/isaac_ros_nvblox/nvblox_isaac_sim/omniverse_scripts/carter_warehouse.py - -.. note:: - Ensure there is no ROS workspace sourced in this terminal. - -.. note:: - If Isaac Sim reports not finding a Nucleus server, follow the instructions `here `_ to download the required assets. 
- -**Terminal 2:** In another terminal, start the ``isaac_ros-dev`` Docker :: - - ~/workspaces/isaac_ros-dev/scripts/run_dev.sh - -Source the ``ros_ws`` :: - - source /workspaces/isaac_ros-dev/ros_ws/install/setup.bash - -Run nvblox and ROS2 Nav2:: - - ros2 launch nvblox_nav2 carter_sim.launch.py - -You should see the robot reconstructing a mesh, with a costmap overlaid on top. To give it a command, you can select "2D Goal Pose" -in the command window at the top and select a goal in the main window. You should then see the robot plan a green path toward the -goal and navigate there, both in rviz and in simulation. - -.. _example result: -.. figure:: ../../images/readme_nav2.gif - :align: center diff --git a/docs/rst/index.rst b/docs/rst/index.rst deleted file mode 100644 index 384ed59d..00000000 --- a/docs/rst/index.rst +++ /dev/null @@ -1,30 +0,0 @@ -======= -Introduction to nvblox -======= - -Nvblox is a package for building a 3D reconstruction of the environment around your robot from sensor observations in real-time. The reconstruction is intended to be used by path planners to generate collision-free paths. Under the hood, nvblox uses NVIDIA CUDA to accelerate this task to allow operation at real-time rates. This repository contains ROS2 integration for the nvblox core library. - -|pic1| |pic2| - -.. |pic1| image:: ./images/reconstruction_in_docker_trim.png - :width: 45% - -.. |pic2| image:: /images/nvblox_navigation_trim.gif - :width: 45% - -**Left**: nvblox used for reconstruction on a scan from the `Sun3D Dataset `_. -**Right**: the nvblox ROS2 wrapper used to construct a costmap for `ROS2 Nav2 `_ for navigating of a robot inside `Isaac Sim `_. - -Nvblox is composed of two packages - -* `nvblox Core Library `_ Contains the core C++/CUDA reconstruction library. -* `nvblox ROS2 Interface `_ Contains a ROS2 wrapper and integrations for simulation and path planning. Internally builds the core library. - - - - -.. .. figure:: ./images/reconstruction_in_docker_trim.png -.. :width: 50 % -.. :align: center - -.. nvblox used for reconstruction on a scan from the `Sun3D Dataset http://sun3d.cs.princeton.edu/`_ diff --git a/docs/rst/installation/core.rst b/docs/rst/installation/core.rst deleted file mode 100644 index b96abfd2..00000000 --- a/docs/rst/installation/core.rst +++ /dev/null @@ -1,119 +0,0 @@ -========================= -Core Library Installation -========================= - -There are two ways to install the nvblox core library: :ref:`natively ` on your system or inside a :ref:`docker container `. - -Native Installation -=================== - -If you want to build natively, please follow these instructions. Instructions for docker are :ref:`further below `. - -Install dependencies --------------------- - -We depend on: - -* gtest -* glog -* gflags -* CUDA 10.2 - 11.5 (others might work but are untested) -* Eigen (no need to explicitly install, a recent version is built into the library) - -Please run:: - - sudo apt-get install -y libgoogle-glog-dev libgtest-dev libgflags-dev python3-dev - cd /usr/src/googletest && sudo cmake . && sudo cmake --build . --target install - -Build and run tests -------------------- -Build and run with:: - - cd nvblox/nvblox - mkdir build - cd build - cmake .. && make && ctest - -All tests should pass. 
- -Now you can run :ref:`core library example ` - - -Docker Installation -=================== - -We have several dockerfiles, each of which layers on top of the preceding one for the following purposes: - -* **Docker.deps** - - - This sets up the environment and installs our dependencies. - - This is used in our CI, where the later steps (building and testing) are taken care of by Jenkins (and not docker). -* **Docker.build** - - - Layers on top of Docker.deps. - - This builds our package. - - This is where you get off the layer train if you wanna run stuff (and don't care if it's tested). - -* **Docker.test** - - - Layers on top of Docker.build. - - Runs ours tests. - - Useful for checking, on your machine, if things are likely to pass the tests in CI. - -Install NVIDIA Container Toolkit --------------------------------- - -We are reliant on nvidia docker. Install the `NVIDIA Container Toolkit `_ following the instructions on that website. - -We use the GPU during build, not only at run time. In the default configuration the GPU is only used at at runtime. One must therefore set the default runtime. Add `"default-runtime": "nvidia"` to `/etc/docker/daemon.json` such that it looks like:: - - { - "runtimes": { - "nvidia": { - "path": "/usr/bin/nvidia-container-runtime", - "runtimeArgs": [] - } - }, - "default-runtime": "nvidia" - } - -Restart docker:: - - sudo systemctl restart docker - -Build the Image ---------------- - -Now Let's build Dockerfile.deps docker image. This image install contains our dependencies. :: - - docker build -t nvblox_deps -f Dockerfile.deps . - -Now let's build the Dockerfile.build. This image layers on the last, and actually builds the nvblox library. :: - - docker build -t nvblox -f Dockerfile.build . - -Now you can run :ref:`core library example ` - - -Additional instructions for Jetson Xavier -========================================= - -These instructions are for a **native** build on the Jetson Xavier. A Docker based build is coming soon. - -The instructions for the native build above work, with one exception: - -We build using CMake's modern CUDA integration and therefore require a more modern version of CMAKE than (currently) ships with jetpack. Luckily the cmake developer team provide a means obtaining recent versions of CMake through apt. - -1. Obtain a copy of the signing key:: - - wget -qO - https://apt.kitware.com/keys/kitware-archive-latest.asc | - sudo apt-key add - - -2. Add the repository to your sources list:: - - sudo apt-add-repository 'deb https://apt.kitware.com/ubuntu/ bionic main' - sudo apt-get update - -3. Update:: - - sudo apt-get install cmake diff --git a/docs/rst/installation/index.rst b/docs/rst/installation/index.rst deleted file mode 100644 index 134e5753..00000000 --- a/docs/rst/installation/index.rst +++ /dev/null @@ -1,12 +0,0 @@ -============ -Installation -============ - -There are two catagories of installation: - - -.. toctree:: - :maxdepth: 1 - - core - ros diff --git a/docs/rst/installation/ros.rst b/docs/rst/installation/ros.rst deleted file mode 100644 index 27c572fa..00000000 --- a/docs/rst/installation/ros.rst +++ /dev/null @@ -1,165 +0,0 @@ -================= -ROS2 Installation -================= - -If you want to use nvblox for navigation, or out-of-the-box in a robotic system the best way to do that is to use our `ROS2 wrappers `_. There's no need to install the core library if installing this way, ROS2 downloads and builds the core library before bundling it into the wrapper. 
- -Below is an example of the nvblox being used with ROS2 Nav2 for real-time reconstruction and navigation in Isaac Sim. - -.. _example navigation: -.. figure:: ../../images/nvblox_navigation_trim.gif - :align: center - -Packages in this repository -=========================== - -+------------------------------------+---------------+------------------------------------------------------+ -| nvblox ROS2 package | Description | -+====================================+===============+======================================================+ -| isaac_ros_nvblox | A meta-package. (Just a build target that builds a nvblox_ros and | -| | it's dependencies) | -+------------------------------------+---------------+------------------------------------------------------+ -| nvblox_isaac_sim | Contains scripts for launching Isaac Sim configured for use with | -| | nvblox. | -+------------------------------------+---------------+------------------------------------------------------+ -| nvblox_msgs | Custom messages for transmitting the output distance map slice and | -| | mesh over ROS2. | -+------------------------------------+---------------+------------------------------------------------------+ -| nvblox_nav2 | Contains a custom plugin that allows ROS2 Nav2 to consume nvblox | -| | distance map outputs, as well as launch files for launching a | -| | navigation solution for use in simulation. | -+------------------------------------+---------------+------------------------------------------------------+ -| nvblox_ros | The ROS2 wrapper for the core reconstruction library and the nvblox | -| | node. | -+------------------------------------+---------------+------------------------------------------------------+ -| nvblox_rviz_plugin | A plugin for displaying nvblox's (custom) mesh type in RVIZ. | -+------------------------------------+---------------+------------------------------------------------------+ -| [submodule] nvblox | The core (ROS independent) reconstruction library. | -+------------------------------------+---------------+------------------------------------------------------+ - -System Requirements -=================== -This Isaac ROS package is designed and tested to be compatible with ROS2 Foxy on x86 and Jetson hardware. - - -Jetson ------- -- `Jetson AGX Xavier or Xavier NX `_ -- `JetPack 4.6.1 `_ - -x86_64 ------- -- Ubuntu 20.04+ -- CUDA 11.4+ supported discrete GPU with 2+ GB of VRAM - -.. note:: - If running `Isaac Sim `_, more VRAM will be required to store the simulated world. - -.. note:: - For best performance on Jetson, ensure that power settings are configured appropriately (`Power Management for Jetson `_). - - -Installation Options -==================== - -There are two ways to build the ROS2 interface. Either :ref:`natively ` or inside a :ref:`Docker container `. Note that because precompiled ROS2 Foxy packages are not available for JetPack 4.6.1 (it's based on Ubuntu 18.04 Bionic), we recommend following the docker-based instructions if building on Jetson. - -Native Installation -=================== - -First, follow the instal the dependencies of the the core library:: - - sudo apt-get install -y libgoogle-glog-dev libgtest-dev libgflags-dev python3-dev - cd /usr/src/googletest && sudo cmake . && sudo cmake --build . --target install - -Additionally, you need `CUDA `_ version 10.2 - 11.5 installed. 
To make sure Linux finds CUDA on your machine, make sure something like the following is present in your `~/.bashrc`:: - - export PATH=/usr/local/cuda/bin${PATH:+:${PATH}} - export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/lib:${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} - -Install ROS2 foxy using the `Debian instructions `_. - -.. caution:: - Sourcing ROS2 in your workspace automatically (i.e., in your ``.bashrc``) will cause Isaac Sim to break. We recommend creating an alias for sourcing your ROS2 workspace instead. - -To create an alias:: - - alias source_ros2="source /opt/ros/foxy/setup.bash;source ~/ros_ws/install/local_setup.bash" - -Check out the nvblox repo to a path like ``~/ros_ws/src``:: - - mkdir -p ~/ros_ws/src - git clone --recurse-submodules https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox - -Then, build the entire workspace:: - - cd ~/ros_ws/ && colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release - -Again, we recommend creating an alias for the build command:: - - alias cn="sh -c 'cd ~/ros_ws/ && colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release'" - -Now that nvblox is installed you can run the :ref:`navigation example `. - - - -Docker Installation -=================== - -A docker based build can be used on both x86 and Jetson platforms. However, there is a particular impetus to consider it for building on Jetson platforms. - -JetPack 4.6.1, which currently ships with Jetson, is based on Ubuntu 18.04, and nvblox requires ROS2 Foxy, which is targeted at Ubuntu 20.04. Therefore, to use nvblox on jetson you have two options: - -* manually compile ROS2 Foxy and required dependent packages from source -* or use the Isaac ROS development Docker image from `Isaac ROS Common `_. - -We recommend the second option. - -Nvidia Container Toolkit Setup ------------------------------- - -The Jetson issue aside, to use the Isaac ROS development Docker image, you must first install the `Nvidia Container Toolkit `__ to make use of the Docker container development/runtime environment. - -Configure ``nvidia-container-runtime`` as the default runtime for Docker by editing ``/etc/docker/daemon.json`` to include the following:: - - "runtimes": { - "nvidia": { - "path": "nvidia-container-runtime", - "runtimeArgs": [] - } - }, - "default-runtime": "nvidia" - -Then restart Docker: ``sudo systemctl daemon-reload && sudo systemctl restart docker`` - - -Isaac ROS Docker Setup ----------------------- - -Clone the ``isaac_ros_common`` repo into a folder on your system at ``~/workspaces/isaac_ros-dev/ros_ws/src``:: - - mkdir -p ~/workspaces/isaac_ros-dev/ros_ws/src - cd ~/workspaces/isaac_ros-dev/ros_ws/src - git clone --recurse-submodules https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git - -Clone the nvblox into ``~/workspaces/isaac_ros-dev/ros_ws/src``. This folder will be mapped by the docker container as a ROS workspace. 
:: - - cd ~/workspaces/isaac_ros-dev/ros_ws/src - git clone --recurse-submodules https://gitlab-master.nvidia.com/isaac_ros/isaac_ros_nvblox.git - -Start the Docker instance by running the start script:: - - ~/workspaces/isaac_ros-dev/ros_ws/src/isaac_ros_common/scripts/run_dev.sh - -Install the dependencies for your ROS workspace:: - - cd /workspaces/isaac_ros-dev/ros_ws - rosdep install -i -r --from-paths src --rosdistro foxy -y --skip-keys "libopencv-dev libopencv-contrib-dev libopencv-imgproc-dev python-opencv python3-opencv" - -To build the code, first navigate to ``/workspaces/isaac_ros-dev/ros_ws`` inside the Docker container, then use the following command:: - - colcon build --packages-up-to nvblox_nav2 nvblox_ros nvblox_msgs nvblox_rviz_plugin - -The build should pass. - -Now that nvblox is installed you can run the :ref:`navigation example `. diff --git a/docs/rst/integrators.rst b/docs/rst/integrators.rst deleted file mode 100644 index de3b60c8..00000000 --- a/docs/rst/integrators.rst +++ /dev/null @@ -1,21 +0,0 @@ -=========== -Integrators -=========== - -An integrator is, very generally, a class that modifies the content of a layers. - -The integrators currently offered can be split into two types: - -* Those which fuse incoming sensor data into a layer. For example, the :ref:`TsdfIntegrator ` and :ref:`ColorIntegrator ` fused depth images and color images into ``TsdfLayer`` and ``ColorLayer`` respectively. -* Those which transform the contents of one layer to update the data in another layer. For example the :ref:`EsdfIntegrator ` transforms an TSDF in a ``TsdfLayer`` into an ESDF in a ``EsdfLayer``. - -API -=== - -The API for the currently available integrators are here: - -* :ref:`TsdfIntegrator ` -* :ref:`ColorIntegrator ` -* :ref:`EsdfIntegrator ` -* :ref:`MeshIntegrator ` - diff --git a/docs/rst/map.rst b/docs/rst/map.rst deleted file mode 100644 index bcc12d90..00000000 --- a/docs/rst/map.rst +++ /dev/null @@ -1,61 +0,0 @@ -=== -Map -=== - -We implement a hierarchical sparse voxel grid for storing data. At the top level we have the ``LayerCake``, which contains several layers, each of which contains a different type of mapped quantity (eg TSDF and ESDF). A layer is a collection of sparsely allocated blocks. Each block is in charge of mapping a cubular small region of space. Most blocks are composed of many voxels, each of which captures a single value of the mapped quantity (eg the TSDF). - -.. image:: ../images/map_structure.png - :align: center - -The API for the various classes implementing a map nvblox map: - -Voxels -====== - -* :ref:`TsdfVoxel ` -* :ref:`EsdfVoxel ` -* :ref:`ColorVoxel ` - -Blocks -====== - -A template for a block containing voxels. - -* :ref:`VoxelBlock