
[RA-L'25] A Simple LiDAR-centric End-to-end Navigation Framework in Dynamic Environments



Important

🌟 Stay up to date at arclab.hku.hk!

🦾 P2M

📄 Paper | 🎥 Demo | 🚀 Project Page (Coming Soon)

โœ’๏ธ Bowen Xu, Zexuan Yan, M. Lu, X. Fan, Y. Luo, Y. Lin, Z. Chen, Y. Chen, Q. Qiao, P. Lu
๐Ÿ“ง Primary Contact: Bowen Xu (link.bowenxu@connect.hku.hk) and Zexuan Yan (ryan2002@connect.hku.hk)

🔥 Highlights

  • An efficient LiDAR representation that combines depth and environment-change sensing.
  • A generalizable reinforcement-learning (RL) training scheme for dynamic obstacle avoidance.
  • A lightweight end-to-end autonomous flight system, from point to motion.
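The first highlight can be pictured with a minimal sketch of a spherical range-image encoding. All numbers here (resolution, field of view, max range) are illustrative assumptions, and the actual representation in the paper additionally fuses environment-change cues:

```python
import math

def points_to_depth_image(points, h=16, w=64, max_range=20.0):
    """Project 3D LiDAR points into an h x w spherical depth image.

    Each cell keeps the nearest return (min range); empty cells hold
    max_range. The +/-15 degree elevation FoV is an illustrative guess.
    """
    img = [[max_range] * w for _ in range(h)]
    fov = math.radians(15.0)
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        if r == 0.0 or r > max_range:
            continue  # drop invalid or out-of-range returns
        az = math.atan2(y, x)   # azimuth in [-pi, pi]
        el = math.asin(z / r)   # elevation angle
        col = min(int((az + math.pi) / (2 * math.pi) * w), w - 1)
        row = int((el + fov) / (2 * fov) * h)
        if 0 <= row < h and img[row][col] > r:
            img[row][col] = r   # keep the nearest return per cell
    return img

# Two synthetic returns: 5 m straight ahead, 3 m to the left.
img = points_to_depth_image([(5.0, 0.0, 0.0), (0.0, 3.0, 0.0)])
```

Because each cell stores the nearest return, differencing consecutive frames of such an image is one simple way a change signal for moving obstacles could be derived.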


🎥 Demo

Real-world robot experiments (1x speed)
forest.mp4
pedestrian.mp4

📢 News

  • [2025/12] The code of P2M v1.0 is released. Please check it out!

🤗 Model Zoo

| Model Name  | Backbone | Path                             | Note                                |
|-------------|----------|----------------------------------|-------------------------------------|
| p2m-default | CNN+MLP  | Release/p2m_model/p2m_default.pt | The P2M model trained from scratch. |

🎮 Getting Started

0️⃣ We use conda to manage the environment

# Create the conda environment
conda create -n p2m python=3.10
conda activate p2m
cp -r conda_setup/etc $CONDA_PREFIX

# Re-activate the environment so the copied activation scripts take effect
conda activate p2m
conda config --add channels conda-forge
conda config --set channel_priority strict

1๏ธโƒฃ Install Isaac Sim and Isaac Lab

Note

Please refer to this doc for an example of how to install Isaac Sim and Isaac Lab.

2๏ธโƒฃ Install dependencies

# Install the project in editable mode at the project root
pip install --upgrade pip
pip install -e .

# Install additional dependencies
pip install setproctitle huggingface_hub
pip install usd-core==23.11 lxml==4.9.4 tqdm xxhash
pip install torch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cu118
pip install tensordict==0.3.2 --no-deps 

3๏ธโƒฃ Build the repository

catkin build

4๏ธโƒฃ Download the pretrained models

  • p2m_default.pt: The pretrained p2m policy model, for testing only. Put it into ./models.
  • neuflow_mixed.pth: The pretrained flow estimation model, for training and testing. Put it into ./resources/NeuFlow_v2.
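Before training or testing, it may help to verify that both checkpoints are where the scripts expect them. A small sketch (the paths come from the list above; the repo-root parameter is only for illustration):

```python
from pathlib import Path

# Expected checkpoint locations relative to the repository root (from the README).
EXPECTED = [
    "models/p2m_default.pt",
    "resources/NeuFlow_v2/neuflow_mixed.pth",
]

def missing_checkpoints(repo_root="."):
    """Return the expected checkpoint files that are absent under repo_root."""
    root = Path(repo_root)
    return [rel for rel in EXPECTED if not (root / rel).is_file()]

if __name__ == "__main__":
    for rel in missing_checkpoints():
        print(f"missing: {rel}")
```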

🔥 Training Recipe

0️⃣ Train the P2M policy

cd scripts
python train.py

1๏ธโƒฃ Adjust the training parameters

Fill in your wandb infomation in ./scripts/config/train.yaml.

wandb:
  entity: # your workspace name
  project: # your project name
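A quick sanity check that these fields are actually filled in could look like this (a sketch operating on the already-parsed config as a plain dict; field names are taken from the snippet above):

```python
def check_wandb_config(cfg):
    """Return the names of wandb fields that are missing or empty."""
    wandb_cfg = cfg.get("wandb") or {}
    return [key for key in ("entity", "project") if not wandb_cfg.get(key)]

# Example: entity filled in, project still blank.
print(check_wandb_config({"wandb": {"entity": "my-team", "project": ""}}))  # -> ['project']
```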

Note

More parameters can be adjusted in ./scripts/config/train.yaml (training) and ./cfg/task/train_env.yaml (environment).

🚀 Testing Guide

0️⃣ Test the P2M policy

# Terminal 1
source devel/setup.bash
roslaunch map_generator sim_test.launch
# Terminal 2
source devel/setup.bash
roslaunch lidar scanner.launch
# Terminal 3
conda activate p2m
cd scripts
python infer.py

Use the 2D Nav Goal to trigger the flight!

1๏ธโƒฃ Adjust the testing environment

To change the obstacle number and density in ./src/uav_simulator/map_generator/launch/sim_test.launch

<!-- static obstacle number -->
<param name="map/obs_num" value=""/>
<!-- dynamic obstacle number --> 
<param name="map/moving_obs_num" value=""/>
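If you prefer to script this, the two values can be set with stdlib XML tools. A sketch (param names are from the snippet above; note that `xml.etree` drops XML comments when re-serializing, so treat this as illustrative rather than a drop-in file rewriter):

```python
import xml.etree.ElementTree as ET

def set_obstacle_counts(launch_xml, static_n, moving_n):
    """Set map/obs_num and map/moving_obs_num in a roslaunch XML string."""
    root = ET.fromstring(launch_xml)
    targets = {"map/obs_num": str(static_n), "map/moving_obs_num": str(moving_n)}
    for param in root.iter("param"):
        name = param.get("name")
        if name in targets:
            param.set("value", targets[name])  # overwrite the empty value
    return ET.tostring(root, encoding="unicode")

launch = """<launch>
  <param name="map/obs_num" value=""/>
  <param name="map/moving_obs_num" value=""/>
</launch>"""
print(set_obstacle_counts(launch, 30, 10))
```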

Testing in different obstacle densities (1x speed)
low_density.mp4
high_density.mp4

Note

More parameters can be adjusted in ./scripts/infer.py (checkpoint and goal) and ./src/uav_simulator/map_generator/launch/sim_test.launch (environment).

๐Ÿ“ Citation

If you find our code or models useful in your work, please cite our paper:

@ARTICLE{xu2025flow,
  title={Flow-Aided Flight Through Dynamic Clutters From Point to Motion},
  author={Xu, Bowen and Yan, Zexuan and Lu, Minghao and Fan, Xiyu and Luo, Yi and Lin, Youshen and Chen, Zhiqiang and Chen, Yeke and Qiao, Qiyuan and Lu, Peng},
  journal={IEEE Robotics and Automation Letters}, 
  year={2025},
  publisher={IEEE}
}

🤓 Acknowledgments

We would like to express our gratitude to the following projects, which have provided significant support and inspiration for our work:

  • OmniDrones: Underlying training toolchain for reinforcement learning on multicopters.
  • NeuFlow v2: Robust and efficient neural optical flow estimation on edge devices.
