> [!IMPORTANT]
> Stay up to date at arclab.hku.hk!
Bowen Xu, Zexuan Yan, M. Lu, X. Fan, Y. Luo, Y. Lin, Z. Chen, Y. Chen, Q. Qiao, P. Lu

Primary Contact: Bowen Xu (link.bowenxu@connect.hku.hk) and Zexuan Yan (ryan2002@connect.hku.hk)
- An efficient LiDAR representation combining the depth and environment change sensing.
- A generalizable RL training for dynamic obstacle avoidance.
- A lightweight end-to-end autonomous flight system from point to motion.
- Demo
- News
- Model Zoo
- Getting Started
- Training Recipe
- Testing Guide
- Citation
- Acknowledgments
| Real-world robot experiments (1x speed) | |
|---|---|
| forest.mp4 | pedestrian.mp4 |
- [2025/12] The code of P2M v1.0 is released. Please check it out!
| Model Name | Backbone | Path | Note |
|---|---|---|---|
| p2m-default | CNN+MLP | Release/p2m_model/p2m_default.pt | The p2m model trained from scratch. |
```shell
# Create the conda environment
conda create -n p2m python=3.10
conda activate p2m
cp -r conda_setup/etc $CONDA_PREFIX
# Re-activate the conda environment so the copied settings take effect
conda activate p2m
conda config --add channels conda-forge
conda config --set channel_priority strict
```

> [!NOTE]
> Please refer to this doc for an example of how to install Isaac Sim and Isaac Lab.
```shell
# Install the project in editable mode at the project root
pip install --upgrade pip
pip install -e .
# Install additional dependencies
pip install setproctitle huggingface_hub
pip install usd-core==23.11 lxml==4.9.4 tqdm xxhash
pip install torch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cu118
pip install tensordict==0.3.2 --no-deps
```

```shell
# Build the ROS workspace
catkin build
```

- p2m_default.pt: The pretrained p2m policy model, for testing only. Put it into ./models.
- neuflow_mixed.pth: The pretrained flow estimation model, for training and testing. Put it into ./resources/NeuFlow_v2.
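The two placement steps above can be scripted. A minimal Python sketch (directory names are from this README; it assumes both checkpoints were already downloaded to the project root):

```python
# Move the downloaded checkpoints to where the scripts expect them.
# Assumes p2m_default.pt and neuflow_mixed.pth sit in the current
# directory (the project root); destination paths are from the README.
import os
import shutil

dests = {
    "p2m_default.pt": "models",
    "neuflow_mixed.pth": os.path.join("resources", "NeuFlow_v2"),
}
for filename, folder in dests.items():
    os.makedirs(folder, exist_ok=True)  # create the target folder if missing
    if os.path.exists(filename):
        shutil.move(filename, os.path.join(folder, filename))
```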
```shell
cd scripts
python train.py
```

Fill in your wandb information in ./scripts/config/train.yaml.
```yaml
wandb:
  entity: # your workspace name
  project: # your project name
```

> [!NOTE]
> More parameters can be adjusted in ./scripts/config/train.yaml (training) and ./cfg/task/train_env.yaml (environment).
```shell
# Terminal 1
source devel/setup.bash
roslaunch map_generator sim_test.launch
```

```shell
# Terminal 2
source devel/setup.bash
roslaunch lidar scanner.launch
```

```shell
# Terminal 3
conda activate p2m
cd scripts
python infer.py
```

Use the 2D Nav Goal to trigger the flight!
To change the obstacle number and density, edit ./src/uav_simulator/map_generator/launch/sim_test.launch:
```xml
<!-- static obstacle number -->
<param name="map/obs_num" value=""/>
<!-- dynamic obstacle number -->
<param name="map/moving_obs_num" value=""/>
```

| Testing in different obstacle densities (1x speed) | |
|---|---|
| low_density.mp4 | high_density.mp4 |
> [!NOTE]
> More parameters can be adjusted in ./scripts/infer.py (checkpoint and goal) and ./src/uav_simulator/map_generator/launch/sim_test.launch (environment).
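For scripted sweeps over obstacle density, the two parameters above can also be set programmatically. A hedged sketch using Python's standard xml.etree on an inline copy of the relevant launch snippet (parameter names are from sim_test.launch; the values 120 and 30 are illustrative only):

```python
# Set the obstacle-count parameters in a launch-file fragment.
# Parameter names come from sim_test.launch; 120 and 30 are example values.
import xml.etree.ElementTree as ET

launch = """<launch>
  <param name="map/obs_num" value=""/>
  <param name="map/moving_obs_num" value=""/>
</launch>"""

root = ET.fromstring(launch)
values = {"map/obs_num": "120", "map/moving_obs_num": "30"}
for param in root.iter("param"):
    name = param.get("name")
    if name in values:
        param.set("value", values[name])

# Emit the modified XML; write it back to the launch file in a real sweep.
print(ET.tostring(root, encoding="unicode"))
```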
If you find our code or models useful in your work, please cite our paper:
```bibtex
@ARTICLE{xu2025flow,
  title={Flow-Aided Flight Through Dynamic Clutters From Point to Motion},
  author={Xu, Bowen and Yan, Zexuan and Lu, Minghao and Fan, Xiyu and Luo, Yi and Lin, Youshen and Chen, Zhiqiang and Chen, Yeke and Qiao, Qiyuan and Lu, Peng},
  journal={IEEE Robotics and Automation Letters},
  year={2025},
  publisher={IEEE}
}
```
We would like to express our gratitude to the following projects, which have provided significant support and inspiration for our work:
- OmniDrones: Underlying training toolchain for reinforcement learning on multicopters.
- NeuFlow v2: Robust and efficient neural optical flow estimation on edge devices.
