This is the released code for "Single-view 3D Mesh Reconstruction for Seen and Unseen Categories", published in IEEE Transactions on Image Processing (TIP).
- Upload the data processing code.
- Upload the dataset and update the download link.
- Clean up the code.
The code has been tested on Ubuntu 20.04 with an NVIDIA RTX 3090 Ti. The environment includes:
- Python=3.9
- Pytorch=1.12.1
- Torchvision=0.13.1
- CUDA=11.6
Please clone the repository and navigate into it in your terminal; all subsequent commands are assumed to run from the repository root.
The environment.yaml file contains all necessary Python dependencies for the project. You can create the Anaconda environment using:
conda env create -f environment.yaml
conda activate GenMesh
There are additional dependencies to install:
- To train and test the model, the Chamfer distance library is needed.
- To test the model, the Earth Mover's distance library is also needed.
The installation commands have been collected in create_env.sh. You can install the dependencies by running the script:
# install pointnet++
sudo apt-get install ninja-build
pip install .
# install Chamfer Distance
pip install git+'https://github.com/otaheri/chamfer_distance'
# install Earth Mover Distance
cd PyTorchEMD
python setup.py install
Or you can install them step by step yourself.
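For reference, the Chamfer distance computed by the compiled extension above can be sketched in plain NumPy. This is only an illustrative reimplementation of the standard metric, not the code path used for training; the installed `chamfer_distance` package runs on the GPU and should be used in practice.

```python
import numpy as np

def chamfer_distance(p1, p2):
    """Symmetric Chamfer distance between point sets p1 (N, 3) and p2 (M, 3).

    Reference sketch: for each point, find the squared distance to its
    nearest neighbour in the other set, then average both directions.
    """
    # Pairwise squared Euclidean distances via broadcasting, shape (N, M)
    diff = p1[:, None, :] - p2[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    # Nearest-neighbour terms in both directions
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

The CUDA extension exists because this O(N·M) distance matrix is the bottleneck at training time; the formula itself is exactly the one above.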
Download the ShapeNet dataset and the corresponding renderings from 3D-R2N2. We provide our renderings and split file here. Run the preprocessing script in the preprossing/
folder to normalize the meshes. We also provide the processed files here.
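Mesh normalization typically means centering the vertices and rescaling them to a canonical extent. The snippet below is a minimal sketch of one common convention (unit sphere, bounding-box center); the actual preprocessing script may use a different one (e.g. unit cube), so treat the details as assumptions.

```python
import numpy as np

def normalize_mesh(vertices):
    """Center vertices at the origin and scale them to fit the unit sphere.

    Sketch of a typical normalization step; conventions (center choice,
    target extent) vary between codebases.
    """
    v = np.asarray(vertices, dtype=np.float64)
    # Center on the bounding-box midpoint
    center = (v.max(axis=0) + v.min(axis=0)) / 2.0
    v = v - center
    # Scale so the farthest vertex lies on the unit sphere
    scale = np.linalg.norm(v, axis=1).max()
    return v / scale
```

Only the vertex positions change; the face indices of the mesh stay as they are.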
To visualize the generated meshes, make sure you launch the Visdom server before training:
python -m visdom.server
You can run it in the background or in a separate window. If you don't need it, comment out the visualization lines in the code. To train the model, you can change the parameters in the configs and run:
bash train.sh
In the log/ folder you can find an experiment folder containing the model checkpoints. You can monitor the training and visualization using Visdom at http://localhost:8077.
Please specify the desired model before running.
To test results after generation:
python test.py
Please specify the desired experiment name before running.
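Testing reports the Earth Mover's distance installed above. As a reference for what that metric measures, here is a brute-force sketch: the minimum average transport cost over all one-to-one matchings between two equal-size point sets. This is illustrative only (factorial cost, tiny sets), whereas the PyTorchEMD extension computes it efficiently on the GPU.

```python
from itertools import permutations
import numpy as np

def earth_mover_distance(p1, p2):
    """Exact EMD between two equal-size point sets via optimal matching.

    Brute-force over all permutations: feasible only for a handful of
    points, but it defines the quantity the GPU extension approximates.
    """
    assert len(p1) == len(p2), "EMD here assumes equal-size point sets"
    p1 = np.asarray(p1, dtype=np.float64)
    p2 = np.asarray(p2, dtype=np.float64)
    best = float("inf")
    for perm in permutations(range(len(p2))):
        # Average Euclidean cost of this one-to-one assignment
        cost = sum(np.linalg.norm(p1[i] - p2[j])
                   for i, j in enumerate(perm)) / len(p1)
        best = min(best, cost)
    return best
```

Unlike the Chamfer distance, EMD enforces a bijection between the sets, so it penalizes uneven point density that nearest-neighbour matching would miss.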
For questions and comments, please open an issue or contact Xianghui Yang via email: [email protected].
Part of the renderings (13 of the 16 categories) are downloaded from 3D-R2N2. Thanks for releasing them.
The point encoder is PointNet++, as implemented by Michael Danielczuk. Thanks for his implementation.
If you find our work useful, please cite our paper. 😄
@ARTICLE{10138738,
author={Yang, Xianghui and Lin, Guosheng and Zhou, Luping},
journal={IEEE Transactions on Image Processing},
title={Single-view 3D Mesh Reconstruction for Seen and Unseen Categories},
year={2023},
volume={},
number={},
pages={1-1},
doi={10.1109/TIP.2023.3279661}}