
TransPixeler: Advancing Text-to-Video Generation with Transparency



Luozhou Wang*, Yijun Li**, Zhifei Chen, Jui-Hsien Wang, Zhifei Zhang, He Zhang, Zhe Lin, Yingcong Chen†

HKUST(GZ), HKUST, Adobe Research.

* Internship Project. ** Project Leader. † Corresponding Author.

Text-to-video generative models have made significant strides, enabling diverse applications in entertainment, advertising, and education. However, generating RGBA video, which includes alpha channels for transparency, remains a challenge due to limited datasets and the difficulty of adapting existing models. Alpha channels are crucial for visual effects (VFX), allowing transparent elements like smoke and reflections to blend seamlessly into scenes. We introduce TransPixeler, a method that extends pretrained video models to RGBA generation while retaining the original RGB capabilities. TransPixeler builds on a diffusion transformer (DiT) architecture, incorporating alpha-specific tokens and using LoRA-based fine-tuning to jointly generate RGB and alpha channels with high consistency. By optimizing the attention mechanism, TransPixeler preserves the strengths of the original RGB model and achieves strong alignment between the RGB and alpha channels despite limited training data. Our approach generates diverse and consistent RGBA videos, advancing the possibilities for VFX and interactive content creation.
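
To make the core idea concrete, here is a minimal PyTorch sketch (not the authors' code; module and parameter names are hypothetical) of joint RGB + alpha generation in a DiT-style attention layer: the RGB token sequence is duplicated, the copy is shifted by a learnable alpha-domain embedding, and both halves attend to each other in a single pass, which is what keeps the two channels aligned.

```python
# Illustrative sketch only, under the assumptions stated above.
import torch
import torch.nn as nn

class JointRGBASelfAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        # Learnable offset marking tokens as belonging to the alpha domain (assumption).
        self.alpha_embed = nn.Parameter(torch.zeros(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, rgb_tokens):                      # (batch, N, dim) RGB tokens
        alpha_tokens = rgb_tokens + self.alpha_embed    # initialize alpha tokens from RGB
        x = torch.cat([rgb_tokens, alpha_tokens], 1)    # joint sequence of 2N tokens
        out, _ = self.attn(x, x, x)                     # RGB and alpha attend to each other
        return out.chunk(2, dim=1)                      # (rgb_out, alpha_out)

tokens = torch.randn(1, 16, 64)
rgb_out, alpha_out = JointRGBASelfAttention()(tokens)
```

In the actual method, LoRA adapters on the attention projections carry the trained weights, so the pretrained RGB pathway stays intact.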

📰 News

  • [2025.01.19] We've renamed our project from TransPixar to TransPixeler!
  • [2025.01.17] We’ve created a Discord group and a WeChat group! Everyone is welcome to join for discussions and collaborations. Let’s work together to make the repository even better!
  • [2025.01.14] Our repository has been receiving significant attention recently, and we’re thrilled by the interest in TransPixar! Many users have requested deployments on new video models, including Hunyuan and LTX, as well as support for ComfyUI. We’ve added these to our to-do list and are eager to make progress. However, training TransPixar LoRA for different video models requires substantial resources and time, so we kindly ask for your patience. Stay tuned for updates! Additionally, we warmly welcome contributions to this repository—your support makes a difference!
  • [2025.01.07] We have released the project page, arXiv paper, inference code, and Hugging Face demo for TransPixar + CogVideoX-5B.

🚧 Todo List

  • Release code, paper and demo
  • Release checkpoints for joint generation (RGB + Alpha)
  • Release checkpoints for Mochi and CogVideoX-I2V
  • Provide support for ComfyUI
  • Deploy TransPixar on Hunyuan and LTX video models

Contents

  • Installation
  • TransPixeler LoRA Hub
  • Training - RGB + Alpha Joint Generation
  • Inference - Gradio Demo
  • Inference - Command Line Interface (CLI)
  • Acknowledgement
  • Citation

Installation

conda create -n TransPixar python=3.10
conda activate TransPixar
pip install -r requirements.txt
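
Assuming requirements.txt installs PyTorch with CUDA support (an assumption; check the file for the exact pins), you can sanity-check the environment before running inference:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"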

TransPixeler LoRA Hub

Our pipeline supports a range of video tasks, including Text-to-RGBA and Image-to-RGBA video generation.

We provide the following pre-trained LoRA weights for different tasks:

| Task | Base Model | Frames | LoRA Weights | Inference VRAM |
| --- | --- | --- | --- | --- |
| T2V + RGBA | genmo/mochi-1-preview | 37 | Coming soon | TBD |
| T2V + RGBA | THUDM/CogVideoX-5B | 49 | link | ~24 GB |
| I2V + RGBA | THUDM/CogVideoX-5b-I2V | 49 | Coming soon | TBD |
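
As a rough illustration of how the CogVideoX-5B LoRA could be loaded, here is a hedged sketch using stock diffusers. The repo's own cli.py is the supported path and additionally handles the RGBA split, so treat this as an assumption-laden outline rather than the project's API; the LoRA path is a hypothetical placeholder.

```python
# Sketch only: loads the LoRA into a plain CogVideoX pipeline (no RGBA decoding).
import torch
from diffusers import CogVideoXPipeline

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("/path/to/lora")  # hypothetical local path; see the table above

video = pipe(prompt="smoke drifting across a dark background", num_frames=49).frames[0]
```

The 49-frame setting matches the frame count listed in the table above for the CogVideoX-based weights.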

Training - RGB + Alpha Joint Generation

We have open-sourced the training code for Mochi on RGBA joint generation. Please refer to the Mochi README for details.

Inference - Gradio Demo

In addition to the Hugging Face online demo, users can launch a local inference demo based on CogVideoX-5B by running the following command:

python app.py

Inference - Command Line Interface (CLI)

To generate RGBA videos, navigate to the corresponding directory for the video model and execute the following command:

python cli.py \
    --lora_path /path/to/lora \
    --prompt "..."

Acknowledgement

  • finetrainers: We followed their implementation of Mochi training and inference.
  • CogVideoX: We followed their implementation of CogVideoX training and inference.

We are grateful for their exceptional work and generous contribution to the open-source community.

Citation

@misc{wang2025transpixar,
     title={TransPixar: Advancing Text-to-Video Generation with Transparency}, 
     author={Luozhou Wang and Yijun Li and Zhifei Chen and Jui-Hsien Wang and Zhifei Zhang and He Zhang and Zhe Lin and Yingcong Chen},
     year={2025},
     eprint={2501.03006},
     archivePrefix={arXiv},
     primaryClass={cs.CV},
     url={https://arxiv.org/abs/2501.03006}, 
}
