LaVi Lab
- 32 followers
- Hong Kong
Popular repositories
- Video-3D-LLM (Public): [CVPR 2025] Code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding"
- NaviLLM (Public, forked from zd11024/NaviLLM): [CVPR 2024] Code for the paper "Towards Learning a Generalist Model for Embodied Navigation"
- Visual-Table (Public): [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models"
Repositories
- LaVi-Lab.github.io (Public)
- AIM (Public): [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning"
- Video-3D-LLM (Public): [CVPR 2025] Code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding"
- Visual-Table (Public): [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models"
- TG-Vid (Public): [EMNLP 2024] Official code for "Enhancing Temporal Modeling of Video LLMs via Time Gating"
People
This organization has no public members.