
Memba: Membrane-Driven Parameter-Efficient Fine-Tuning for Mamba

Published at ICLR 2026 🎉 [Paper]

Overview

State Space Models (SSMs) like Mamba have emerged as powerful alternatives to Transformers, but existing PEFT methods simply transfer Transformer-tailored approaches without addressing Mamba's unique temporal processing dynamics.

Memba introduces a membrane-driven PEFT approach specifically designed for Mamba, with three key components:

  1. Leaky Integrate Membrane (LIM) — Bio-inspired gating mechanism that accumulates membrane potentials over time on Mamba's gate branch, enhancing selective information retention
  2. Strategic LoRA Placement — Low-Rank Adaptations on input and output projections
  3. Cross-Layer Membrane Transfer — Averaged membrane states propagated across layers for temporal coherence
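The interplay of components (1) and (3) can be sketched as follows. This is an illustrative NumPy toy, not the released implementation: the leak factor `beta`, the sigmoid gate, and the time-averaged transfer state are assumptions made for exposition.

```python
import numpy as np

def leaky_integrate_membrane(x, beta=0.9, m0=None):
    """Toy LIM gate: leaky accumulation of a membrane potential over time.

    x    : (seq_len, dim) activations on the gate branch (assumed layout)
    beta : leak factor in [0, 1] (hypothetical hyperparameter)
    m0   : initial membrane state, e.g. the averaged state handed over
           from the previous layer (cross-layer membrane transfer)
    Returns the gated activations and the time-averaged membrane state
    to seed the next layer.
    """
    dim = x.shape[1]
    m = np.zeros(dim) if m0 is None else np.asarray(m0, dtype=float)
    gated = np.empty_like(x, dtype=float)
    m_sum = np.zeros(dim)
    for t in range(x.shape[0]):
        m = beta * m + x[t]                    # leaky integration over time
        gated[t] = x[t] / (1.0 + np.exp(-m))   # sigmoid(m) gates the branch
        m_sum += m
    return gated, m_sum / x.shape[0]           # averaged state for transfer

# Chaining two layers: the averaged membrane state seeds the next layer.
h1, m_avg = leaky_integrate_membrane(np.random.randn(16, 8))
h2, _ = leaky_integrate_membrane(h1, m0=m_avg)
```

Because the membrane integrates past inputs, the gate at step `t` reflects accumulated history rather than the instantaneous activation alone, which is the selective-retention effect described above.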

Results

Language Tasks (Commonsense Reasoning)

| Model | Method | #Params (%) | Avg. Acc. (%) |
|---|---|---|---|
| Mamba-370M | LoRAp (X) | 2.67 | 47.8 |
| Mamba-370M | Memba (in_proj) | 2.07 | 48.2 |
| Mamba-790M | LoRAp (X) | 1.75 | 50.8 |
| Mamba-790M | Memba (in_proj) | 1.47 | 51.8 |
| Mamba-1.4B | LoRAp (X) | 1.36 | 53.7 |
| Mamba-1.4B | Memba (in_proj) | 1.13 | 54.5 |

Vision Tasks (VTAB-1k with Vim-S)

| Method | #Params (K) | Natural | Specialized | Structured | Avg. |
|---|---|---|---|---|---|
| LoRAp (X) | 1,778 | 76.64 | 83.89 | 60.84 | 71.52 |
| LoRA (out_proj) | 2,663 | 76.42 | 83.96 | 60.08 | 71.12 |
| Hybrid (w/ proj) | 117,236 | 77.00 | 84.41 | 61.55 | 72.05 |
| Memba (in+out proj) | 4,718 | 77.07 | 85.66 | 61.70 | 72.40 |

Getting Started

For setup, dependencies, and detailed usage instructions, please refer to the README in each task directory (`language/` and `vision/`).

Project Structure

```
Memba/
├── language/
│   └── commonsense_reasoning/
│       ├── finetune.py                 # Language fine-tuning entry point
│       ├── lm_harness_eval.py          # Evaluation with lm-evaluation-harness
│       ├── requirements.txt
│       └── mamba_peft/src/peft/
│           └── tuners/mamba_peft.py    # Memba PEFT implementation (LIM + LoRA)
├── vision/
│   ├── tools/                          # Training and testing scripts
│   ├── configs/mmpretrain/vim/vtab1k/  # Experiment configs
│   └── personal_lib/
│       ├── mmpretrain/models/peft/     # Vision Memba implementation
│       └── external_packages/mamba-1p1p1/  # Modified Mamba with LIM integration
```

Citation

```bibtex
@article{lee2025memba,
  title={Memba: Membrane-driven Parameter-Efficient Fine-Tuning for Mamba},
  author={Lee, Donghyun and Li, Yuhang and Yin, Ruokai and Xiao, Shiting and Panda, Priyadarshini},
  journal={arXiv preprint arXiv:2506.18184},
  year={2025}
}
```

Acknowledgements

Our code builds upon Mamba, MambaPEFT, Vim, MMPreTrain, LLM-Adapters, and PEFT. In particular, Memba is built on top of the MambaPEFT codebase. We thank the authors of these projects!

License

This project is released under the Apache 2.0 License.
