A curated list of "Language-Model-as-a-Service (LMaaS)" papers
(We are still completing this paper list. A detailed introduction to LMaaS will be added soon.)
The paper list is mainly maintained by Tianxiang Sun. We strongly encourage NLP researchers interested in this topic to make pull requests to add or update papers! (See Contributing.)
We follow the same keywords convention as PromptPapers. The tags attached to each paper indicate the main experimental setting of the work.
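For readers new to the setting, the sketch below illustrates the black-box adaptation loop that many papers in this list study: the pre-trained model is reachable only through an inference API, so adaptation happens by optimizing the input (e.g., a discrete prompt) with derivative-free search over returned scores rather than gradients. This is a minimal illustrative sketch, not any paper's actual method; the `lm_service` scoring function and candidate word list are hypothetical stand-ins.

```python
# Minimal sketch of the LMaaS setting: the model is a black box behind an API,
# so we can only adapt it by searching over inputs (prompts), not parameters.
# All names here are hypothetical; real methods (e.g., Black-Box Tuning, GrIPS,
# RLPrompt) use far more sophisticated derivative-free optimizers.
import random

CANDIDATE_WORDS = ["Review:", "Sentiment:", "Overall,", "In short,", "It was"]

def lm_service(prompt: str, text: str) -> float:
    """Stand-in for a black-box LM API call that returns a task score.
    In practice this would be a remote call returning log-probabilities."""
    # Toy scoring rule purely for illustration.
    return prompt.count("Sentiment") + 0.1 * len(prompt.split()) + 0.01 * random.random()

def random_search(num_queries: int = 50) -> str:
    """Derivative-free prompt search: sample candidate prompts and keep the
    one the service scores highest (no gradients are available)."""
    best_prompt, best_score = "", float("-inf")
    for _ in range(num_queries):
        prompt = " ".join(random.sample(CANDIDATE_WORDS, k=3))
        score = lm_service(prompt, text="The movie was great.")
        if score > best_score:
            best_prompt, best_score = prompt, score
    return best_prompt

if __name__ == "__main__":
    print("Best prompt found:", random_search())
```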
- To Tune or Not to Tune? Adapting Pretrained Representations to Diverse Tasks. RepL4NLP@ACL 2019
Matthew E. Peters, Sebastian Ruder, Noah A. Smith. [pdf]
- Language Models as Knowledge Bases? EMNLP 2019
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel. [pdf] [code]
- How Can We Know What Language Models Know? TACL 2020
Zhengbao Jiang, Frank F. Xu, Jun Araki, Graham Neubig. [pdf] [code]
- Language Models are Few-Shot Learners. NeurIPS 2020
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei. [pdf]
- Calibrate Before Use: Improving Few-Shot Performance of Language Models. ICML 2021
Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, Sameer Singh. [pdf] [code]
- Multitask Prompted Training Enables Zero-Shot Task Generalization. ICLR 2022
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Tali Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush. [pdf] [code]
- Learning To Retrieve Prompts for In-Context Learning. NAACL-HLT 2022
Ohad Rubin, Jonathan Herzig, Jonathan Berant. [pdf]
- Black-Box Tuning for Language-Model-as-a-Service. ICML 2022
Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, Xipeng Qiu. [pdf] [code]
- Co-training Improves Prompt-based Learning for Large Language Models. ICML 2022
Hunter Lang, Monica Agrawal, Yoon Kim, David Sontag. [pdf]
- Y-Tuning: An Efficient Tuning Paradigm for Large-Scale Pre-Trained Models via Label Representation Learning. Preprint 2022.2
Yitao Liu, Chenxin An, Xipeng Qiu. [pdf]
- Black-box Prompt Learning for Pre-trained Language Models. Preprint 2022.1
Shizhe Diao, Xuechun Li, Yong Lin, Zhichao Huang, Tong Zhang. [pdf]
- GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models. Preprint 2022.3
Archiki Prasad, Peter Hase, Xiang Zhou, Mohit Bansal. [pdf] [code]
- In-Context Learning for Few-Shot Dialogue State Tracking. Preprint 2022.3
Yushi Hu, Chia-Hsuan Lee, Tianbao Xie, Tao Yu, Noah A. Smith, Mari Ostendorf. [pdf] [code]
- RLPrompt: Optimizing Discrete Text Prompts With Reinforcement Learning. Preprint 2022.5
Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric P. Xing, Zhiting Hu. [pdf] [code]
- BBTv2: Pure Black-Box Optimization Can Be Comparable to Gradient Descent for Few-Shot Learning. Preprint 2022.5
Tianxiang Sun, Zhengfu He, Hong Qian, Xuanjing Huang, Xipeng Qiu. [pdf] [code]
- Few-shot Prompting Towards Controllable Response Generation. Preprint 2022.6
Hsuan Su, Pohan Chi, Shih-Cheng Huang, Chung Ho Lam, Saurav Sahay, Shang-Tse Chen, Hung-yi Lee. [pdf]
👍🎉 First off, thanks for taking the time to contribute! 🎉👍
Steps to contribute:
- Add a new paper or update an existing paper.
- Please use the same format as existing entries. When adding keyword tags, please follow the same keywords convention. When adding the paper's pdf link, please use the abstract page if the paper is on arXiv.
- Modify the `PaperNumber` at the top of the page accordingly and submit your pull request. We recommend giving a very brief explanation of why you think a paper should be added or changed.
We thank all the contributors for their paper recommendations!