- University of Amsterdam
- Amsterdam, the Netherlands
- https://chuanmeng.github.io/
- @ChuanMg
Stars
Research on Learned Sparse Retrieval Derived from Transformer-based Ranking for Information Retrieval
ColBERT: state-of-the-art neural search (SIGIR'20, TACL'21, NeurIPS'21, NAACL'22, CIKM'22, ACL'23, EMNLP'23)
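ColBERT's core scoring rule, late interaction (MaxSim), is simple enough to sketch: each query token embedding takes its maximum similarity over all document token embeddings, and the per-token maxima are summed. A minimal sketch assuming pre-computed, L2-normalized token embeddings; the shapes and random tensors below are illustrative only.

```python
import torch

def colbert_maxsim(q_emb: torch.Tensor, d_emb: torch.Tensor) -> torch.Tensor:
    # q_emb: (num_query_tokens, dim), d_emb: (num_doc_tokens, dim), L2-normalized.
    sim = q_emb @ d_emb.T               # token-level cosine similarities
    return sim.max(dim=1).values.sum()  # MaxSim: best doc token per query token

q = torch.nn.functional.normalize(torch.randn(32, 128), dim=-1)
d = torch.nn.functional.normalize(torch.randn(180, 128), dim=-1)
print(colbert_maxsim(q, d))
```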
[Official Code] Synthetic Test Collections for Retrieval Evaluation (SIGIR 2024)
Code and packages for the paper "Evaluating Retrieval Quality in Retrieval-Augmented Generation".
[SIGIR 2024] The official repo for the paper "Planning Ahead in Generative Retrieval: Guiding Autoregressive Generation through Simultaneous Decoding"
Source code for "Incorporating Retrieval Information into the Truncation of Ranking Lists for Better Legal Search" (SIGIR 2022)
This is the official code for the EMNLP 2023 paper "GLEN: Generative Retrieval via Lexical Index Learning".
[WWW 2024] The official repo for the paper "Scalable and Effective Generative Information Retrieval".
S-LoRA: Serving Thousands of Concurrent LoRA Adapters
Aspect-Based ReDial (AB-ReDial): a subset of ReDial annotated on six dialogue aspects and overall user satisfaction at the turn and dialogue levels, with the following aspects: relevance, i…
Zero-shot Document Ranking with Large Language Models.
RankLLM is a Python toolkit for reproducible information retrieval research using rerankers, with a focus on listwise reranking.
Tevatron - A flexible toolkit for neural retrieval research and development.
Open-source Large Language Models are Strong Zero-shot Query Likelihood Models for Document Ranking
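The query-likelihood approach is easy to sketch: rank each document by the log-probability an open LLM assigns to generating the query conditioned on that document. A minimal sketch assuming a HuggingFace causal LM; the prompt template and the gpt2 stand-in are assumptions for illustration, not the paper's exact setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def query_likelihood(model, tokenizer, document: str, query: str) -> float:
    # Condition on the document, then sum log-probs of the query tokens only.
    prompt = f"Passage: {document}\nPlease write a question based on this passage.\nQuestion:"
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + " " + query, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # logits at position t predict the token at position t + 1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    return sum(
        log_probs[pos, full_ids[0, pos + 1]].item()
        for pos in range(prompt_len - 1, full_ids.shape[1] - 1)
    )

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in for a larger open LLM
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
doc = "The Amazon is the largest tropical rainforest in the world."
print(query_likelihood(model, tokenizer, doc, "what is the largest rainforest"))
```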
A gaggle of deep neural architectures for text ranking and question answering, designed for Pyserini
arian-askari / DSI-QG
Forked from ArvinZhuang/DSI-QG. The official repository for "Bridging the Gap Between Indexing and Retrieval for Differentiable Search Index with Query Generation", Shengyao Zhuang, Houxing Ren, Linjun Shou, Jian Pei, Ming Gong, …
pytrec_eval is an Information Retrieval evaluation tool for Python, based on the popular trec_eval.
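For reference, a minimal example of how pytrec_eval is typically driven: build qrel and run dictionaries keyed by query and document IDs, then ask the evaluator for the measures you need. The toy IDs and scores below are made up.

```python
import pytrec_eval

# Ground-truth relevance judgments: query_id -> {doc_id: relevance grade}
qrel = {"q1": {"d1": 1, "d2": 0, "d3": 1}}

# System run to evaluate: query_id -> {doc_id: retrieval score}
run = {"q1": {"d1": 0.9, "d2": 0.6, "d3": 0.3}}

# Compute MAP and nDCG per query.
evaluator = pytrec_eval.RelevanceEvaluator(qrel, {"map", "ndcg"})
for query_id, metrics in evaluator.evaluate(run).items():
    print(query_id, metrics)  # e.g. q1 {'map': ..., 'ndcg': ...}
```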
Injecting First-stage Retriever score into the input of cross-encoder re-rankers in a knowledge distillation training setup
Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
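A minimal sketch of the workflow PEFT enables: wrap a base model with a LoraConfig so that only the low-rank adapter weights are trained, which is what makes the consumer-hardware instruct-tuning in the neighboring repos feasible. The gpt2 base model and hyperparameter values are illustrative choices, not recommendations.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Load a base model, then attach LoRA adapters; the base weights stay frozen.
base = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative base model
lora = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16, lora_dropout=0.05)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # prints trainable vs. total parameter counts
```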
Instruct-tune LLaMA on consumer hardware
This project aims to share the technical principles behind large language models together with hands-on experience (LLM engineering and deploying LLM applications in production).
Code and documentation to train Stanford's Alpaca models, and generate the data.
BARTScore: Evaluating Generated Text as Text Generation
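BARTScore's central idea, scoring a hypothesis by the log-likelihood BART assigns to generating it from the source text, can be sketched as follows. The facebook/bart-large-cnn checkpoint and the mean-over-tokens aggregation follow the paper's general recipe, but this is a simplified sketch, not the official implementation.

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn").eval()

def bart_score(source: str, hypothesis: str) -> float:
    # Average log-probability of the hypothesis tokens given the source text.
    src = tokenizer(source, return_tensors="pt")
    tgt = tokenizer(hypothesis, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(**src, labels=tgt).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    token_scores = log_probs[0].gather(1, tgt[0].unsqueeze(1)).squeeze(1)
    return token_scores.mean().item()  # higher = hypothesis more likely given source

print(bart_score("The cat sat on the mat.", "A cat was sitting on a mat."))
```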
A modular RL library to fine-tune language models to human preferences