Code for the paper "Rewarding Doubt: A Reinforcement Learning Approach to Calibrated Confidence Expression of Large Language Models"

ChantalMP/RewardingDoubt


Rewarding Doubt: A Reinforcement Learning Approach to Calibrated Confidence Expression of Large Language Models

Authors: David Bani-Harouni*, Chantal Pellegrini*, Paul Stangel, Ege Özsoy, Matthias Keicher, Nassir Navab

* shared first authorship

[Teaser figure]

Safe and trustworthy use of Large Language Models (LLMs) requires an accurate expression of confidence in their answers. We propose a novel Reinforcement Learning approach that directly fine-tunes LLMs to express calibrated confidence estimates alongside their answers to factual questions. Our method optimizes a reward based on the logarithmic scoring rule, explicitly penalizing both over- and under-confidence. This encourages the model to align its confidence estimates with its actual predictive accuracy; the optimal policy under our reward design yields perfectly calibrated confidence expressions. Unlike prior approaches that decouple confidence estimation from response generation, our method integrates confidence calibration seamlessly into the generative process of the LLM. Empirically, we demonstrate that models trained with our approach exhibit substantially improved calibration and generalize to unseen tasks without further fine-tuning, suggesting the emergence of general confidence awareness.
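The reward described above can be sketched in a few lines. This is a minimal illustration of a logarithmic scoring rule, not the paper's exact implementation: the function name, the `eps` clipping, and the True/False correctness signal are assumptions made here for a self-contained example.

```python
import math

def confidence_reward(confidence: float, correct: bool, eps: float = 1e-6) -> float:
    """Log-scoring-rule reward: log(c) if the answer is correct, log(1 - c) otherwise.

    Overconfident wrong answers (c near 1, correct=False) and underconfident
    right answers (c near 0, correct=True) both receive strongly negative
    reward. The expected reward p*log(c) + (1-p)*log(1-c) is maximized at
    c = p, so the optimal policy reports its true probability of being
    correct, i.e. it is perfectly calibrated. `eps` avoids log(0) and is an
    implementation assumption.
    """
    c = min(max(confidence, eps), 1.0 - eps)
    return math.log(c) if correct else math.log(1.0 - c)
```

For example, a wrong answer given with 90% confidence scores log(0.1) ≈ -2.30, while the same wrong answer at 50% confidence scores only log(0.5) ≈ -0.69, so the model is pushed toward expressing doubt where doubt is warranted.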

Usage descriptions are included in the single-answer and multiple-answer folders.

Languages

  • Python 96.4%
  • Jupyter Notebook 3.6%