Adaptive Testing
We are using the Python library `adaptivetesting`.

The idea is that every user has an unknown, real "ability score". The goal is to estimate this real ability score by asking the questions that give the most information about it. The estimate is updated after each response using methods such as Maximum Likelihood Estimation.
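As a concrete illustration of the estimation idea, here is a plain-Python sketch (not the `adaptivetesting` API) that estimates ability by maximizing the likelihood of the observed responses under a simple logistic response model:

```python
import math

def p_correct(theta, b):
    # Logistic response model: probability that a user with ability
    # theta answers an item of difficulty b correctly.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def mle_ability(difficulties, responses):
    """Grid-search MLE of theta given item difficulties and 0/1 responses."""
    grid = [i / 100 for i in range(-400, 401)]  # theta in [-4, 4]
    def log_lik(theta):
        ll = 0.0
        for b, r in zip(difficulties, responses):
            p = p_correct(theta, b)
            ll += math.log(p) if r else math.log(1.0 - p)
        return ll
    return max(grid, key=log_lik)

# A user who gets the easy items right and the hard ones wrong
# is estimated to sit between those difficulties.
theta_hat = mle_ability([-1.0, 0.0, 1.0, 2.0], [1, 1, 0, 0])
```

A grid search is used instead of a numerical optimizer to keep the sketch dependency-free; in practice the library handles this step.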
There are also many kinds of models each with different parameters. The most common are 1PL, 2PL and 3PL.
In 3PL, each question has parameters:

- $a$ : Discrimination - essentially how much information you get from a response. High discrimination means it is easy to tell the user's ability from this question.
- $b$ : Difficulty - the $\theta$ (ability score) at which a user has a 50% chance of answering correctly (ignoring guessing).
- $c$ : Guessing - self-explanatory. The probability a user answers correctly by chance alone.
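The three parameters combine into the standard 3PL response function $P(\theta) = c + \frac{1-c}{1 + e^{-a(\theta - b)}}$. A small sketch (plain Python, not the `adaptivetesting` API) showing the probability and the corresponding Fisher information, which is what "how much information you get from a response" means formally:

```python
import math

def p_3pl(theta, a, b, c):
    # 3PL response function: guessing floor c, slope a, difficulty b.
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def info_3pl(theta, a, b, c):
    # Fisher information of a 3PL item at ability theta.
    # Higher discrimination a means sharply more information near theta == b.
    p = p_3pl(theta, a, b, c)
    q = 1.0 - p
    return (a ** 2) * (q / p) * ((p - c) / (1.0 - c)) ** 2

# With no guessing (c = 0), a user at theta == b answers correctly
# exactly 50% of the time, matching the definition of difficulty above.
```

Note that with guessing ($c > 0$), the probability at $\theta = b$ rises to $\frac{1+c}{2}$, which is why the 50% reading of $b$ holds only when guessing is ignored.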
1PL is just the special case with discrimination fixed ($a = 1$) and no guessing ($c = 0$), so each item is described by its difficulty $b$ alone.
For MacFAST we are starting off with a 1PL model. The question difficulty is given as the proportion of students who answered correctly (or incorrectly) in the past. To convert a proportion correct $p$ to the model's $b$, one common approach is to invert the 1PL response function at an assumed average ability of $\theta = 0$, giving $b = \ln\!\left(\frac{1-p}{p}\right)$.
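Assuming the standard logit-based conversion at an average ability of $\theta = 0$ (a working assumption, since the exact calibration is not pinned down here), the proportion-to-difficulty mapping is:

```python
import math

def difficulty_from_p(p_correct):
    # Invert the 1PL response function at theta = 0:
    #   p = 1 / (1 + exp(b))  =>  b = ln((1 - p) / p)
    # Items most students answered correctly map to negative (easy) b.
    return math.log((1.0 - p_correct) / p_correct)

# 50% correct -> b = 0; 80% correct -> an easy item with b < 0.
```

If the stored proportion is of *incorrect* answers, the sign simply flips: $b = \ln\!\left(\frac{p_{\text{wrong}}}{1-p_{\text{wrong}}}\right)$.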
In general, an adaptive test needs:

- Item bank calibrated with IRT
- Ability score starting point (Typically 0)
- Item selection algorithm
- Scoring method
- Termination criterion
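The components above can be tied together in a minimal CAT loop. The following is an illustrative 1PL sketch in plain Python (maximum-information item selection, grid-search MLE scoring, standard-error termination), not the `adaptivetesting` API:

```python
import math

def p_1pl(theta, b):
    # 1PL response function: probability of a correct answer.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    # 1PL Fisher information: p * (1 - p), maximal when b is near theta.
    p = p_1pl(theta, b)
    return p * (1.0 - p)

def run_cat(item_bank, answer, theta0=0.0, se_target=0.4, max_items=20):
    """Minimal CAT loop: select, administer, score, check termination.

    item_bank: dict of item_id -> difficulty b
    answer: callable(item_id) -> 0/1 response (the user being tested)
    """
    remaining = dict(item_bank)
    asked, responses = [], []
    theta = theta0
    while remaining and len(asked) < max_items:
        # 1. Item selection: maximum information at the current estimate.
        item = max(remaining, key=lambda i: item_information(theta, remaining[i]))
        asked.append(remaining.pop(item))
        responses.append(answer(item))
        # 2. Scoring: grid-search MLE of theta on the responses so far.
        #    The bounded grid also keeps all-correct/all-wrong runs finite.
        grid = [i / 50 for i in range(-200, 201)]
        def log_lik(t):
            ll = 0.0
            for b, r in zip(asked, responses):
                p = p_1pl(t, b)
                ll += math.log(p) if r else math.log(1.0 - p)
            return ll
        theta = max(grid, key=log_lik)
        # 3. Termination: stop once the standard error is small enough.
        info = sum(item_information(theta, b) for b in asked)
        if info > 0 and 1.0 / math.sqrt(info) < se_target:
            break
    return theta
```

A deterministic simulated user (answers correctly exactly when the item is at or below their true ability) shows the estimate landing near that ability as items are consumed.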
TODO if we care about this