Improve T5 encoder tests with more prompts and static context length #976
+262
−228
The set of prompts is not large enough for statistically sound testing of the T5 encoder; the same is true for the other text encoders.
After expanding the prompt set, the bf16 numerical difference between eager execution and IREE vanished; IREE even turns out to be slightly more accurate.
In the tests, the tokenizer padding has been changed to always produce max-length token sequences, which matches how T5 is used in the Flux pipeline. The T5 encoder export has been extended with an option to export with a static token sequence length.
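For illustration, tokenizing along these lines yields fixed-length inputs that pair naturally with a static-length export (a minimal sketch, not the PR's actual test code; the model name and prompts are assumptions):

```python
import torch
from transformers import AutoTokenizer

# Illustrative model name; the tests may load a different T5 checkpoint.
tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-xxl")
prompts = ["a photo of a cat", "an astronaut riding a horse on the moon"]

batch = tokenizer(
    prompts,
    padding="max_length",                    # always pad to the full context length
    max_length=tokenizer.model_max_length,   # fixed, model-defined maximum
    truncation=True,
    return_tensors="pt",
)

# batch.input_ids now has a static sequence length, so the exported T5
# encoder can be compiled with a fixed token-sequence dimension instead
# of a dynamic one.
assert batch.input_ids.shape[1] == tokenizer.model_max_length
```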
The tests were refactored to share tolerance values for f32 and bf16.
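A rough sketch of what sharing tolerances per dtype can look like (names and numeric values are illustrative, not taken from this PR):

```python
import torch

# Illustrative tolerance table shared by the f32 and bf16 test variants;
# the PR's actual values may differ.
TOLERANCES = {
    torch.float32: dict(atol=1e-5, rtol=1e-5),
    torch.bfloat16: dict(atol=3e-2, rtol=3e-2),
}

def assert_text_encoder_close(actual, expected, dtype):
    # Eager-vs-IREE comparisons for every text encoder reuse the same
    # tolerance table instead of duplicating literals in each test.
    torch.testing.assert_close(actual, expected, **TOLERANCES[dtype])
```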