Hi! First, thank you for building Pretext. The library’s approach to avoiding DOM measurement and reflow is very compelling.
I’m evaluating it for a high-frequency text UI use case, and I think there is one missing capability that would make it much more useful for streaming applications:
Problem
Pretext is excellent when text is relatively stable:
prepare(text, font) does the heavy work once
layout(prepared, width, lineHeight) stays very cheap
That works great for resize-driven relayout.
However, in streaming UIs, the text itself keeps changing:
chat messages that stream token by token
terminal / log panes that append text continuously
live transcript / live notes views
collaborative text surfaces where content grows incrementally
In these cases, the current model seems to require:
prepare(newText, font) again for every append/update
That is still far better than DOM measurement, but it means the entire text is re-analyzed and re-measured on every update, even when only a small suffix has changed.
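To make the cost concrete, here is a toy model of the difference (this is not Pretext's internals; measureWord is a hypothetical stand-in for real glyph measurement, and it ignores the subtlety that an append can extend the last word, which a real implementation would handle by re-analyzing from the last break opportunity):

```javascript
// Toy model: "preparing" text means segmenting it into runs and measuring each run.
// measureWord is a hypothetical stand-in for real glyph measurement.
function measureWord(word) {
  return word.length; // pretend every code unit is 1px wide
}

// Full preparation: every run in the whole text is measured.
function prepareFull(text) {
  const words = text.split(/(\s+)/).filter(Boolean);
  return { text, widths: words.map(measureWord), measureCalls: words.length };
}

// Append-only update: only the new suffix is segmented and measured;
// all previously computed widths stay untouched.
function appendPrepared(prepared, suffix) {
  const words = suffix.split(/(\s+)/).filter(Boolean);
  prepared.text += suffix;
  prepared.widths.push(...words.map(measureWord));
  prepared.measureCalls += words.length;
  return prepared;
}
```

Streaming n tokens through prepareFull does O(n) measurements per token (O(n²) total), while appendPrepared does O(1) per token (O(n) total), which is the gap this request is about.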
What would help
An incremental / append-only API.
Something along the lines of:
```javascript
const prepared = prepareStream(initialText, font, options)
appendPreparedText(prepared, appendedText)
const result = layout(prepared, width, lineHeight)
```
Or any equivalent design that allows the prepared state to grow without rebuilding from scratch.
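The one subtle part is that an append can change the segmentation of the text immediately before it: streaming "hel" and then "lo" must yield one word "hello", not two. One way to handle this, sketched below with the standard Intl.Segmenter rather than Pretext's actual segmentation, is to re-segment only from the start of the last segment before the append point (a production version might need to back up further, to the last guaranteed break opportunity, for some scripts):

```javascript
// Sketch: keep segments valid under append-only edits by re-segmenting only
// the tail. Uses the standard Intl.Segmenter; Pretext's real segmentation
// and caching are assumed to differ.
const segmenter = new Intl.Segmenter("en", { granularity: "word" });

function segmentAll(text) {
  return Array.from(segmenter.segment(text), (s) => s.segment);
}

function appendAndResegment(state, suffix) {
  // The appended text can merge with the last segment ("hel" + "lo"),
  // so drop the last segment and re-segment it together with the suffix.
  const last = state.segments.pop() ?? "";
  const tailSegments = segmentAll(last + suffix);
  state.segments.push(...tailSegments);
  state.text += suffix;
  return tailSegments; // only these need fresh measurement
}
```

Only the returned tail segments need re-measuring; every segment before them, and any cached width for it, stays valid.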
Desired behavior
The ideal version would:
reuse existing segmentation / measurement work for the unchanged prefix
only analyze and measure the appended suffix
keep existing caches valid
preserve line-breaking correctness
still work with layout(), layoutWithLines(), measureLineStats(), or line-range walkers
support both plain text and pre-wrap
remain accurate across multilingual text, emoji, soft hyphens, and mixed scripts
Why this matters
There are two separate performance problems in text-heavy UIs:
DOM measurement / reflow pressure
repeated full recomputation while content is still streaming
Pretext already addresses the first one very well.
An incremental prepare/append capability would help with the second one too, especially for:
long streaming chat messages
many independently updating text surfaces
virtualized chat/log interfaces where text height still needs to be known ahead of DOM rendering
Non-goals
I’m not asking for mutable rich editing, arbitrary middle-of-text edits, or a full text editor model.
Even a limited first version would already be useful if it only supports:
append-only updates
same font / same options
no modifications to the already-prepared prefix
Question
Would you consider an incremental / append-only prepared-text API?
Even if the exact API above is not the right shape, I’d love to know whether this is aligned with the project direction or explicitly out of scope.
Thanks again.