
Is this the right tool for stroke rendering? And how does hardware acceleration work? #416

@KySpace

Description

Hi! I hope this will not end up irrelevant.

Motivation

I'm interested in building a stylus-whiteboard toy app that renders strokes from sampled points carrying pressure, position, stylus angle, etc. I'd like to make better use of these properties to simulate various kinds of pens and brushes, and to record the strokes compactly in vector form (as opposed to rasterized brushes in painting apps or simple polyline strokes in whiteboard apps). I once assumed I could pass the sampled points as vertices to the GPU and let fragment shaders determine the stroke boundaries, but that doesn't seem possible, since GPUs want triangles, or so I've heard, see discussion

I realized that what I really want is, for every frame, to plot the entire screen pixel by pixel, given a selection of strokes with their sampled points. I assumed that doing this in parallel on the GPU would be faster, but maybe doing it on the CPU is necessary and more versatile (correct me if I'm wrong).
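To make that concrete, here is roughly what I have in mind, as a naive CPU sketch (the `Sample` type and the pressure-to-width mapping are things I made up for illustration, not anything from this crate):

```rust
/// One sampled stylus point: position in pixels plus pressure in 0..=1 (made up for this sketch).
struct Sample {
    x: f32,
    y: f32,
    pressure: f32,
}

/// Distance from point (px, py) to the segment a-b, used as a crude brush model.
fn dist_to_segment(px: f32, py: f32, a: &Sample, b: &Sample) -> f32 {
    let (dx, dy) = (b.x - a.x, b.y - a.y);
    let len2 = dx * dx + dy * dy;
    let t = if len2 == 0.0 {
        0.0
    } else {
        (((px - a.x) * dx + (py - a.y) * dy) / len2).clamp(0.0, 1.0)
    };
    let (cx, cy) = (a.x + t * dx, a.y + t * dy);
    ((px - cx).powi(2) + (py - cy).powi(2)).sqrt()
}

/// "Plot pixel by pixel": for every pixel of an RGBA frame, test the stroke's
/// segments and write opaque ink when the pixel falls inside a pressure-scaled radius.
fn rasterize(frame: &mut [u8], width: usize, height: usize, stroke: &[Sample]) {
    for y in 0..height {
        for x in 0..width {
            let hit = stroke.windows(2).any(|seg| {
                let radius = 1.0 + 6.0 * seg[0].pressure; // made-up pressure-to-width mapping
                dist_to_segment(x as f32 + 0.5, y as f32 + 0.5, &seg[0], &seg[1]) <= radius
            });
            if hit {
                let i = (y * width + x) * 4;
                frame[i..i + 4].copy_from_slice(&[0, 0, 0, 0xff]); // opaque black, RGBA
            }
        }
    }
}
```

I'd obviously need some spatial culling so each pixel only tests nearby segments, but that's the shape of the per-pixel loop I'm imagining.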

I've read the introduction and some of the issues here, and it seems that this crate might be the right tool. I don't yet have a clear picture of where the performance bottleneck would be. I could just try it out, but I'd appreciate it if someone could point out early on whether I'm off track.

Scenario model, with questions

Say the screen I'm looking at right now has ~6 million pixels, there are 2000 characters displayed, and I can scroll and zoom smoothly while the rendering stays sharp. From what I've learned, these glyphs are eventually turned into little triangles to be rendered on the GPU.

  • What if, instead, I did it with pixels and drew each of these 6 million pixels according to the neighboring glyph/stroke, ignoring the tessellated triangles (provided I have a method to tell which glyphs or strokes are relevant)? Would the performance be on par with the GPU approach? I could argue that I save the resources spent building triangles and need fewer points to record a glyph, if I do it smartly.
    • I see that some of the pixels demos use logical pixels larger than a screen pixel, but I mostly want to render at screen-pixel granularity. Will it be slow if I render at screen resolution/size? Can I align the buffer pixels to screen pixels easily? Will they be sharp?
  • What does it mean that pixels is hardware accelerated? Does it mean the GPU is used in the end? Is pixels still tessellating into triangles, or does it use some other GPU feature for drawing pixels efficiently that I don't know about? Compared to CPU rendering, what makes it faster in the end?
    • If writing to the pixel buffer happens on the CPU side, is any parallelism involved? (I've put a rough sketch of what I mean after this list.)
  • Using pixels, do I write the pixel-rendering functions in Rust or in WGSL? Do I store the sampled stroke points in main memory or in a GPU buffer, or both?
  • Are the pixel values stored in main memory or in a GPU buffer? Or can it be either?
  • I would like to render to an HTMLCanvas. Is that possible/easy to do, e.g. by choosing wgpu's SurfaceTarget?
  • While I render the pixels buffer to the canvas, can I also render other things on top of or below it using wgpu?
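To anchor the CPU-side questions above, here is the kind of frame fill I'd expect to write myself, assuming the buffer pixels hands me is a plain RGBA byte slice and that any parallelism (rayon in this sketch) is something I add on my own rather than something the crate does for me:

```rust
use rayon::prelude::*;

/// Fill an RGBA frame buffer on the CPU, one scanline per parallel task.
/// `shade` stands in for whatever per-pixel stroke evaluation I end up with.
fn fill_frame(frame: &mut [u8], width: usize, shade: impl Fn(usize, usize) -> [u8; 4] + Sync) {
    frame
        .par_chunks_mut(width * 4) // one row of RGBA pixels per chunk
        .enumerate()
        .for_each(|(y, row)| {
            for (x, px) in row.chunks_exact_mut(4).enumerate() {
                px.copy_from_slice(&shade(x, y));
            }
        });
}
```

My current (possibly wrong) understanding is that each frame I'd write into the slice returned by the Pixels frame accessor and then call render(), which uploads the buffer to a GPU texture and draws it as a scaled quad, and that this upload-and-draw step is where the hardware acceleration comes in. Please correct me if that picture is off.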

Thank you!
