If I'm correct, Rabl caches nested templates as hash objects instead of caching the final output and weaving the results together. So we incur a stiff serialization overhead that only grows with larger data sets: a 10 MB payload takes Oj roughly 4 seconds to process, so even with caching we're looking at a minimum of 4 seconds to re-render. It seems like a big change, though; is it even possible?
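To make the cost concrete, here is a minimal sketch of the difference between the two caching strategies. It uses the stdlib JSON module rather than Oj, and the data set and names (`cached_hash`, `cached_string`) are hypothetical, but the shape of the problem is the same: a cached hash pays full serialization cost on every render, while a cached string pays it once.

```ruby
require 'json'
require 'benchmark'

# Hypothetical "cached" nested-template result: 50k small records.
cached_hash   = { 'items' => Array.new(50_000) { |i| { 'id' => i, 'name' => "item-#{i}" } } }
# Caching the final output instead means serializing exactly once.
cached_string = JSON.generate(cached_hash)

# Hash cache: every render re-serializes the whole structure.
hash_time   = Benchmark.realtime { JSON.generate(cached_hash) }
# String cache: every render just returns the stored string.
string_time = Benchmark.realtime { cached_string }

puts format('hash cache: %.4fs, string cache: %.6fs', hash_time, string_time)
```

The gap widens with payload size, which is why the 10 MB case above is so painful.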
It's a good question; Rabl does cache templates as hash objects. I can see the benefit of caching the final output for large data sets. It's possible, but if I understand correctly it will require substantial changes to the way caching is implemented. Thanks for putting together those benchmarks.
Rabl::Engine looks to me like the best place to implement this caching behavior. The difficult part is that everything is designed around hash objects, and the actual serialization doesn't happen until the very end. One idea that might work is to recursively create new Engines for nested templates (since the engine already caches final output correctly) instead of letting the Builder turn them into nested hash objects.
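The recursive-engine idea could look something like the sketch below. This is not Rabl's actual API; the names (`MiniEngine`, `render`, `CACHE`) are hypothetical stand-ins meant only to show the shape: each nested template renders and caches a final JSON string, and the parent splices that string into its own output rather than nesting a hash and re-serializing the whole tree.

```ruby
require 'json'

# Toy engine (hypothetical, not Rabl's API): caches the *final output*
# of a template, so a cache hit skips both template evaluation and
# JSON serialization.
class MiniEngine
  CACHE = {}

  def initialize(cache_key, &template)
    @cache_key = cache_key
    @template  = template
  end

  # Render once, then serve the serialized string from the cache.
  def render
    CACHE[@cache_key] ||= JSON.generate(@template.call)
  end
end

child = MiniEngine.new(:child) { { 'id' => 1, 'name' => 'nested' } }

# The parent weaves the child's cached string directly into its output
# instead of embedding a hash object that must be re-serialized.
parent_json = %({"root":#{child.render}})
puts parent_json   # => {"root":{"id":1,"name":"nested"}}
```

The string-splicing step is the part that would have to replace the Builder's current hash nesting.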
I spent a good deal of time a couple of months ago trying to solve this exact problem. I don't remember exactly where I got stuck, but I believe the complication was figuring out how to combine JSON fragments in builder.rb with the hashes or arrays in compile_hash. I also suspect something smarter would be needed to support XML, BSON, PLIST, etc., since combining fragment strings differs for each format.
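A small sketch of why the fragment-combining step is format-specific. The fragments below are hypothetical cached child outputs; the point is that JSON arrays need comma-joined fragments inside brackets, while XML needs bare concatenation inside a wrapping element, so each supported format would need its own weaving rule.

```ruby
require 'json'

# Hypothetical pre-rendered child fragments from the cache.
fragments = [
  '{"id":1,"name":"a"}',
  '{"id":2,"name":"b"}'
]

# JSON: join with commas and wrap in brackets to form a valid array.
json_array = "[#{fragments.join(',')}]"

# XML: the same records would be concatenated with no delimiter and
# wrapped in a parent element instead.
xml_fragments = ['<item><id>1</id></item>', '<item><id>2</id></item>']
xml_doc = "<items>#{xml_fragments.join}</items>"

puts json_array  # => [{"id":1,"name":"a"},{"id":2,"name":"b"}]
puts xml_doc     # => <items><item><id>1</id></item><item><id>2</id></item></items>
```

Binary formats like BSON would be harder still, since their fragments can't be spliced as plain strings at all.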
I've built a test project to demonstrate expected behavior: https://github.com/panupan/rabl-cache-benchmarks