
Take into account the coordinated omission problem and JVM warmup phase #36

@maseev

Description


If I'm not mistaken, wrk is subject to the coordinated omission problem. Because of that, wrk most likely skews the latency numbers it reports for the real application.
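To make the effect concrete, here's a small simulation of coordinated omission. It is an illustrative sketch, not wrk's actual code: a closed-loop generator intends to send one request every 10 ms but waits for each reply before sending the next, so a single stall hides the backlog from the recorded numbers. All the numbers (interval, service times, the 500 ms pause) are made up for illustration.

```java
import java.util.Arrays;

public class CoordinatedOmission {
    /**
     * Simulates a closed-loop load generator that intends to issue one request
     * every intervalMs but waits for each response before sending the next.
     * Returns {naiveAvg, correctedAvg} in milliseconds.
     */
    static double[] averages(long[] serviceMs, long intervalMs) {
        long clock = 0;
        double naiveSum = 0, correctedSum = 0;
        for (int i = 0; i < serviceMs.length; i++) {
            long intendedStart = (long) i * intervalMs;
            long actualStart = Math.max(clock, intendedStart); // closed loop: waits for the previous reply
            long finish = actualStart + serviceMs[i];
            naiveSum += finish - actualStart;       // what the closed-loop tool records (service time only)
            correctedSum += finish - intendedStart; // what a client on a fixed schedule would experience
            clock = finish;
        }
        int n = serviceMs.length;
        return new double[] { naiveSum / n, correctedSum / n };
    }

    public static void main(String[] args) {
        long[] serviceMs = new long[100];
        Arrays.fill(serviceMs, 1); // 1 ms responses...
        serviceMs[50] = 500;       // ...with one 500 ms stall (e.g. a GC pause)
        double[] avgs = averages(serviceMs, 10);
        // prints: naive avg = 5.99 ms, corrected avg = 140.25 ms
        System.out.printf("naive avg = %.2f ms, corrected avg = %.2f ms%n", avgs[0], avgs[1]);
    }
}
```

One 500 ms stall in 100 requests barely moves the naive average, while the corrected numbers show that roughly half the intended requests would have been badly delayed.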

Fortunately, there's a patched version of this tool called wrk2. It would be great if you could replace wrk with wrk2 in your tests and see how that affects the results.
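For reference, wrk2 keeps wrk's command line but adds a required `-R`/`--rate` flag (requests per second), which is what lets it measure latency from each request's intended send time instead of its actual one. A typical invocation might look like this (the URL, port, and numbers are just placeholders):

```shell
# wrk2's binary is still named wrk; -R sets the constant throughput,
# --latency prints the detailed HdrHistogram latency distribution.
wrk -t2 -c100 -d30s -R2000 --latency http://127.0.0.1:8080/
```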

There's also another load generator that might come in handy - tcpkali - although I'm not entirely sure it doesn't have the same problem that wrk does.

Also, if I understand correctly, you skip the usual JVM warmup phase. In other words, you start the server and then immediately start sending requests and collecting throughput and latency statistics.

By default, it takes the JVM 10,000 interpreted invocations of a method before the JIT compiler kicks in and compiles it to machine code, applying a long list of optimizations along the way - method inlining, dead code elimination, etc.
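A minimal sketch of why this matters for measurement - the workload and iteration counts here are made up, and this is not a rigorous benchmark (a harness like JMH handles warmup, dead-code elimination, and statistics properly):

```java
public class WarmupSketch {
    // Hypothetical stand-in for a request handler's hot path.
    static long work(int n) {
        long acc = 0;
        for (int i = 0; i < n; i++) acc += (long) i * i;
        return acc;
    }

    static long timeIterations(int iterations) {
        long start = System.nanoTime();
        long sink = 0;
        for (int i = 0; i < iterations; i++) sink += work(1_000);
        long elapsed = System.nanoTime() - start;
        if (sink == 42) System.out.println(sink); // keep the JIT from eliminating the loop
        return elapsed;
    }

    public static void main(String[] args) {
        long cold = timeIterations(20_000); // includes interpreted and compiling phases
        long warm = timeIterations(20_000); // mostly JIT-compiled code by now
        System.out.printf("cold: %d ms, warm: %d ms%n", cold / 1_000_000, warm / 1_000_000);
    }
}
```

Numbers collected during the first batch mix interpreted, compiling, and compiled executions, which is exactly what happens when a benchmark starts recording the moment the server comes up.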

I also noticed that you have an 'Avg Latency' column in the main table. I don't think you can use the average for latency, because it doesn't really mean anything. There's a great presentation by Gil Tene, 'How NOT to Measure Latency', where he talks about exactly this.

In short - it would be much better if you could report not the average latency but, say, the median (50th percentile) and the 99.9999th percentile (with four nines after the dot).
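To show how misleading the average is, here's a sketch using the simple nearest-rank percentile method on made-up latency samples (real tools like wrk2 use HdrHistogram for this; the numbers below are invented for illustration):

```java
import java.util.Arrays;

public class LatencyPercentiles {
    // Nearest-rank percentile over an already-sorted array of latencies.
    static long percentile(long[] sorted, double p) {
        int idx = (int) Math.ceil(p / 100.0 * sorted.length) - 1;
        return sorted[Math.max(idx, 0)];
    }

    public static void main(String[] args) {
        // Hypothetical latency samples in microseconds: mostly fast, two stalls.
        long[] latencies = {100, 110, 120, 130, 5000, 140, 150, 160, 170, 9000};
        long[] sorted = latencies.clone();
        Arrays.sort(sorted);
        double avg = Arrays.stream(sorted).average().orElse(0);
        System.out.println("avg = " + avg);                    // 1508.0 - dominated by two outliers
        System.out.println("p50 = " + percentile(sorted, 50)); // 140 - the typical request
        System.out.println("p99 = " + percentile(sorted, 99)); // 9000 - the tail
    }
}
```

The average (1508 µs) describes no request that actually happened: the typical request took 140 µs and the worst took 9000 µs, which is exactly the information the percentile columns would carry.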

Looks like this ticket is related.
