
extra logging on benchmarking? #176

Open
bwaybandit opened this issue Mar 17, 2016 · 3 comments
@bwaybandit

I'd like to see what response pyresttest is getting, so I can tell what it is counting as a failure. Part of the YAML is below. I'm trying to track down why a small percentage of users do not get created.

I tried `--log=DEBUG` in this case, and all I get is:

```
INFO:Benchmark Starting: Create contact Group: Default
INFO:Warmup: started
INFO:Warmup: finished
INFO:Benchmark: starting
INFO:Benchmark: ending
```

```yaml
- benchmark:
    name: "Create contact"
    url: "/api/v1/contacts/"
    warmup_runs: 0
    method: "POST"
    headers: {'Content-Type': 'application/json'}
    body: "{}"
    benchmark_runs: '10000'
    output_format: csv
    metrics:
      - total_time: total
      - total_time: mean
```

FYI: great tool.

@bwaybandit (Author)

Added logging in the run_benchmark method of resttest.py. That works. I was wondering if there was something else I was missing?
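For anyone else wanting the same, here's a minimal sketch of that kind of per-request logging using only the stdlib `logging` module. The function and parameter names below are illustrative stand-ins, not pyresttest's actual internals:

```python
import logging

logger = logging.getLogger("pyresttest.benchmark")

def logged_request(do_request, url):
    """Illustrative wrapper: log each benchmark request's outcome.

    `do_request` is any callable that performs the HTTP request and
    returns (status_code, body) -- a hypothetical stand-in, not the
    real pyresttest call.
    """
    try:
        status, body = do_request(url)
        # Truncate the body so huge responses don't flood the log
        logger.debug("GOT %s from %s: %.200s", status, url, body)
        return status, body
    except Exception as exc:
        logger.warning("FAILED %s: %s", url, exc)
        raise
```

With the logger level set to `DEBUG`, every benchmark iteration then shows its status code and (truncated) response body.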

@svanoort (Owner)

@bwaybandit You might be the only person besides me (and a team I was affiliated with at a previous company) that uses the benchmark feature! (Though I have plans to make it easier to use -- see #76). I would be curious to see how useful others find it.

Currently the only failure cases are when a PyCurl exception is thrown -- for example, when a connection cannot be opened to the host, or the host cannot be resolved. There is a full list of errors here.

By design, the initial benchmark implementation doesn't run validators or extractors at all, nor does it check HTTP response codes. This is because it runs a much-simplified execution loop so it can cycle quickly: https://github.com/svanoort/pyresttest/blob/master/pyresttest/resttest.py#L495
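To make those failure semantics concrete, here is a stripped-down sketch of a loop with the same behavior: only transport-level exceptions count as failures, and HTTP status codes are returned but never inspected. This is a stand-alone illustration with a stubbed-out request function, not pyresttest's actual code:

```python
import time

class TransportError(Exception):
    """Stand-in for pycurl.error: connection refused, DNS failure, etc."""

def run_benchmark(do_request, runs):
    """Minimal benchmark loop with transport-only failure counting.

    `do_request` is a hypothetical callable returning (status, body).
    Each call is timed; a TransportError counts as a failure, but the
    HTTP status code is never validated -- a 500 still times as a
    successful run, matching the behavior described above.
    """
    times, failures = [], 0
    for _ in range(runs):
        start = time.perf_counter()
        try:
            do_request()  # (status, body) returned; status ignored
        except TransportError:
            failures += 1
            continue
        times.append(time.perf_counter() - start)
    return times, failures
```

So a request that gets a 500 response is timed as a success here; only a `TransportError` (connection/DNS-level problem) bumps the failure count.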

Yes, extra logging would be a good thing to add -- I'm doing a complete refactor of the way execution and logging are structured in #171 (issue with the next steps here: #170) and it would be fairly straightforward to add error logging there. Benchmarks will probably get a more complete execution lifecycle too.

By all means, fork away and add customizations for your use case in the meantime... but keep an eye on the main branch -- there are big changes coming in v1.8, 1.9, and 2.0! PRs are, of course, always welcome where something isn't specific to your use case. If a change would collide with the refactorings, though, it would be helpful to base it off those branches.

@bwaybandit (Author)

Thanks Sam. Second day with pyresttest and looking forward to doing a lot more with it. Thanks for all the info. Very useful...
