Migrate HttpClientTest test server to Armeria #3225
Conversation
What reactor-netty? :) I see other failed tests: https://scans.gradle.com/s/niu4tnujvnxsu/tests
@iNikem Ah, looks like a difference of localhost vs 127.0.0.1 caused by the execution environment; will tweak that. It would still help if you could try this branch's reactor-netty tests, since those seem like real breakage, and reactor-netty instrumentation is extremely hard to grok :(
Still don't see failing reactor-netty tests :D https://scans.gradle.com/s/iw4tlx733ld2g I see just a netty failure.
@iNikem Hmm, test (8) is a random testcontainers flake; the build failure looks like flakiness of some sort too, though I will check whether this PR adds extra flakiness. But test (15) shows it (as does running locally, every time :P): https://scans.gradle.com/s/5naapz3zi7hrg I think it's because the client span remains in the channel across tests, and I couldn't figure out where that's supposed to get cleared out.
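To illustrate the suspicion, a speculative sketch, assuming the instrumentation parks the client span in a Netty channel attribute (the key name and helper below are hypothetical, not the real instrumentation's code):

```groovy
import io.netty.channel.Channel
import io.netty.util.AttributeKey
import io.opentelemetry.api.trace.Span

class ChannelSpans {
  // Hypothetical key; the real instrumentation's attribute key may differ.
  static final AttributeKey<Span> CLIENT_SPAN = AttributeKey.valueOf("otel-client-span")

  // If a keep-alive connection is reused and the previous request's span was
  // never cleared, the next request on the same channel sees an "in-flight"
  // client span and gets suppressed as a nested CLIENT call.
  static void clearClientSpan(Channel channel) {
    channel.attr(CLIENT_SPAN).set(null)
  }
}
```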
."${method.toLowerCase()}"() | ||
.uri(uri.toString()) | ||
} | ||
|
||
@Override | ||
int sendRequest(HttpClient.ResponseReceiver request, String method, URI uri, Map<String, String> headers) { | ||
return request.response().block().status().code() | ||
return request.responseSingle {resp, content -> | ||
// Make sure to consume content since that's when we close the span. |
This monstrosity is the only way I found to have reactor-netty present us with a status code only after consuming the response. Presumably Armeria is flushing bytes to the connection more eagerly, which lets reactor-netty process the status line even before the content, meaning spans weren't completing in time.
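For reference, a sketch of how the truncated override above might read in full, assuming reactor-netty's `responseSingle`/`ByteBufMono` API (the exact body in this PR may differ):

```groovy
import reactor.core.publisher.Mono

@Override
int sendRequest(HttpClient.ResponseReceiver request, String method, URI uri, Map<String, String> headers) {
  return request.responseSingle { resp, content ->
    // Make sure to consume content since that's when we close the span;
    // only then map to the status code the test asserts on.
    content.ignoreElement().then(Mono.just(resp.status().code()))
  }.block()
}
```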
Looks like the issues were related to Armeria flushing headers before content, causing the tests to see headers before the content was consumed. Fixed the tests to consume the content explicitly.
* Use Armeria for test HTTP server.
* Continue
* Migrate test http server to Armeria.
* Finish
* Use junit extension (see the sketch below)
* Remove unused.
* Use localhost for net peer name.
* Block for full response in reactor-netty tests.
* Handle split responses in netty41 and akka
* Typo
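The "Use junit extension" commit refers to Armeria's JUnit 5 support. A minimal sketch of that wiring (the route and response body are illustrative placeholders, not the PR's actual test endpoints):

```groovy
import com.linecorp.armeria.common.HttpResponse
import com.linecorp.armeria.common.HttpStatus
import com.linecorp.armeria.common.MediaType
import com.linecorp.armeria.server.ServerBuilder
import com.linecorp.armeria.testing.junit5.server.ServerExtension
import org.junit.jupiter.api.extension.RegisterExtension

class HttpClientTestServer {
  @RegisterExtension
  static ServerExtension server = new ServerExtension() {
    @Override
    protected void configure(ServerBuilder sb) {
      // Illustrative endpoint; the real server registers the routes that
      // HttpClientTest exercises.
      sb.service('/success') { ctx, req ->
        HttpResponse.of(HttpStatus.OK, MediaType.PLAIN_TEXT_UTF_8, 'Hello.')
      }
    }
  }
}
```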
My main motivation for this is to follow up by migrating the HTTPS test from an external request to an internal one so it works offline, and also to add HTTP/2 tests for libraries that support it. I was sad to find that the Netty instrumentation tests wouldn't work without introducing shading, but in the end I think it's good: we also found issues with Jetty conflicts between instrumentation and testing in #3079, so it's probably best practice to shade the server we use for tests.
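As a sketch of what shading the test server implies, assuming the Shadow Gradle plugin (the relocation prefix is illustrative, not necessarily what this PR uses):

```groovy
// build.gradle (Groovy DSL): relocate the test server's Netty packages so they
// cannot clash with the Netty version the agent instruments in these tests.
plugins {
  id 'com.github.johnrengelman.shadow'
}

shadowJar {
  // Any prefix the agent does not instrument works here.
  relocate 'io.netty', 'io.opentelemetry.testing.internal.io.netty'
}
```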
If this PR is merged, I will follow up with removing the remaining usage of TestHttpServer and delete it. The build won't pass yet, though: reactor-netty fails mysteriously. As far as I can tell, the connection context is not getting cleaned up, so only the first test passes and all subsequent client spans get suppressed. Armeria has presumably triggered some connection-reuse behavior that wasn't there before; my attempt to prevent it with maxRequestsPerConnection(1) didn't help. But I guess the instrumentation has some bug. @iNikem could you help troubleshoot?