Metrics reporting #1496
base: main
Conversation
Commits:
- Rename MetricsReporter and MetricsReport
- Remove unnecessary stuff
- Be explicit about spawning
Signed-off-by: Jannik Steinmann <[email protected]>
At this point, the DeleteFileIndex is only used by the scan module. I don't think it will be needed by other modules in the future, so maybe it's a good opportunity to move it there instead?
Makes sense. If I recall, when delete_file_index.rs was added, the scan module didn't exist as a module and was just the scan.rs file. If scan had already been a module at that time, delete_file_index.rs would probably have been created in there.
metrics,
metadata,
} => {
    info!(
I don't think it's a good idea to use debug-formatted values here. I was struggling a lot with the tracing API, and this is the best I could come up with so far. I didn't really want to serialize the struct into JSON, nor did I know how to implement fmt::Display for these values in a way that makes sense across tracing subscribers. Any suggestions welcome!
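On the Display question: one std-only option is to implement fmt::Display on the report type so that every subscriber sees the same flat key=value rendering. A minimal sketch, assuming hypothetical field names (the PR's actual report fields may differ):

```rust
use std::fmt;
use std::time::Duration;

// Hypothetical report type; field names are illustrative, not the PR's.
struct ScanReport {
    table: String,
    total_planning_duration: Duration,
    result_data_files: u64,
}

// A flat key=value rendering reads the same across tracing subscribers,
// avoiding both Debug formatting and JSON serialization.
impl fmt::Display for ScanReport {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(
            f,
            "table={} planning_duration_ms={} result_data_files={}",
            self.table,
            self.total_planning_duration.as_millis(),
            self.result_data_files
        )
    }
}
```

With a Display impl in place, the value can be recorded as a tracing field with the `%` sigil rather than the debug `?` sigil.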
I could also use some feedback about the degree to which we want to mimic the Java implementation's log records.
@@ -90,6 +90,7 @@ typed-builder = { workspace = true }
url = { workspace = true }
uuid = { workspace = true }
zstd = { workspace = true }
tracing = { workspace = true }
So far, tracing was only used in tests. As far as I could find, the LoggingMetricsReporter is the first use of any logging in the iceberg crate. I'm not entirely sure whether it's a good idea to include it and commit to a specific logging crate. tracing seems reasonably standard and compatible with other crates, though. I'd also like to include some default reporter; the Java implementation comes with its LoggingMetricsReporter.java based on SLF4J. I've also run into some issues using the tracing crate (as outlined in this comment), but they can probably be worked around and shouldn't be a deciding factor.
Regarding the choice of tracing vs other logging crates, you might want to read through the discussion here if you haven't already. TL;DR: tracing is the logging facade that many of us previously settled on.
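Whatever facade is chosen, the reporter abstraction itself can stay logging-agnostic. A sketch of one possible trait shape, with a recording implementation useful in tests (names are assumptions, not the PR's actual API):

```rust
use std::sync::{Arc, Mutex};

// Hypothetical trait shape; the PR's MetricsReporter API may differ.
trait MetricsReporter: Send + Sync {
    fn report(&self, report: String);
}

// A test double that records reports instead of logging them; a
// LoggingMetricsReporter would emit through tracing (or another facade)
// in report() instead.
#[derive(Default)]
struct RecordingReporter {
    reports: Arc<Mutex<Vec<String>>>,
}

impl MetricsReporter for RecordingReporter {
    fn report(&self, report: String) {
        self.reports.lock().unwrap().push(report);
    }
}
```

Keeping the facade behind a trait like this would let downstream users swap in their own reporter even if the default one commits to tracing.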
@@ -186,16 +187,25 @@ impl PlanContext {
     tx_data: Sender<ManifestEntryContext>,
     delete_file_idx: DeleteFileIndex,
     delete_file_tx: Sender<ManifestEntryContext>,
-) -> Result<Box<impl Iterator<Item = Result<ManifestFileContext>> + 'static>> {
+) -> Result<(Vec<Result<ManifestFileContext>>, ManifestMetrics)> {
Since we were returning a vector anyway and this function was only called in a single place, I took the liberty of changing the return type. This somewhat simplified passing the result to a spawned thread, because Vec is Send + Sync when its items are. I've also extended the TODO comment below for future reference, because I've added another obstacle to simply using an iterator here: the ManifestMetrics are now continuously mutated in the loop. If we used an iterator instead, we couldn't as easily (I think) pass around the mutable reference.
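The Send point can be illustrated with plain std threads: a collected Vec moves into a spawned thread without extra bound gymnastics, whereas an opaque `impl Iterator` return type would not advertise Send as readily. A toy sketch, not the PR's actual types:

```rust
use std::thread;

// Vec<T> is Send when T is Send, so collected results can be moved into
// a worker thread wholesale. Result<u64, String> stands in for the PR's
// Result<ManifestFileContext>.
fn process_in_background(results: Vec<Result<u64, String>>) -> thread::JoinHandle<u64> {
    thread::spawn(move || results.into_iter().filter_map(|r| r.ok()).sum())
}
```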
Because I had to thread the metrics building throughout the planning stage, I heavily refactored this file to make room for it.
    delete_file_tx,
    result_tx.clone(),
);
delete_manifests_handle.await;
This one is worth pointing out: during refactoring, I tried to be explicit about where threads are spawned and which JoinHandles we await (vs. ignore). This line corresponds to crates/iceberg/src/scan/mod.rs, line 404 in 96ec4d5 (`.await;`).
I left it here for consistency, but at this point I believe it is unintended. The other spawned threads are simply pushed into the background without ever checking their completion (which maybe isn't ideal either, e.g. in case they panic). This thread, however, is awaited, and so IIUC we block until all delete manifest entries are processed.
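The distinction shows up the same way with std threads as with tokio tasks: dropping a JoinHandle detaches the work, while joining blocks and surfaces panics. A minimal illustration:

```rust
use std::thread;

// Detached: the handle is dropped, so completion (or a panic) is never
// observed by the spawner.
fn spawn_detached() {
    let _ = thread::spawn(|| { /* background work */ });
}

// Joined: join() blocks until the worker finishes and propagates its
// panic to the caller, analogous to `handle.await` on a tokio task.
fn spawn_and_join() -> u64 {
    let handle = thread::spawn(|| 21 * 2);
    handle.join().expect("worker panicked")
}
```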
crates/iceberg/src/scan/metrics.rs
Outdated
Because the planning stage is parallelized and multiple threads are responsible for fetching manifest files and evaluating their contents, building the metrics reports needs to fit into this framework.
The approach I came up with: each thread that processes manifest entries is spawned alongside a neighboring thread responsible for metrics aggregation. The processing thread typically iterates over a stream (e.g. of manifest entries) and sends metrics updates to the neighboring thread. The neighboring thread accumulates the stream of metrics updates and returns the finished result wrapped in its JoinHandle.
My previous attempt used channels for metrics updates (and no separate threads), but this coupled plan file processing to metrics reporting: I had to iterate over the metrics receivers somewhere to aggregate their updates, and had to do this for multiple processors sequentially. This meant that one processor could be blocked while we were still iterating over another processor's metrics updates and its metrics channel buffer was exhausted.
This approach decouples plan file processing from metrics submission.
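The aggregator pattern described above can be sketched with std channels and threads (the PR presumably uses tokio equivalents; type and field names here are illustrative):

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical per-batch metrics update sent by a processing thread.
struct MetricsUpdate {
    scanned_entries: u64,
    skipped_entries: u64,
}

#[derive(Debug, Default, PartialEq)]
struct ManifestMetrics {
    scanned_entries: u64,
    skipped_entries: u64,
}

// The aggregator runs next to the processor, folds the stream of updates,
// and hands the finished metrics back through its JoinHandle once every
// sender has been dropped and the channel closes.
fn spawn_aggregator(rx: mpsc::Receiver<MetricsUpdate>) -> thread::JoinHandle<ManifestMetrics> {
    thread::spawn(move || {
        let mut metrics = ManifestMetrics::default();
        for update in rx {
            metrics.scanned_entries += update.scanned_entries;
            metrics.skipped_entries += update.skipped_entries;
        }
        metrics
    })
}
```

Because the processor only ever sends into its own channel, it is never blocked on another processor's aggregation, which is the decoupling the comment describes.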
/// Carries all metrics for a particular scan.
#[derive(Debug)]
pub(crate) struct ScanMetrics {
Note that the Java implementation uses special types for the metrics (e.g. TimerResult.java and CounterResult.java). They include a value and a unit, but I felt that the ScanMetrics field names and their types together convey everything we need. The RestMetricReporter will need to emit reports that follow this format, but I omitted it from the general-purpose ScanMetrics.
Happy for any feedback!
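The contrast with Java's wrapper types can be sketched as plain fields whose names and types carry the unit, with any TimerResult/CounterResult shape produced only at the REST boundary. Field names below are illustrative, not the PR's actual ones:

```rust
use std::time::Duration;

// Plain-field approach: Duration encodes the time unit and u64 the count,
// so no TimerResult/CounterResult wrappers are needed internally. A REST
// reporter can still translate these fields into the spec's report format.
#[derive(Debug, Default)]
struct ScanMetrics {
    total_planning_duration: Duration,
    result_data_files: u64,
    skipped_data_manifests: u64,
}
```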
Signed-off-by: Jannik Steinmann <[email protected]>
Thanks for this contribution - I can see that you've put a lot of work into this and it is appreciated! I do have some concerns with this approach, though. I think that by mimicking the pre-existing Java metrics approach so closely, we miss out on a lot of the advantages that come from the kinds of approaches we commonly see in modern idiomatic Rust codebases, which were not as widely available when the Java Iceberg metrics subsystem was originally created. Also, I have another PR open that proposes quite a large refactor to the file plan code.
I put something together off the back of my earlier comment and opened a draft PR. Here's the relevant commit: 2a02e55
This means we can share references via TableScan::column_names().
Signed-off-by: Jannik Steinmann <[email protected]>
Thanks a lot for chiming in @sdd! At the same time, the Metrics API seems to be part of the catalog spec, and I wonder whether it shouldn't be implemented regardless.
Sounds good on the metrics naming convention - some cross-implementation standardisation would be great, especially with pyiceberg looking to use iceberg-rust in the core. Re the metrics / reporting API from the spec: I was thinking about this last night, and I'm wondering if it would be feasible to implement it using either a custom tracing subscriber, a metrics reporter, or a combination of both. That way the reports would plug into the same instrumentation as any other means of observability. I've not looked into this properly yet, but it would be interesting to explore!
I've created apache/iceberg-go#485 and apache/iceberg-python#474 (comment) for Go and Python respectively.
I see the biggest problem in the way the spec's API bundles multiple metric values into a single report. To implement this using a
Yes, you're right, I was thinking that the traces exported by tracing would be more suitable for this than metrics. Not sure why I even suggested metrics now 😅
I'm in the process of adding an integration test that showcases how to export traces via OTEL OTLP to Jaeger running in a container alongside the rest of the integration test containers. You get a trace like this: [screenshot of a Jaeger trace]. I'm getting these changes into better shape and looking to include them in a commit on my PR, #1486, soon - just thought I'd share progress :-)
Thanks for sharing! This is great progress 👏 I also thought a bit more about retrofitting the Metrics Reporting API. It's probably not too hard to write a custom tracing exporter that extracts span attributes for report building. But in addition to that, I would like to see independently published metrics too. From my quick research, it doesn't seem uniformly possible to derive values from spans and republish them as metrics - at least not in a way that we could control from within Iceberg.
Which issue does this PR close?
What changes are included in this PR?
As mentioned in the issue description, this PR adds an implementation for the Iceberg Metrics Reporting API.
I'll follow up with a more thorough description of its changes.
Are these changes tested?
Yes, with the attached unit test and an example main that's more adjacent to an integration test. The example is available on this branch.