Implement async export for Databricks trace export and make it default #15163
base: master
Conversation
Signed-off-by: B-Step62 <[email protected]>
Pull Request Overview
This PR implements asynchronous Databricks trace export as the default behavior, with supporting changes in environment configuration, export queue implementation, and unit tests.
- Updated tests to validate both async and sync export behavior.
- Created a new AsyncTraceExportQueue with configurable parameters for queue size and worker count (see the sketch after this list).
- Updated REST utilities and Databricks exporter to support retries with a configurable timeout.
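For context, here is a minimal sketch of what a bounded-queue-plus-worker-pool exporter along these lines could look like. The constructor parameters, method names, and drop-on-full behavior below are illustrative assumptions, not the PR's actual implementation:

```python
# Hedged sketch of an async trace export queue: a bounded queue drained by a
# small pool of daemon worker threads. Names such as max_queue_size,
# num_workers, put, and shutdown are assumptions for illustration.
import logging
import queue
import threading

_logger = logging.getLogger(__name__)


class AsyncTraceExportQueue:
    def __init__(self, max_queue_size: int = 1000, num_workers: int = 1):
        self._queue: queue.Queue = queue.Queue(maxsize=max_queue_size)
        self._stop_event = threading.Event()
        self._workers = [
            threading.Thread(target=self._worker_loop, daemon=True)
            for _ in range(num_workers)
        ]
        for worker in self._workers:
            worker.start()

    def put(self, task) -> None:
        # Never block the traced application: drop the task if the queue is full.
        try:
            self._queue.put_nowait(task)
        except queue.Full:
            _logger.warning("Trace export queue is full; dropping trace.")

    def _worker_loop(self) -> None:
        # Keep draining until a stop is requested AND the queue is empty.
        while not self._stop_event.is_set() or not self._queue.empty():
            try:
                task = self._queue.get(timeout=0.5)
            except queue.Empty:
                continue
            try:
                task()  # each task is a callable performing one export
            except Exception:
                _logger.exception("Failed to export trace.")
            finally:
                self._queue.task_done()

    def shutdown(self) -> None:
        # Signal workers to stop, then wait for them to finish draining.
        self._stop_event.set()
        for worker in self._workers:
            worker.join()
```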
Reviewed Changes
Copilot reviewed 8 out of 9 changed files in this pull request and generated no comments.
File | Description |
---|---|
tests/tracing/export/test_mlflow_exporter.py | Removed async queue tests to align with the new async implementation. |
tests/tracing/export/test_databricks_exporter.py | Added async/sync parametrization and environment variable setups. |
tests/tracing/export/test_async_export_queue.py | Introduced tests for the new AsyncTraceExportQueue implementation. |
mlflow/utils/rest_utils.py | Added support for a retry_timeout_seconds parameter in HTTP requests (illustrated below the table). |
mlflow/tracing/export/mlflow.py | Updated to initialize the async export queue without passing a client. |
mlflow/tracing/export/databricks.py | Configured the exporter to default to async mode with a retry timeout. |
mlflow/tracing/export/async_export_queue.py | New async queue implementation using a bounded queue and worker pool. |
mlflow/environment_variables.py | Added new environment variables for async logging configurations. |
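The retry_timeout_seconds change in mlflow/utils/rest_utils.py bounds retries by a total time budget rather than a fixed attempt count. A hedged, self-contained sketch of that idea follows; the function name, arguments, and defaults are assumptions, not the PR's actual signature:

```python
# Illustrative retry loop bounded by a total deadline, in the spirit of the
# new retry_timeout_seconds parameter. Names and defaults are assumptions.
import time

import requests


def request_with_retry_timeout(
    url: str, retry_timeout_seconds: float = 3.0, backoff_seconds: float = 0.5
) -> requests.Response:
    deadline = time.monotonic() + retry_timeout_seconds
    while True:
        try:
            response = requests.get(url, timeout=retry_timeout_seconds)
            response.raise_for_status()
            return response
        except requests.RequestException:
            if time.monotonic() >= deadline:
                raise  # retry budget exhausted; surface the last error
            time.sleep(backoff_seconds)
```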
Files not reviewed (1)
- docs/docs/tracing/api/how-to.mdx: Language not supported
Comments suppressed due to low confidence (2)
tests/tracing/export/test_databricks_exporter.py:170
- The environment variable 'MLFLOW_ASYNC_TRACE_LOGGING_RETRY_TIMEOUT' is set twice consecutively. Removing the duplicate line will improve clarity and avoid potential confusion.
monkeypatch.setenv("MLFLOW_ASYNC_TRACE_LOGGING_RETRY_TIMEOUT", "3")
mlflow/tracing/export/mlflow.py:45
- Previously, AsyncTraceExportQueue was initialized with a client argument. If the client functionality is still required for proper trace export handling, consider either reintroducing the parameter or updating the queue implementation to remove the dependency on the client.
self._async_queue = AsyncTraceExportQueue()
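One way the client dependency could have been removed is by making the queue generic and having each enqueued task close over whatever it needs. A purely hypothetical illustration (the task shape and export_trace call below are assumptions):

```python
# Hypothetical call site: the queue holds no client; each task is a
# self-contained callable that captures its own exporter and trace.
async_queue = AsyncTraceExportQueue()
async_queue.put(lambda: exporter.export_trace(trace))  # exporter/trace captured here
```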
Documentation preview for 7ae9ba4 will be available when this CircleCI job completes.
while not self._stop_event.is_set():
    self._dispatch_task()
Is this clause necessary? It seems like even when stopped, we drain the queue.
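For reference, the pattern the comment seems to be pointing at: the loop exits once stop is set, and a separate drain step handles anything still buffered. A hedged sketch, where _drain_queue is an assumed helper rather than the PR's exact structure:

```python
# Hedged sketch of the stop/drain interplay discussed above; _drain_queue is
# an assumption, not necessarily the PR's actual code.
def _consumer_loop(self) -> None:
    # Dispatch tasks until shutdown is requested...
    while not self._stop_event.is_set():
        self._dispatch_task()
    # ...then drain what remains, so tasks enqueued before shutdown are
    # still exported even though the loop above has exited.
    self._drain_queue()
```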
self._is_async = True
if MLFLOW_ENABLE_ASYNC_LOGGING.is_set():
    self._is_async = MLFLOW_ENABLE_ASYNC_LOGGING.get()
Should we document this env var above as well?
If so, then maybe we can just add it to environment_variables.py with a default value of True.
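If it were added there, the declaration might look like the following; this assumes the _BooleanEnvironmentVariable helper that mlflow/environment_variables.py uses for boolean variables, with the True default suggested above:

```python
# Hedged sketch of declaring the variable in mlflow/environment_variables.py,
# assuming the module's _BooleanEnvironmentVariable helper and a True default.
MLFLOW_ENABLE_ASYNC_LOGGING = _BooleanEnvironmentVariable(
    "MLFLOW_ENABLE_ASYNC_LOGGING", True
)
```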
lgtm with a couple nits!
What changes are proposed in this pull request?
Enable async trace logging when exporting to the Databricks trace server. This will be the default behavior; users can opt out by explicitly setting MLFLOW_ENABLE_ASYNC_LOGGING to False.
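As a usage note, opting out could look like the following sketch; the accepted value casing is an assumption:

```python
# Disable async trace export by setting the env var before traces are logged.
import os

os.environ["MLFLOW_ENABLE_ASYNC_LOGGING"] = "false"
```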
How is this PR tested?

Does this PR require documentation update?
I will add more documentation about Databricks external monitoring to https://www.mlflow.org/docs/latest/tracing/production in follow-up.
Release Notes
Is this a user-facing change?
Support async trace export to Databricks and make it the default behavior.
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/artifacts: Artifact stores and artifact logging
- area/build: Build and test infrastructure for MLflow
- area/deployments: MLflow Deployments client APIs, server, and third-party Deployments integrations
- area/docs: MLflow documentation pages
- area/examples: Example code
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- area/projects: MLproject format, project running backends
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/server-infra: MLflow Tracking server backend
- area/tracking: Tracking Service, tracking client APIs, autologging

Interface
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
- area/windows: Windows support

Language
- language/r: R APIs and clients
- language/java: Java APIs and clients
- language/new: Proposals for new client languages

Integrations
- integrations/azure: Azure and Azure ML integrations
- integrations/sagemaker: SageMaker integrations
- integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
- Yes should be selected for bug fixes, documentation updates, and other small changes.
- No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.