
feature: Add environment variable controlling the log grooming frequency #46237

Open
wants to merge 3 commits into `main`
Conversation

stefankeidel

We recently ran into an issue where the log-grooming sidecar containers in our Airflow Kubernetes pods were incurring substantial transaction costs. That's because the log-grooming process runs on a fixed schedule that is far too frequent for some use cases. We've worked around it for now by overriding the cleanup shell script in our custom Docker image, but I figured I'd contribute a feature upstream that makes this configurable.

Let me know if there's anything I missed!
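For illustration, here is a minimal sketch of what a configurable grooming interval could look like in the sidecar's shell script. The variable names `AIRFLOW__LOG_RETENTION_DAYS` and `AIRFLOW__LOG_CLEANUP_FREQUENCY_MINUTES` and the defaults are assumptions for this sketch, not necessarily what this PR ships, and a single grooming pass is shown instead of the sidecar's usual endless loop:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Assumed variable names and defaults for this sketch only;
# unset them first so the demo deterministically uses the defaults.
unset AIRFLOW__LOG_RETENTION_DAYS AIRFLOW__LOG_CLEANUP_FREQUENCY_MINUTES
RETENTION_DAYS="${AIRFLOW__LOG_RETENTION_DAYS:-15}"
FREQUENCY_MINUTES="${AIRFLOW__LOG_CLEANUP_FREQUENCY_MINUTES:-15}"
SLEEP_SECONDS=$((FREQUENCY_MINUTES * 60))

# Stand-in log directory so the sketch is runnable anywhere.
LOG_DIR="$(mktemp -d)"
touch "${LOG_DIR}/fresh.log"
touch -d "20 days ago" "${LOG_DIR}/stale.log"   # GNU touch

# One grooming pass: delete files older than the retention window.
# The real sidecar would repeat this, sleeping SLEEP_SECONDS between
# passes instead of a hard-coded 15 minutes.
find "${LOG_DIR}" -type f -mtime "+${RETENTION_DAYS}" -delete

ls "${LOG_DIR}"
echo "next pass in ${SLEEP_SECONDS}s"
```

With the assumed defaults this keeps `fresh.log`, deletes `stale.log`, and sleeps 900 seconds between passes; operators who only need hourly grooming would set the frequency variable to 60.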




boring-cyborg bot commented Jan 29, 2025

Congratulations on your first Pull Request and welcome to the Apache Airflow community! If you have any issues or are unsure about anything, please check our Contributors' Guide (https://github.com/apache/airflow/blob/main/contributing-docs/README.rst)
Here are some useful points:

  • Pay attention to the quality of your code (ruff, mypy and type annotations). Our pre-commits will help you with that.
  • In case of a new feature, add useful documentation (in docstrings or in the docs/ directory). Adding a new operator? Check this short guide. Consider adding an example DAG that shows how users should use it.
  • Consider using the Breeze environment for testing locally. It's a heavy Docker setup, but it ships with a working Airflow and a lot of integrations.
  • Be patient and persistent. It might take some time to get a review or the final approval from Committers.
  • Please follow the ASF Code of Conduct for all communication, including (but not limited to) comments on Pull Requests, the Mailing list and Slack.
  • Be sure to read the Airflow Coding style.
  • Always keep your Pull Requests rebased; otherwise your build might fail due to changes not related to your commits.

Apache Airflow is a community-driven project and together we are making it better 🚀.
In case of doubts contact the developers at:
Mailing List: [email protected]
Slack: https://s.apache.org/airflow-slack

@stefankeidel (Author)

I'm not entirely sure how the automation that propagates this change into the Dockerfile works, but I hope it's handled by a CI/CD action. If there's anything else I need to run, please let me know :)

@nevcohen (Contributor)

Hi, looks good!

I would love to understand why it makes a difference if it runs every 15 minutes or every hour (for example)?

How does this affect the worker performance?

@stefankeidel (Author)

stefankeidel commented Jan 30, 2025

> Hi, looks good!
>
> I would love to understand why it makes a difference if it runs every 15 minutes or every hour (for example)?
>
> How does this affect the worker performance?

Thanks for taking a look!

I personally wouldn't think it'd affect worker performance at all, since all the workers do is write to that file system. If anything, I could see it improving, because the volumes aren't busy listing files all the time. But then again, I'm not super familiar with all the different setups Airflow can run in.

In our case, extensive pruning on that shared volume was a) a major performance hog (we're using Azure Blob Storage as the backend, which is fairly slow at a recursive find over last-modified times) and b) a major cost factor (ListBlob transactions get pricey). And our Airflow setup isn't that large, maybe ~100 DAGs or so.
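To put rough numbers on why the frequency matters (the file count below is purely an assumption for illustration): every grooming pass has to stat each file for its mtime, so on a blob-backed volume the daily transaction count scales linearly with how often the pass runs:

```shell
FILES=50000                       # assumed number of log files on the shared volume
PASSES_FIXED=$((24 * 60 / 15))    # current fixed schedule: a pass every 15 minutes
PASSES_HOURLY=24                  # hypothetical hourly schedule

echo "fixed:  $((FILES * PASSES_FIXED)) stat/list operations per day"
echo "hourly: $((FILES * PASSES_HOURLY)) stat/list operations per day"
```

Under these assumed numbers, dropping from a 15-minute to an hourly schedule cuts the daily stat/list volume by a factor of four, which is exactly the kind of knob this PR exposes.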

It seems I broke one of the helm chart unit tests in the process. I'll try to deal with that this morning, but I need a working local setup first :)

@nevcohen (Contributor)

> > Hi, looks good!
> >
> > I would love to understand why it makes a difference if it runs every 15 minutes or every hour (for example)?
> >
> > How does this affect the worker performance?
>
> Thanks for taking a look!
>
> I personally wouldn't think it'd affect worker performance at all, since all they do is write to that file system. If anything it could see it improving because the volumes are not busy listing files all the time. But then again, I'm not super familiar with all the different types of setups Airflow can be run in.
>
> In our case, extensive pruning on that shared volume was a) a major performance hog - we're dealing with Azure Blob Storage as backend, which is fairly slow to do a recursive find looking for last modified times and b) a major cost factor (ListBlob transactions get pricey). And our Airflow setup is not that large, maybe ~100 DAGs or so.
>
> I broke one of the helm chart unit tests in the process here it seems. Will be attempting to deal with that this morning, but I need a working local setup first :)

Got it, thanks! I'd be happy to do a code review for you :)
