don't re-triage all issues annually #32957
Comments
I agree. I love the re-triage effort, but if someone has taken the (rather extraordinary) step of freezing an issue, it's probably not the sort of thing that we need to re-triage. I think we have a signal-to-noise problem in a few places, and this is one of them.
+100. We get a batch of these, time-warped in from the past, each week. The frozen ones just create noise.
The idea of retriaging was to make sure the issue is not obsolete, and to reprioritize as needed. If it's creating too much noise as is, maybe we should consider increasing the interval rather than removing it entirely?
But frozen issues are intentionally opted into long-term tracking to prevent auto-close. If people want to opt in to checking whether old issues are still valid, that's great, but automatically re-triaging frozen issues seems excessive. We shouldn't be closing frozen issues without high confidence, and unless they result in a valid closure these comments are just noise that buries the real, non-automated discussion. GitHub does not, and never has, handled issues with a large volume of comments well; annually adding at least two comments with no information beyond "yes, we didn't decide to close it again" seems pointless. If we want to re-validate issues, we can look through older frozen issues without robot comments; anyone can just query for them.

EDIT: Leaving them open has very little cost and almost no downsides. We're not really running into any situation where the number of open issues is a constraint ...?
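For illustration, a sketch of the kind of GitHub issue search meant here (the exact qualifiers and the bot account name are assumptions on my part, not a filter anyone has committed to):

```
# Roughly: open, frozen issues in k/k that the triage robot has not commented on
repo:kubernetes/kubernetes is:issue is:open label:lifecycle/frozen -commenter:k8s-triage-robot
```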
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale

cc @kubernetes/sig-contributor-experience (previously raised in Slack)
Hi @kubernetes/sig-contributor-experience, what would it take to discuss potentially acting on this recommendation? The proposed configuration changes are small, and I think they will incrementally improve signal-to-noise from the triage robot and avoid wasting time.
Per @palnabarun: cc @kubernetes/sig-contributor-experience-leads 😅
+1 from me.
gentle nudge @kubernetes/sig-contributor-experience-leads

I've been participating in SIG API Machinery triage for a while now, and aside from #34255 I've noticed a lot of difficult-to-resolve issues pointlessly getting a yearly robot comment that buries the conversation. These issues are not something we should just close, though, because they do impact users and continue to be rediscovered, and when we just close them we lose the previous efforts to root-cause them.

As a good example: kubernetes/kubernetes#78946 is not trivial to resolve, but it is a subtle bug that users frequently encounter and have to work around. Keeping this sort of issue open and frozen, without re-triage, would save us time to work on the issues we can get to. (There are others linked above.)
+1 from @kubernetes/sig-contributor-experience-leads. As this is a change that affects the whole project, we would like to send out a notification on dev@ to inform people and seek lazy consensus for a week on this topic.
(Apologies for the extremely late response. It just occurred to me, so I thought I'd check / ask for feedback.) Since all the examples we've talked about so far are from the k/k repo, would it work if we disabled the current retriage job [1] for k/k only? This way, we won't change the retriage cycles for other repos.

[1] https://github.com/kubernetes/test-infra/blob/e180e9fe4a2fbfec38f4be10718b773a74405188/config/jobs/kubernetes/sig-k8s-infra/trusted/sig-contribex-k8s-triage-robot.yaml#L605-L752
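(For context on what that could look like mechanically: my understanding is that the job in [1] drives the bot off a GitHub issue search query, so a per-repo carve-out would roughly amount to a query change like the sketch below; the qualifiers here are illustrative assumptions, not the real config.)

```
# Hypothetical: keep the annual retriage everywhere except k/k
org:kubernetes is:issue is:open label:triage/accepted -repo:kubernetes/kubernetes
```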
I'd rather not special-case this per repo. Currently the only deviation we have between orgs/repos is that some repos are opted out of the robot entirely; for repos that haven't disabled the robot, the behavior is pretty uniform. Unless we hear specific objections/concerns, I think we should keep it that way. I haven't seen a counterpoint anywhere yet where it's genuinely helpful to re-triage an issue that is either requesting help or explicitly frozen out of the lifecycle, but maybe we'll learn differently when we forward the proposal to dev@.
@BenTheElder - ack, and thanks for the draft PR and for sending the notification over to dev@.
Ack, and agree with seeking objections/concerns via the mailing list notification. I've gone through the various discussion threads we have on this topic across k/k and k/test-infra, and I'm strongly +1 on a change that would help improve the k/k issue triaging situation.
If it helps, as a Cluster API / controller-runtime maintainer I'm also in favor of this change. For me the current configuration just leads to a lot of unnecessary toil across a lot of repos.
The PR draft is held at #34321. Working on the [email protected] request for feedback now.
dev@ notified here: https://groups.google.com/a/kubernetes.io/g/dev/c/lbBYa4jA6xk
We were discussing in the api-machinery triage meeting that there are some issues fitting a pattern like: help-wanted, lifecycle/frozen, triage/accepted.

For these issues:
- re-triaging them annually doesn't really add information; if anything it buries the actual discussion under bot comments plus /triage accepted comments,
- closing them buries the context on why they haven't been resolved.

I think we should only re-triage issues that are not frozen, or at least issues that are not both frozen and marked help-wanted (IMHO we should skip any frozen issue, but even the narrower option would still help).
Issues that are frozen + help-wanted are a searchable way to say "yes we know about this, but someone will have to step up and solve it, here is the context".
For example: kubernetes/kubernetes#104607
This issue is complicated to fix, but in the meantime it remains confusing and should be documented.
Currently the bot will annually remove triage/accepted from all issues, even ones with this label combination that effectively says "this is known and we need help".
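In search-query terms, the change being proposed is roughly the following sketch (under the assumption that the retriage job keys off the triage/accepted label plus an updated-date cutoff; the real query in the job config has more qualifiers):

```
# Today (roughly): re-triage anything accepted but not updated in the past year
is:issue is:open label:triage/accepted updated:<YYYY-MM-DD
# Proposed (roughly): additionally skip issues that were deliberately frozen
is:issue is:open label:triage/accepted updated:<YYYY-MM-DD -label:lifecycle/frozen
# (YYYY-MM-DD = one year ago; alternatively, only skip issues that are frozen + help-wanted)
```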
I strongly believe closing kubernetes/kubernetes#104607 is unhelpful and further buries this confusing behavior, but I also understand that none of us currently have the time to resolve it. So for now we've just re-triaged it again.
/sig contributor-experience