Address comments
cici37 committed Feb 5, 2025
1 parent 88eff69 commit 06bdc84
Showing 1 changed file with 13 additions and 4 deletions.
keps/sig-api-machinery/5080-ordered-namespace-deletion/README.md
@@ -10,6 +10,7 @@
- [User Stories (Optional)](#user-stories-optional)
- [Story 1 - Pod VS NetworkPolicy](#story-1---pod-vs-networkpolicy)
- [Story 2 - having finalizer conflicts with deletion order](#story-2---having-finalizer-conflicts-with-deletion-order)
- [Story 3 - having ValidatingAdmissionPolicy set up with parameter resources](#story-3---having-validatingadmissionpolicy-set-up-with-parameter-resources)
- [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional)
- [Having ownerReference conflicts with deletion order](#having-ownerreference-conflicts-with-deletion-order)
- [Risks and Mitigations](#risks-and-mitigations)
@@ -154,17 +155,16 @@ Option 1: have an int value assigned for resources to indicate the deletion priority
Option 2: have the deletion priority bands defined instead of using numbers.

To begin with, the following deletion order bands would be introduced:
- Workloads
- Default
- Policies

Resources in those deletion order bands will be deleted in sequence. E.g.

```
{Resource: "/pods", DeletionPriority: "workloads"}
{Resource: "networking.k8s.io/networkpolicies", DeletionPriority: "policies"}
{Resource: "apps/deployments", DeletionPriority: "workloads"}
...
```
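
For illustration, here is a minimal Go sketch of the band-by-band sequencing described above. The type names, band constants, and `deleteInOrder` helper are hypothetical placeholders, not the actual namespace lifecycle controller code:

```go
// Hypothetical sketch of band-ordered deletion during namespace termination.
// Band names and helpers are illustrative, not the real controller API.
package main

import "fmt"

// DeletionPriority is the band a resource is deleted in.
type DeletionPriority string

const (
	Workloads DeletionPriority = "workloads"
	Default   DeletionPriority = "default"
	Policies  DeletionPriority = "policies"
)

// bandOrder lists the bands in the sequence they are processed.
var bandOrder = []DeletionPriority{Workloads, Default, Policies}

type Resource struct {
	GVR      string
	Priority DeletionPriority
}

// deleteInOrder deletes everything in one band before moving to the next.
func deleteInOrder(resources []Resource) {
	byBand := map[DeletionPriority][]Resource{}
	for _, r := range resources {
		byBand[r.Priority] = append(byBand[r.Priority], r)
	}
	for _, band := range bandOrder {
		for _, r := range byBand[band] {
			fmt.Printf("deleting %s (band %s)\n", r.GVR, band)
			// real code would issue the DELETE calls and wait for the band
			// to be fully removed before continuing
		}
	}
}

func main() {
	deleteInOrder([]Resource{
		{GVR: "/pods", Priority: Workloads},
		{GVR: "networking.k8s.io/networkpolicies", Priority: Policies},
		{GVR: "apps/deployments", Priority: Workloads},
	})
}
```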

@@ -199,6 +199,16 @@ it will cause dependency loops and block the deletion process.

Refer to the section `Handling Cyclic Dependencies`.

#### Story 3 - having ValidatingAdmissionPolicy set up with parameter resources

When ValidatingAdmissionPolicy is used in the cluster with parameterization, there is a high chance that the VAP falls into the `policies` deletion band
while its parameter resources fall into the `workloads` or `default` band. In this case, the parameter resources will be deleted before the VAP,
rendering the VAP non-functional. To make it worse, if the ValidatingAdmissionPolicyBinding is configured with `.spec.paramRef.parameterNotFoundAction: Deny`,
it could block certain resource operations and also hang the termination process.

This is an existing issue with the random namespace deletion order as well. As long as we do not plan to build a dependency graph, handling this case will rely more on
best practices and users' configuration.
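
To make the conflict concrete, below is a hedged Go sketch of the binding configuration described above, using the `admissionregistration/v1` types; the object names (`demo-policy`, `demo-params`, `team-a`) are invented for the example and the exact configuration is an assumption, not a prescription. Once the parameter object in the terminating namespace is deleted ahead of the policy, requests matched by the policy start failing closed:

```go
// Hypothetical illustration of the Story 3 configuration: a
// ValidatingAdmissionPolicyBinding whose parameter lookup is set to Deny
// when the referenced parameter resource is missing. All names are made up.
package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Fail closed when the parameter object cannot be found.
	deny := admissionregistrationv1.ParameterNotFoundActionType("Deny")

	binding := admissionregistrationv1.ValidatingAdmissionPolicyBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-policy-binding"},
		Spec: admissionregistrationv1.ValidatingAdmissionPolicyBindingSpec{
			PolicyName: "demo-policy",
			ParamRef: &admissionregistrationv1.ParamRef{
				// The parameter object lives in the namespace being deleted;
				// once it is gone, matched requests are denied and the
				// namespace termination can stall.
				Name:                    "demo-params",
				Namespace:               "team-a",
				ParameterNotFoundAction: &deny,
			},
			ValidationActions: []admissionregistrationv1.ValidationAction{"Deny"},
		},
	}

	fmt.Printf("parameterNotFoundAction: %s\n", *binding.Spec.ParamRef.ParameterNotFoundAction)
}
```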

### Notes/Constraints/Caveats (Optional)

#### Having ownerReference conflicts with deletion order
@@ -248,7 +258,6 @@ We could possibly introduce a way to let individual instances be able to specify
### DeletionOrderPriority Mechanism

For the namespace deletion process, we would like the resources associated with the namespace to be deleted in the following order:
- Workloads
- Default
- Policies
