
Helm Chart Best Practices #119


Enhancement

I was testing Karpor for the first time and noticed that the chart, at least, doesn't follow many of the common best practices around Helm charts. Although I managed to use it, I think it is missing some useful configuration. Maybe some of this is mismanaged on my side, but I couldn't find any easy out-of-the-box configuration in the chart values.

Problems I found so far:

Forced to use HTTPS on the container side

It is obviously great to have this implemented for many scenarios, but many of us would prefer to use our own auth or certificates managed by the ingress, because that is something we as SREs are used to handling on most deployments. So I would add the possibility to access the service via plain HTTP; this way we can handle all the security and access ourselves without adding overhead. I would add a value to enable/disable this.
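
For illustration, a minimal sketch of what such a toggle could look like in the chart values (the server.enableHttps key is an assumption, not an existing chart option):

server:
  # -- (assumed key) Serve plain HTTP inside the cluster and let the
  # ingress or an external proxy terminate TLS.
  enableHttps: false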

Lack of ingress configuration

Many charts ship a disabled-by-default ingress so it can be configured directly from the same chart values. This would be an interesting addition and is pretty much a Helm chart standard. I don't actually use this feature for complex scenarios, but it is a nice addition.
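
As a reference, this is the usual disabled-by-default ingress block many charts expose (the key names below are the common convention, not something the Karpor chart currently has; the host is a placeholder):

ingress:
  enabled: false
  className: ""
  annotations: {}
  hosts:
    - host: karpor.example.com   # placeholder host
      paths:
        - path: /
          pathType: Prefix
  tls: []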

Namespace and resource naming

There is a convention in the chart community to address this:

  • {{ release name }}-{{ resource name }}, i.e. prefixing resource names with the release name, which helps with visibility and compatibility when you deploy multiple instances in the same namespace/cluster.
  • The same applies when defining the namespace: you provide one when creating/updating/templating a release. In the Karpor chart values it is hardcoded, and it would be nice if it used the one provided with Helm. The namespaceEnabled setting is also something that should be handled by Helm itself. A sketch of both conventions follows after this list.
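
A minimal sketch of the usual helpers, assuming a standard _helpers.tpl (the helper name and the "-server" suffix are illustrative, not taken from the Karpor chart):

{{/* _helpers.tpl: release-name-prefixed fullname helper */}}
{{- define "karpor.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}

# In a template, name and namespace then come from the release, not from values:
metadata:
  name: {{ include "karpor.fullname" . }}-server
  namespace: {{ .Release.Namespace }}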

ArgoCD and job resyncing

The tool I used to deploy this is ArgoCD. I have not had any issue deploying with it, but using a plain Job forces ArgoCD to resync it every time (if I am not misunderstanding, this is because ArgoCD doesn't like ephemeral resources). This step could be implemented as an init container instead of a Job. It is not a huge problem because it works, but it triggered alerts the whole time.
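
A rough sketch of the init container idea, assuming the one-off step can simply run before the server starts (names, image and command are placeholders, not the chart's actual values):

# Server Deployment spec with the one-off setup moved into an init container
# instead of a standalone Job.
spec:
  template:
    spec:
      initContainers:
        - name: setup                       # placeholder name
          image: karpor-server:placeholder  # placeholder, could reuse the server image
          command: ["sh", "-c", "echo 'run the one-off setup step here'"]
      containers:
        - name: karpor-server
          image: karpor-server:placeholder  # placeholder image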

RBAC for self cluster monitoring

It would be nice to add RBAC that automatically handles monitoring of the cluster Karpor itself runs in, so it works out of the box in the same cluster by default (with an enable/disable variable). I'm not sure if this is already addressed, but I had to configure my cluster manually; maybe that is a misconfiguration on my side, so I'm adding my values file below. A sketch of what such an opt-in RBAC could look like follows after the values file.

namespaceEnabled: false

# Configuration for Karpor server
server:
  resources:
    requests: {}
  enableRbac: false

# Configuration for Karpor syncer
syncer:
  # -- Resource limits and requests for the karpor syncer pods.
  resources:
    requests: {}

# Configuration for ElasticSearch
elasticsearch:
  # -- Resource limits and requests for the karpor elasticsearch pods.
  resources:
    requests: {}

# Configuration for ETCD
etcd:
  # -- Resource limits and requests for the karpor etcd pods.
  resources:
    requests: {}
  persistence:
    enabled: true
    # -- Size of etcd persistent volume
    size: 10Gi
    # -- Volume access mode, ReadWriteOnce means single node read-write access
    accessModes:
      - ReadWriteOnce

Chart version used:

  - name: karpor
    version: 0.7.6 # Chart version
    repository: https://kusionstack.github.io/charts/
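
For completeness, a sketch of what an opt-in self-monitoring RBAC could look like, gated on the existing (or an assumed) enableRbac flag; the ClusterRole rules below are only an illustration of read-only access, not the actual permissions Karpor needs:

{{- if .Values.server.enableRbac }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Release.Name }}-karpor-self-monitoring
rules:
  # Illustrative read-only access; the real rule set depends on what Karpor syncs.
  - apiGroups: ["", "apps", "batch"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ .Release.Name }}-karpor-self-monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ .Release.Name }}-karpor-self-monitoring
subjects:
  - kind: ServiceAccount
    name: {{ .Release.Name }}-karpor-server   # assumed service account name
    namespace: {{ .Release.Namespace }}
{{- end }}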
