Cluster Proxy is a pluggable addon for OCM, built on the extensibility provided by addon-framework, which automates the installation of apiserver-network-proxy on both the hub cluster and the managed clusters. The network proxy establishes reverse proxy tunnels from the managed clusters to the hub cluster, so clients on the hub network can access services in the managed clusters' networks even when all clusters are isolated in different VPCs.
Cluster Proxy consists of two components:
- Addon-Manager: Manages the installation of proxy-servers (i.e., proxy ingress) in the hub cluster.
- Addon-Agent: Manages the installation of proxy-agents for each managed cluster.
The overall architecture is shown below:
- Prerequisite: OCM registration (>= 0.5.0)
- Add the helm repo:
$ helm repo add ocm https://openclustermanagement.blob.core.windows.net/releases/
$ helm repo update
$ helm search repo ocm/cluster-proxy
NAME CHART VERSION APP VERSION DESCRIPTION
ocm/cluster-proxy <..> 1.0.0 A Helm chart for Cluster-Proxy
- Install the helm chart:
$ helm install \
-n open-cluster-management-addon --create-namespace \
cluster-proxy ocm/cluster-proxy
$ kubectl -n open-cluster-management-cluster-proxy get pod
NAME READY STATUS RESTARTS AGE
cluster-proxy-5d8db7ddf4-265tm 1/1 Running 0 12s
cluster-proxy-addon-manager-778f6d679f-9pndv 1/1 Running 0 33s
...
- The addon will be automatically installed on your registered clusters. Verify the addon installation:
$ kubectl get managedclusteraddon -A | grep cluster-proxy
NAMESPACE NAME AVAILABLE DEGRADED PROGRESSING
<your cluster> cluster-proxy True
By default, the proxy servers run in gRPC mode, so proxy clients are expected to dial through the tunnels using the konnectivity-client. Konnectivity is the technique underlying Kubernetes' egress-selector feature, and an example of using the konnectivity client is shown below.
In code, proxying to the managed cluster can be accomplished by simply overriding the dialer of the original Kubernetes client config object, e.g.:
import (
    "context"

    "google.golang.org/grpc"
    grpccredentials "google.golang.org/grpc/credentials"
    "k8s.io/client-go/tools/clientcmd"
    konnectivity "sigs.k8s.io/apiserver-network-proxy/konnectivity-client/pkg/client"
)

// instantiate a gRPC proxy dialer through the proxy server's tunnel
// (proxyTLSCfg is a *tls.Config trusted by the proxy servers)
tunnel, err := konnectivity.CreateSingleUseGrpcTunnel(
    context.TODO(),
    <proxy service>,
    grpc.WithTransportCredentials(grpccredentials.NewTLS(proxyTLSCfg)),
)
if err != nil {
    return err
}
cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
if err != nil {
    return err
}
// The managed cluster's name.
cfg.Host = clusterName
// Override the default tcp dialer so every request goes through the tunnel.
cfg.Dial = tunnel.DialContext
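With the dialer overridden, the config behaves like any other client-go rest.Config. The following is a rough, illustrative sketch (not from the original example) of listing nodes on the managed cluster through the tunnel; it reuses the cfg built above and assumes the extra imports noted in the comments:
// Additional imports assumed for this sketch:
//   "fmt"
//   metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
//   "k8s.io/client-go/kubernetes"

// Build a standard client-go clientset on top of the tunneled config.
clientset, err := kubernetes.NewForConfig(cfg)
if err != nil {
    return err
}
// This request travels hub -> proxy server -> proxy agent -> managed cluster apiserver.
nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
if err != nil {
    return err
}
fmt.Printf("managed cluster %q has %d nodes\n", clusterName, len(nodes.Items))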
Here are the results of network bandwidth benchmarking via goben with and without Cluster-Proxy (i.e., apiserver-network-proxy). The results show that proxying through the tunnel incurs roughly a 50% bandwidth loss, so it's recommended to avoid transferring data-intensive traffic over the proxy.
| Bandwidth | Direct | Over Cluster-Proxy |
| --- | --- | --- |
| Read | 902 Mbps | 461 Mbps |
| Write | 889 Mbps | 428 Mbps |