diff --git a/README.md b/README.md index ae5ff352..0d090ea3 100644 --- a/README.md +++ b/README.md @@ -5,7 +5,7 @@ The [Mountpoint for Amazon S3](https://github.com/awslabs/mountpoint-s3) Contain ## Features * **Static Provisioning** - Associate an externally-created S3 bucket with a [PersistentVolume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) (PV) for consumption within Kubernetes. -* **Mount Options** - Mount options can be specified in the PersistentVolume (PV) resource to define how the volume should be mounted. For Mountpoint for S3 specific options, take a look at the Mountpoint docs for [configuration](https://github.com/awslabs/mountpoint-s3/blob/main/doc/CONFIGURATION.md) and [semantics](https://github.com/awslabs/mountpoint-s3/blob/main/doc/SEMANTICS.md). +* **Mount Options** - Mount options can be specified in the PersistentVolume (PV) resource to define how the volume should be mounted. For Mountpoint for S3 specific options, take a look at the [Mountpoint docs for configuration](https://github.com/awslabs/mountpoint-s3/blob/main/doc/CONFIGURATION.md) and [semantics](https://github.com/awslabs/mountpoint-s3/blob/main/doc/SEMANTICS.md). ## Container Images | Driver Version | [ECR Public](https://gallery.ecr.aws/mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver) Image | @@ -46,6 +46,6 @@ The following table provides the support status for various distros with regards ## Documentation * [Driver Installation](docs/install.md) -* [Kubernetes Static Provisionin Example](/examples/kubernetes/static_provisioning) +* [Kubernetes Static Provisioning Example](/examples/kubernetes/static_provisioning) * [Driver Uninstallation](docs/install.md#uninstalling-the-driver) * [Development and Contributing](CONTRIBUTING.md) diff --git a/docs/cluster_setup.md b/docs/cluster_setup.md deleted file mode 100644 index 9723f9e8..00000000 --- a/docs/cluster_setup.md +++ /dev/null @@ -1,154 +0,0 @@ -# Cluster setup - -#### Set cluster-name and a region: -``` -export CLUSTER_NAME=s3-csi-cluster-2 -export REGION=eu-west-1 -``` - -#### Create cluster - -``` -eksctl create cluster \ - --name $CLUSTER_NAME \ - --region $REGION \ - --with-oidc \ - --ssh-access \ - --ssh-public-key my-key -``` - -#### Setup kubectl context - -> Ensure that you are using aws cli v2 before executing - -``` -aws eks update-kubeconfig --region $REGION --name $CLUSTER_NAME -``` - -# Configure access to S3 - -## From Amazon EKS - -EKS allows to use kubernetes service accounts to authenticate requests to S3, read more [here](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html). To set this up follow the steps: - -#### Create a Kubernetes service account for the driver and attach the AmazonS3FullAccess AWS-managed policy to the service account: -> Notice that the same service account name `s3-csi-driver-sa` must be specified when creating a drivers pod (already in pod spec `deploy/kubernetes/base/node-daemonset.yaml`) - -``` -eksctl create iamserviceaccount \ - --name s3-csi-driver-sa \ - --namespace kube-system \ - --cluster $CLUSTER_NAME \ - --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess \ - --approve \ - --role-name AmazonS3CSIDriverFullAccess \ - --region $REGION -``` -#### [Optional] validate account was succesfully created -``` -kubectl describe sa s3-csi-driver-sa --namespace kube-system -``` - -For more validation steps read more [here](https://docs.aws.amazon.com/eks/latest/userguide/associate-service-account-role.html). 
- -## From on-premises k8s cluster - -For development purposes [long-term access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) may be used. Those may be delivered as a k8s secret through kustomize: put your access keys in `deploy/kubernetes/overlays/dev/credentials` file and use `kubectl apply -k deploy/kubernetes/overlays/dev` to deploy the driver. - -Usage of long-term credentials for production accounts/workloads is discouraged in favour of temporary credentials obtained through X.509 authentication scheme, read more [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_non-aws.html). - -## Deploy the Driver -### Kustomize -#### Stable -``` -kubectl apply -k deploy/kubernetes/overlays/stable -``` -#### FOR DEVELOPERS ONLY [REMOVE BEFORE RELEASING] -Deploy using a registry in ECR (if you don't have one create a registry with default settings and name it `s3-csi-driver`) - -Change the registry destination: - - in the `Makefile` where it sets `REGISTRY?=` - - and the `/overlays/dev/kustomization.yaml` where the `newName` is set for the `image` - -Update the iam role in the node-serviceaccount.yaml - -Take the arn (should look something like `arn:aws:iam:::role/AmazonS3CSIDriverFullAccess`) of the iam role that was created above using the `eksctl create iamserviceaccount` command and set it in the `node-serviceaccount.yaml` file at the end for `eks.amazonaws.com/role-arn`. - -Build your image -``` -touch deploy/kubernetes/overlays/dev/credentials -make build_image TAG=latest PLATFORM=linux/amd64 -make login_registry -make push_image TAG=latest -``` -- this will use `:latest` tag which is pulled on every container recreation -- this will provide aws credentials specified in `deploy/kubernetes/overlays/dev/credentials` (file should exists, even if empty, created in first step of building the image) to the driver -``` -kubectl apply -k deploy/kubernetes/overlays/dev -``` -To redeploy driver with an updated image: -``` -kubectl rollout restart daemonset s3-csi-node -n kube-system -``` - -### Helm -[IN DEVELOPMENT MODE] Change the image reference that you're deploying the driver image to to your personal private ECR repo in the values.yaml file. -Install the aws-mountpoint-s3-csi-driver -``` -helm upgrade --install aws-mountpoint-s3-csi-driver --namespace kube-system ./charts/aws-mountpoint-s3-csi-driver --values ./charts/aws-mountpoint-s3-csi-driver/values.yaml -``` - -### Verifying installation - -Verify new version was deployed: -``` -$ kubectl get pods -A -NAMESPACE NAME READY STATUS RESTARTS AGE -<...> -kube-system s3-csi-node-94mdh 3/3 Running 0 57s -kube-system s3-csi-node-vtgnq 3/3 Running 0 55s - -$ kubectl logs -f s3-csi-node-94mdh -n kube-system -<...> -I0922 12:11:20.465762 1 driver.go:51] Driver version: 0.1.0, Git commit: b36c8a52b999a48ca8b88f985ed862d54585f0dd, build date: 2023-09-22T11:58:15Z -<...> -``` - -To deploy the static provisioning example run: -``` -kubectl apply -f examples/kubernetes/static_provisioning/static_provisioning.yaml -``` - -To access the fs in the pod, run -``` -kubectl exec --stdin --tty fc-app --container app -- /bin/bash -``` - -## Cleanup -### Kustomize -Delete the pod -``` -kubectl delete -f examples/kubernetes/static_provisioning/static_provisioning.yaml -``` - -Note: If you use `kubectl delete -k deploy/kubernetes/overlays/dev` to delete the driver itself, it will also delete the service account. 
You can change the `node-serviceaccount.yaml` file to this to prevent having to re-connect it when deploying the driver next
-```
----
-
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: s3-csi-driver-sa
-  labels:
-    app.kubernetes.io/name: aws-mountpoint-s3-csi-driver
-    app.kubernetes.io/managed-by: eksctl
-  annotations:
-    eks.amazonaws.com/role-arn: arn:aws:iam::151381207180:role/AmazonS3CSIDriverFullAccess # CHANGE THIS ARN
-```
-
-### Helm
-Uninstall the driver
-```
-helm uninstall aws-mountpoint-s3-csi-driver --namespace kube-system
-```
-Note: This will not delete the service account.
\ No newline at end of file
diff --git a/docs/install.md b/docs/install.md
index 5abe1671..a371d787 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -4,8 +4,6 @@
 
 * Kubernetes Version >= 1.23
 
-* If you are using a self managed cluster, ensure the flag `--allow-privileged=true` for `kube-apiserver`.
-
 ## Installation
 
 ### Cluster setup (optional)
@@ -36,13 +34,44 @@ eksctl create cluster \
 aws eks update-kubeconfig --region $REGION --name $CLUSTER_NAME
 ```
 
-### Set up driver permissions
-The driver requires IAM permissions to talk to Amazon S3 to manage the volume on user's behalf. AWS maintains a managed policy, available at ARN `arn:aws:iam::aws:policy/AmazonS3FullAccess`.
+### Configure access to S3
+
+The driver requires IAM permissions to talk to Amazon S3 to manage the volume on the user's behalf. We recommend using [Mountpoint's IAM permission policy](https://github.com/awslabs/mountpoint-s3/blob/main/doc/CONFIGURATION.md#iam-permissions). Alternatively, you can use the AWS managed policy AmazonS3FullAccess, available at ARN `arn:aws:iam::aws:policy/AmazonS3FullAccess`, but this managed policy grants more permissions than the driver needs. For more details on creating a policy and an IAM role, review ["Creating an IAM policy"](https://docs.aws.amazon.com/eks/latest/userguide/s3-csi.html#s3-create-iam-policy) and ["Creating an IAM role"](https://docs.aws.amazon.com/eks/latest/userguide/s3-csi.html#s3-create-iam-role) from the EKS User Guide.
+
+In the setup instructions below, the policy ARN is referred to as `$ROLE_ARN` and the IAM role name as `$ROLE_NAME`.
+
+#### From Amazon EKS
+
+EKS allows you to use Kubernetes service accounts to authenticate requests to S3; read more [here](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html). To set this up, follow these steps:
+
+##### Create a Kubernetes service account for the driver and attach the policy to the service account:
+> Note that the same service account name `s3-csi-driver-sa` must be specified when creating the driver's pod (already set in the pod spec `deploy/kubernetes/base/node-daemonset.yaml`)
+
+```
+eksctl create iamserviceaccount \
+    --name s3-csi-driver-sa \
+    --namespace kube-system \
+    --cluster $CLUSTER_NAME \
+    --attach-policy-arn $ROLE_ARN \
+    --approve \
+    --role-name $ROLE_NAME \
+    --region $REGION
+```
+##### [Optional] Validate that the service account was successfully created
+```
+kubectl describe sa s3-csi-driver-sa --namespace kube-system
+```
+
+For more validation steps, see [here](https://docs.aws.amazon.com/eks/latest/userguide/associate-service-account-role.html).
+
+#### From on-premises k8s cluster
+
+For development purposes, [long-term access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) may be used. These may be delivered as a k8s secret through kustomize: put your access keys in the `deploy/kubernetes/overlays/dev/credentials` file and use `kubectl apply -k deploy/kubernetes/overlays/dev` to deploy the driver.
 
-For more information, review ["Creating the Amazon Mountpoint for S3 CSI driver IAM role for service accounts" from the EKS User Guide.](TODO: add AWS docs link)
+Using long-term credentials for production accounts/workloads is discouraged in favour of temporary credentials obtained through the X.509 authentication scheme; read more [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_non-aws.html).
 
 ### Deploy driver
-You may deploy the Mountpoint for S3 CSI driver via Kustomize, Helm, or as an [Amazon EKS managed add-on].
+You may deploy the Mountpoint for S3 CSI driver via Kustomize, Helm, or as an [Amazon EKS managed add-on](https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html#workloads-add-ons-available-eks).
 
 #### Kustomize
 ```sh