diff --git a/api/env/ais.go b/api/env/ais.go
index 0a7e59cefbe..7430a912299 100644
--- a/api/env/ais.go
+++ b/api/env/ais.go
@@ -52,7 +52,7 @@ var (
 	// via ais-k8s repo
 	// see also:
-	// * https://github.com/NVIDIA/ais-k8s/blob/master/operator/pkg/resources/cmn/env.go
+	// * https://github.com/NVIDIA/ais-k8s/blob/main/operator/pkg/resources/cmn/env.go
 	// * docs/environment-vars.md
 	K8sPod:  "MY_POD",
 	K8sNode: "MY_NODE",
diff --git a/deploy/dev/terraform/README.md b/deploy/dev/terraform/README.md
deleted file mode 100644
index 3691e57f32f..00000000000
--- a/deploy/dev/terraform/README.md
+++ /dev/null
@@ -1,101 +0,0 @@
-# Terraform - AIS GCP Playground
-
-AIS can be run in bare VM or bare metal cluster. Here you can find simple `terraform` configuration which allows you to spin-up an Ubuntu VM on GCP with AIStore deployed.
-
-## Prerequisites
-1. [Terraform (>= 0.12)](https://learn.hashicorp.com/tutorials/terraform/install-cli)
-2. [GCP Project & Credentials](https://console.cloud.google.com/home/dashboard)
-3. [Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html)
-
-## Deployment
-
-> **NOTE**: All commands should be executed from `deploy/dev/terraform` directory
-
-### Setting variables (terraform.tfvars)
-1. Get the GCP `json` credentials [service account key page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).
-2. Update the `creds_file` variable to point to the downloaded `json` credentials file.
-3. Set `project_id` to your GCP project ID.
-4. Update the `ssh_private_file` and `ssh_public_file` to point to your `private` (e.g `~/.ssh/id_rsa`) and `public` (e.g `~/.ssh/id_rsa.pub`) keys. These are required to SSH into the deployed VM. If you wish to create a new ssh key pairs refer [this](https://www.ssh.com/ssh/keygen/)
-5. Optionally, set the following variables:
-    - `region` (default: `us-central1`) : GCP deployment region
-    - `zone` (default: `us-central1-c`) : GCP deployment zone
-    - `ansible_file` (default: `ansible/setupnodes.yml`) : ansible playbook for setting up AIStore on VM
-
-#### Sample Configuration
-Below is a sample variables file (terraform.tfvars)
-```
-creds_file = "/home/ubuntu/creds/aistore-gcp.json"
-project_id = "aistore-29227"
-ssh_private_file = "~/.ssh/gcp"
-ssh_public_file = "~/.ssh/gcp.pub"
-```
-
-### Useful commands
-
-After updating the variables execute the below commands:
-
-#### Initialize the terraform workspace
-(to download the required provider plugins)
-```console script
-$ terraform init
-Initializing the backend...
-
-Initializing provider plugins...
-- Using previously-installed hashicorp/google v3.5.0
-- Using previously-installed hashicorp/null v2.1.2
-
-Terraform has been successfully initialized!
-$
-```
-
-#### Provisioning VM and deploying AIStore
-(this might take a few minutes)
-```console
-$ terraform apply
-google_compute_network.vpc_network: Creating...
-google_compute_network.vpc_network: Still creating... [10s elapsed]
-google_compute_network.vpc_network: Still creating... [20s elapsed]
-google_compute_network.vpc_network: Still creating... [30s elapsed]
-google_compute_network.vpc_network: Still creating... [40s elapsed]
-google_compute_network.vpc_network: Creation complete after 47s [id=projects/aistore-291017/global/networks/terraform-network]
-google_compute_firewall.allow-ssh: Creating...
-google_compute_instance.vm_instance: Creating...
-google_compute_firewall.allow-ssh: Creation complete after 7s [id=projects/aistore-291017/global/firewalls/fw-allow-ssh]
-google_compute_instance.vm_instance: Provisioning with 'remote-exec'...
-.... TRUCATED ....
-google_compute_instance.vm_instance (local-exec): Executing: ["/bin/sh" "-c" "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u ubuntu -i ',' --private-key /root/.ssh/gcp ansible/setupnodes.yml"]
-
-google_compute_instance.vm_instance (local-exec): PLAY [all] *********************************************************************
-
-google_compute_instance.vm_instance (local-exec): TASK [copy] ********************************************************************
-google_compute_instance.vm_instance (local-exec): changed: [] => (item=setupnodes.sh)
-
-google_compute_instance.vm_instance (local-exec): TASK [Execute the command in remote shell; stdout goes to the specified file on the remote.] ***
-
-google_compute_instance.vm_instance (local-exec): PLAY RECAP *********************************************************************
-google_compute_instance.vm_instance (local-exec): : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
-
-google_compute_instance.vm_instance: Creation complete after 6m22s [id=projects/aistore-291017/zones/us-central1-c/instances/terraform-instance]
-
-Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
-
-Outputs:
-
-external_ip =
-```
-
-#### Getting external IP address
-```console
-$ terraform output
-external_ip =
-```
-
-`ssh` into the VM to check the installation
-```console
-$ ssh -i  ubuntu@
-```
-
-#### Destroying the VM
-```console
-$ terraform destroy
-```
\ No newline at end of file
diff --git a/deploy/dev/terraform/main.tf b/deploy/dev/terraform/main.tf
deleted file mode 100644
index 398afbfe485..00000000000
--- a/deploy/dev/terraform/main.tf
+++ /dev/null
@@ -1,108 +0,0 @@
-terraform {
-  required_providers {
-    google = {
-      source = "hashicorp/google"
-    }
-  }
-}
-
-variable "project_id" {
-  type        = string
-  description = "project id"
-}
-
-variable "region" {
-  type        = string
-  default     = "us-central1"
-  description = "region"
-}
-
-variable "creds_file" {
-  type        = string
-  description = "credentials json file"
-}
-
-variable "ssh_public_file" {
-  type        = string
-  description = "path to public key file"
-}
-
-variable "ssh_private_file" {
-  type        = string
-  description = "path to private key file"
-}
-
-variable "zone" {
-  type        = string
-  default     = "us-central1-c"
-  description = "zone"
-}
-
-variable "ansible_file" {
-  type        = string
-  default     = "../../prod/ansible/setupnodes.yml"
-  description = "path to ansible config (.yml) file"
-}
-
-provider "google" {
-  version     = "3.5.0"
-  credentials = file(var.creds_file)
-  project     = var.project_id
-  region      = var.region
-  zone        = var.zone
-}
-
-resource "google_compute_instance" "vm_instance" {
-  name         = "${var.project_id}-instance"
-  machine_type = "g1-small"
-  scheduling {
-    preemptible       = true
-    automatic_restart = false
-  }
-
-  boot_disk {
-    initialize_params {
-      image = "ubuntu-1804-bionic-v20200317"
-    }
-  }
-  tags = ["ssh"]
-
-  metadata = {
-    ssh-keys = "ubuntu:${file(var.ssh_public_file)}"
-  }
-
-  network_interface {
-    network = google_compute_network.vpc_network.name
-    access_config {
-    }
-  }
-}
-
-# Ansible AIStore deployment
-resource "null_resource" "ansible" {
-  depends_on = [google_compute_instance.vm_instance]
-
-  connection {
-    user        = "ubuntu"
-    private_key = file(var.ssh_private_file)
-    host        = google_compute_instance.vm_instance.network_interface.0.access_config.0.nat_ip
-  }
-
-  # ensure instance is ready
-  provisioner "remote-exec" {
-    script = "scripts/wait_for_instance.sh"
-  }
-
-  provisioner "local-exec" {
-    command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u ubuntu -i '${google_compute_instance.vm_instance.network_interface.0.access_config.0.nat_ip},' --private-key ${var.ssh_private_file} ${var.ansible_file}"
-  }
-
-  provisioner "remote-exec" {
-    script = "scripts/deploy_ais.sh"
-  }
-}
-
-output "external_ip" {
-  value       = google_compute_instance.vm_instance.network_interface.0.access_config.0.nat_ip
-  description = "external ip address of the instance"
-}
\ No newline at end of file
diff --git a/deploy/dev/terraform/network.tf b/deploy/dev/terraform/network.tf
deleted file mode 100644
index a584ca0a9c1..00000000000
--- a/deploy/dev/terraform/network.tf
+++ /dev/null
@@ -1,15 +0,0 @@
-# VPC
-resource "google_compute_network" "vpc_network" {
-  name = "${var.project_id}-network"
-}
-
-resource "google_compute_firewall" "allow-ssh" {
-  name    = "${var.project_id}-allow-ssh"
-  network = google_compute_network.vpc_network.name
-  allow {
-    protocol = "tcp"
-    ports    = ["22"]
-  }
-  target_tags   = ["ssh"]
-  source_ranges = ["0.0.0.0/0"]
-}
diff --git a/deploy/dev/terraform/scripts/deploy_ais.sh b/deploy/dev/terraform/scripts/deploy_ais.sh
deleted file mode 100644
index 3eae6a71b61..00000000000
--- a/deploy/dev/terraform/scripts/deploy_ais.sh
+++ /dev/null
@@ -1,8 +0,0 @@
-#!/bin/bash
-
-. /etc/profile.d/aispaths.sh
-
-cd "${AISTORE_SRC}"
-echo "Deploying AIStore: ${AIS_BACKEND_PROVIDERS}"
-make kill deploy <<< $'1\n1\n1\nn\nn\nn\nn\n0\n'
-make cli
diff --git a/deploy/dev/terraform/scripts/wait_for_instance.sh b/deploy/dev/terraform/scripts/wait_for_instance.sh
deleted file mode 100644
index 6a6a9852859..00000000000
--- a/deploy/dev/terraform/scripts/wait_for_instance.sh
+++ /dev/null
@@ -1,6 +0,0 @@
-#!/bin/bash
-
-while [ ! -f /var/lib/cloud/instance/boot-finished ]; do
-  echo -e "Waiting for cloud-init..."
-  sleep 1
-done
diff --git a/deploy/dev/terraform/terraform.tfvars b/deploy/dev/terraform/terraform.tfvars
deleted file mode 100644
index e69de29bb2d..00000000000
diff --git a/deploy/dev/terraform/version.tf b/deploy/dev/terraform/version.tf
deleted file mode 100644
index d9b6f790b92..00000000000
--- a/deploy/dev/terraform/version.tf
+++ /dev/null
@@ -1,3 +0,0 @@
-terraform {
-  required_version = ">= 0.12"
-}
diff --git a/docs/_posts/2023-11-27-aistore-fast-tier.md b/docs/_posts/2023-11-27-aistore-fast-tier.md
index 8b50c45a161..49340ae1f2c 100644
--- a/docs/_posts/2023-11-27-aistore-fast-tier.md
+++ b/docs/_posts/2023-11-27-aistore-fast-tier.md
@@ -18,7 +18,7 @@ AIS features linear scalability with each added storage node - in fact, with eac

 ## Background and Requirements

-AIStore's essential prerequisite is a Linux machine with disks. While not a requirement, a managed Kubernetes (K8s) environment is highly recommended to streamline [deployment](https://github.com/NVIDIA/ais-K8s/blob/master/docs/README.md) and management. Direct deployment on bare-metal instances is possible, but managed K8s is advised for efficiency and ease of use given the complexities associated with K8s management.
+AIStore's essential prerequisite is a Linux machine with disks. While not a requirement, a managed Kubernetes (K8s) environment is highly recommended to streamline [deployment](https://github.com/NVIDIA/ais-K8s/blob/main/docs/README.md) and management. Direct deployment on bare-metal instances is possible, but managed K8s is advised for efficiency and ease of use given the complexities associated with K8s management.

 In an AIS cluster, proxies (gateways) and targets (storage nodes) efficiently manage data requests from clients. When a client issues a GET request, a proxy, chosen randomly or specifically for load balancing, directs the request to an appropriate target based on the current cluster map. If the target has the data, it's directly sent to the client — a 'warm GET'. For unavailable data, AIS executes a 'cold GET', involving a series of steps: remote GET through the vendor's SDK, local storage of the object, validation of checksums for end-to-end protection (if enabled), storage of metadata (both local and remote, such as ETag, version, checksums, custom), making the object visible (only at this stage), and finally, creating additional copies or slices as per bucket properties, like n-way replication or erasure coding.
@@ -184,7 +184,7 @@ In summary, AIS demonstrated linear scalability in a particular setup, effective
 1. GitHub:
    - [AIStore](https://github.com/NVIDIA/aistore)
    - [AIS-K8S](https://github.com/NVIDIA/ais-K8s)
-   - [Deploying AIStore on K8s](https://github.com/NVIDIA/ais-K8s/blob/master/docs/README.md)
+   - [Deploying AIStore on K8s](https://github.com/NVIDIA/ais-K8s/blob/main/docs/README.md)
 2. Documentation, blogs, videos:
    - https://aiatscale.org
    - https://github.com/NVIDIA/aistore/tree/main/docs
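As an aside for readers of the blog post above: the warm/cold GET behavior it describes is directly observable from the Python SDK. Below is a minimal sketch; the endpoint, bucket, and object names are hypothetical, and `provider="aws"` assumes the cluster has an S3 backend attached.

```python
from aistore.sdk import Client

client = Client("http://ais-proxy:51080")  # hypothetical endpoint

# Remote (cloud) bucket: the first access to an object not yet present in the
# cluster triggers a "cold GET" - AIS fetches it via the vendor's SDK, stores
# it locally, validates checksums (if enabled), and saves remote metadata.
obj = client.bucket("my-dataset", provider="aws").object("train/shard-000001.tar")
data = obj.get().read_all()

# The object is now resident in the cluster, so this second read is a
# "warm GET" served straight from the target's drives.
data = obj.get().read_all()
```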
diff --git a/docs/_posts/2024-02-16-multihome-bench.md b/docs/_posts/2024-02-16-multihome-bench.md
index 6daffd8ff41..b2c5e169f8c 100644
--- a/docs/_posts/2024-02-16-multihome-bench.md
+++ b/docs/_posts/2024-02-16-multihome-bench.md
@@ -41,7 +41,7 @@ The test setup used for benchmarking our AIS cluster with multihoming is shown b

 ## Deploying with Multihoming

-For a full walkthrough of a multi-homed AIS deployment, check [the documentation in the AIS K8s repository](https://github.com/NVIDIA/ais-k8s/blob/master/playbooks/ais-deployment/docs/deploy_with_multihome.md).
+For a full walkthrough of a multi-homed AIS deployment, check [the documentation in the AIS K8s repository](https://github.com/NVIDIA/ais-k8s/blob/main/playbooks/ais-deployment/docs/deploy_with_multihome.md).

 Before taking advantage of AIS multihoming, the systems themselves must be configured with multiple IPs on multiple interfaces. In our case, this involved adding a second VNIC in OCI and configuring the OS routing rules using their provided scripts, following this OCI [guide](https://docs.oracle.com/iaas/compute-cloud-at-customer/topics/network/creating-and-attaching-a-secondary-vnic.htm).

@@ -60,7 +60,7 @@ By default, K8s pods do not allow multiple IPs. To add this, we'll need to use [

 Once the additional hosts have been added to the hosts file and the network attachment definition has been created, all that is needed is a standard AIS deployment. The AIS [K8s operator](https://github.com/NVIDIA/ais-k8s/tree/master/operator) will take care of connecting each AIS pod to the specified additional hosts through Multus.

-Below is a simple network diagram of how the AIS pods work with Multus in our cluster. We are using a macvlan bridge to connect the pod to the second interface. This is configured in the network attachment definition created by our `create_network_definition` playbook. AIS can also be configured to use other Multus network attachment definitions. See our [multihome deployment doc](https://github.com/NVIDIA/ais-k8s/blob/master/playbooks/ais-deployment/docs/deploy_with_multihome.md) and the Multus [usage guide](https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md) for details on using this playbook and configuring network attachment definitions.
+Below is a simple network diagram of how the AIS pods work with Multus in our cluster. We are using a macvlan bridge to connect the pod to the second interface. This is configured in the network attachment definition created by our `create_network_definition` playbook. AIS can also be configured to use other Multus network attachment definitions. See our [multihome deployment doc](https://github.com/NVIDIA/ais-k8s/blob/main/playbooks/ais-deployment/docs/deploy_with_multihome.md) and the Multus [usage guide](https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md) for details on using this playbook and configuring network attachment definitions.

 ![Multus Network Diagram](/assets/multihome_bench/multus_diagram.png)

@@ -175,4 +175,4 @@ Even with a very large dataset under heavy load, the cluster maintained low late
 - [AIStore](https://github.com/NVIDIA/aistore)
 - [AIS-K8S](https://github.com/NVIDIA/ais-K8s)
 - [AISLoader](https://github.com/NVIDIA/aistore/blob/main/docs/aisloader.md)
-- [AIS as a Fast-Tier](https://aiatscale.org/blog/2023/11/27/aistore-fast-tier)
\ No newline at end of file
+- [AIS as a Fast-Tier](https://aiatscale.org/blog/2023/11/27/aistore-fast-tier)
diff --git a/docs/configuration.md b/docs/configuration.md
index 4cd3f861e60..514fe6fbe61 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -128,7 +128,7 @@ The example above may serve as a simple illustration whereby `t[fbarswQP]` becom

 ## References

-* For Kubernetes deployment, please refer to a separate [ais-k8s](https://github.com/NVIDIA/ais-k8s) repository that also contains [AIS/K8s Operator](https://github.com/NVIDIA/ais-k8s/blob/master/operator/README.md) and its configuration-defining [resources](https://github.com/NVIDIA/ais-k8s/blob/master/operator/pkg/resources/cmn/config.go).
+* For Kubernetes deployment, please refer to a separate [ais-k8s](https://github.com/NVIDIA/ais-k8s) repository that also contains [AIS/K8s Operator](https://github.com/NVIDIA/ais-k8s/blob/main/operator/README.md) and its configuration-defining [resources](https://github.com/NVIDIA/ais-k8s/blob/main/operator/pkg/resources/cmn/config.go).
 * To configure an optional AIStore authentication server, run `$ AIS_AUTHN_ENABLED=true make deploy`. For information on AuthN server, please see [AuthN documentation](/docs/authn.md).
 * AIS [CLI](/docs/cli.md) is an easy-to-use convenient command-line management/monitoring tool. To get started with CLI, run `make cli` (that generates `ais` executable) and follow the prompts.
diff --git a/docs/environment-vars.md b/docs/environment-vars.md
index cb19018f921..6412f5dfec1 100644
--- a/docs/environment-vars.md
+++ b/docs/environment-vars.md
@@ -132,7 +132,7 @@ t[fXbarEnn] 3.08% 367.66GiB 51% 8.414TiB [0.9 1.1
 ```

 See related:
-* [AIS K8s Operator: environment variables](https://github.com/NVIDIA/ais-k8s/blob/master/operator/pkg/resources/cmn/env.go)
+* [AIS K8s Operator: environment variables](https://github.com/NVIDIA/ais-k8s/blob/main/operator/pkg/resources/cmn/env.go)

 ## AWS S3
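A quick note on the operator-injected variables referenced above (`MY_POD` and `MY_NODE`, per the `api/env/ais.go` hunk at the top of this diff): inside a running aisnode pod they are plain environment variables. The following sketch is hypothetical, not part of the docs, and simply verifies their presence.

```python
import os

# Print the pod/node identifiers that the AIS K8s operator injects
# into each aisnode pod.
for name in ("MY_POD", "MY_NODE"):
    print(f"{name}={os.environ.get(name, '<unset>')}")
```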
diff --git a/docs/getting_started.md b/docs/getting_started.md
index f9b8c72b6b4..20d7a43f14e 100644
--- a/docs/getting_started.md
+++ b/docs/getting_started.md
@@ -412,13 +412,8 @@ In the software, _type of the deployment_ is also present in some minimal way. I

 ### Kubernetes deployments

-For any Kubernetes deployments (including, of course, production deployments) please use a separate and dedicated [AIS-K8s GitHub](https://github.com/NVIDIA/ais-k8s/blob/master/docs/README.md) repository. The repo contains detailed [Ansible playbooks](https://github.com/NVIDIA/ais-k8s/tree/master/playbooks) that cover a variety of use cases and configurations.
-
-In particular, [AIS-K8s GitHub repository](https://github.com/NVIDIA/ais-k8s/blob/master/terraform/README.md) provides a single-line command to deploy Kubernetes cluster and the underlying infrastructure with the AIStore cluster running inside (see below). The only requirement is having a few dependencies preinstalled (in particular, `helm`) and a Cloud account.
-
-The following GIF illustrates the steps to deploy AIS on the Google Cloud Platform (GCP):
-
-![Kubernetes cloud deployment](images/ais-k8s-deploy.gif)
+For any Kubernetes deployments (including, of course, production deployments) please use a separate and dedicated [AIS-K8s GitHub](https://github.com/NVIDIA/ais-k8s/blob/main/docs/README.md) repository.
+The repo contains detailed [Ansible playbooks](https://github.com/NVIDIA/ais-k8s/tree/main/playbooks) that cover a variety of use cases and configurations.

 Finally, the [repository](https://github.com/NVIDIA/ais-k8s) hosts the [Kubernetes Operator](https://github.com/NVIDIA/ais-k8s/tree/master/operator) project that will eventually replace Helm charts and will become the main deployment, lifecycle, and operation management "vehicle" for AIStore.
diff --git a/docs/images/ais-k8s-deploy.gif b/docs/images/ais-k8s-deploy.gif
deleted file mode 100644
index eea9a065dd6..00000000000
Binary files a/docs/images/ais-k8s-deploy.gif and /dev/null differ
diff --git a/docs/performance.md b/docs/performance.md
index 4fa42592be8..7ea588478f8 100644
--- a/docs/performance.md
+++ b/docs/performance.md
@@ -48,11 +48,11 @@ The question, then, is how to get the maximum out of the underlying hardware? Ho

 Specifically, `sysctl` selected system variables, such as `net.core.wmem_max`, `net.core.rmem_max`, `vm.swappiness`, and more - here's the approximate list:

-* [https://github.com/NVIDIA/ais-k8s/blob/master/playbooks/host-config/vars/host_config_sysctl.yml](sysctl)
+* [sysctl](https://github.com/NVIDIA/ais-k8s/blob/main/playbooks/host-config/vars/host_config_sysctl.yml)

-The document is part of a separate [repository](https://github.com/NVIDIA/ais-k8s) that serves the (specific) purposes of deploying AIS on **bare-metal Kubernetes**. The repo includes a number of [playbooks](https://github.com/NVIDIA/ais-k8s/blob/master/playbooks/README.md) to assist in a full deployment of AIStore.
+The document is part of a separate [repository](https://github.com/NVIDIA/ais-k8s) that serves the (specific) purposes of deploying AIS on **bare-metal Kubernetes**. The repo includes a number of [playbooks](https://github.com/NVIDIA/ais-k8s/blob/main/playbooks/README.md) to assist in a full deployment of AIStore.

-In particular, there is a section of pre-deployment playbooks to [prepare AIS nodes for deployment on bare-metal Kubernetes](https://github.com/NVIDIA/ais-k8s/blob/master/playbooks/host-config/README.md)
+In particular, there is a section of pre-deployment playbooks to [prepare AIS nodes for deployment on bare-metal Kubernetes](https://github.com/NVIDIA/ais-k8s/blob/main/playbooks/host-config/README.md)

 General references:

@@ -218,7 +218,7 @@ More: [Tune hard disk with `hdparm`](http://www.linux-magazine.com/Online/Featur

 Another way to increase storage performance is to benchmark different filesystems: `ext`, `xfs`, `openzfs`. Tuning the corresponding IO scheduler can prove to be important:

-* [ais_enable_multiqueue](https://github.com/NVIDIA/ais-k8s/blob/master/playbooks/host-config/docs/ais_enable_multiqueue.md)
+* [ais_enable_multiqueue](https://github.com/NVIDIA/ais-k8s/blob/main/playbooks/host-config/docs/ais_enable_multiqueue.md)

 Other related references:
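Since `docs/performance.md` above names specific `sysctl` variables, here is a small verification sketch (not part of the ais-k8s playbooks): it reads the current values from `/proc/sys` so a host's settings can be compared against the recommendations in `host_config_sysctl.yml`.

```python
from pathlib import Path

# On Linux, sysctl knob "a.b.c" is exposed at /proc/sys/a/b/c.
for knob in ("net.core.wmem_max", "net.core.rmem_max", "vm.swappiness"):
    value = (Path("/proc/sys") / knob.replace(".", "/")).read_text().strip()
    print(f"{knob} = {value}")
```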
diff --git a/docs/sysfiles.md b/docs/sysfiles.md
index 75415414527..f42d24df013 100644
--- a/docs/sysfiles.md
+++ b/docs/sysfiles.md
@@ -18,7 +18,7 @@ This section tries to enumerate the *system files* and briefly describe their re
 First, there's a *node configuration* usually derived from a single configuration template and populated at deployment time.

 * Local Playground: a single [configuration template](/deploy/dev/local/aisnode_config.sh) and [the script](/deploy/dev/local/deploy.sh) we use to populate it when we run the cluster locally on our development machines;
-* [Production K8s deployment](https://github.com/NVIDIA/ais-k8s/tree/master): a set of [ansible playbooks](https://github.com/NVIDIA/ais-k8s/blob/master/playbooks/README.md) to automate creation of a node's configuration files via a [custom K8s operator](https://github.com/NVIDIA/ais-k8s/blob/master/operator/README.md) and to deploy multiple nodes across a K8s cluster.
+* [Production K8s deployment](https://github.com/NVIDIA/ais-k8s/tree/main): a set of [ansible playbooks](https://github.com/NVIDIA/ais-k8s/blob/main/playbooks/README.md) to automate creation of a node's configuration files via a [custom K8s operator](https://github.com/NVIDIA/ais-k8s/blob/main/operator/README.md) and to deploy multiple nodes across a K8s cluster.

 The second category of *system files and directories* includes:
diff --git a/docs/tutorials/etl/compute_md5.md b/docs/tutorials/etl/compute_md5.md
index 661b4fc758c..5e7660514ab 100644
--- a/docs/tutorials/etl/compute_md5.md
+++ b/docs/tutorials/etl/compute_md5.md
@@ -18,9 +18,7 @@ Get ready!

 ## Prerequisites

-* AIStore cluster deployed on Kubernetes. We recommend following guide below.
-  * [Deploy AIStore on local Kuberenetes cluster](https://github.com/NVIDIA/ais-k8s/blob/master/operator/README.md)
-  * [Deploy AIStore on the cloud](https://github.com/NVIDIA/ais-k8s/blob/master/terraform/README.md)
+* AIStore cluster deployed on Kubernetes. We recommend the following guide: [Deploy AIStore on a local Kubernetes cluster](https://github.com/NVIDIA/ais-k8s/blob/main/operator/README.md)

 ## Prepare ETL
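For readers of the `compute_md5` tutorial touched above: the same MD5 transform can also be registered through the Python SDK's ETL support. Treat the sketch below as an outline only; the endpoint, bucket, and object names are hypothetical, and the exact ETL calls (`init_code`, `etl_name`) vary across SDK versions.

```python
import hashlib

from aistore.sdk import Client

client = Client("http://ais-proxy:51080")  # hypothetical endpoint

def transform(input_bytes: bytes) -> bytes:
    # Same computation as the tutorial's transformer: MD5 of the object body.
    return hashlib.md5(input_bytes).hexdigest().encode()

# Register the transform as an ETL on the cluster, then read an object
# through it; the response is the object's MD5 rather than its content.
etl = client.etl("compute-md5")
etl.init_code(transform=transform)

md5 = client.bucket("ais-bucket").object("object.txt").get(etl_name="compute-md5").read_all()
print(md5.decode())
```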
diff --git a/docs/tutorials/etl/etl_imagenet_pytorch.md b/docs/tutorials/etl/etl_imagenet_pytorch.md
index a56e7197637..c262beac9d1 100644
--- a/docs/tutorials/etl/etl_imagenet_pytorch.md
+++ b/docs/tutorials/etl/etl_imagenet_pytorch.md
@@ -27,9 +27,7 @@ Here is a general overview of these steps:

 ## Prerequisites

-* AIStore cluster deployed on Kubernetes. We recommend following guide below.
-  * [Deploy AIStore on local Kuberenetes cluster](https://github.com/NVIDIA/ais-k8s/blob/master/operator/README.md)
-  * [Deploy AIStore on the cloud](https://github.com/NVIDIA/ais-k8s/blob/master/terraform/README.md)
+* AIStore cluster deployed on Kubernetes. We recommend the following guide: [Deploy AIStore on a local Kubernetes cluster](https://github.com/NVIDIA/ais-k8s/blob/main/operator/README.md)

 ## Prepare dataset
diff --git a/docs/tutorials/etl/etl_webdataset.md b/docs/tutorials/etl/etl_webdataset.md
index 452ecb21da1..c00b6bce1fc 100644
--- a/docs/tutorials/etl/etl_webdataset.md
+++ b/docs/tutorials/etl/etl_webdataset.md
@@ -23,9 +23,7 @@ This tutorial consists of couple steps:

 ## Prerequisites

-* AIStore cluster deployed on Kubernetes. We recommend following guide below.
-  * [Deploy AIStore on local Kuberenetes cluster](https://github.com/NVIDIA/ais-k8s/blob/master/operator/README.md)
-  * [Deploy AIStore on the cloud](https://github.com/NVIDIA/ais-k8s/blob/master/terraform/README.md)
+* AIStore cluster deployed on Kubernetes. We recommend the following guide: [Deploy AIStore on a local Kubernetes cluster](https://github.com/NVIDIA/ais-k8s/blob/main/operator/README.md)

 ## Prepare dataset
diff --git a/docs/videos.md b/docs/videos.md
index eea31310622..83f9b4b784b 100644
--- a/docs/videos.md
+++ b/docs/videos.md
@@ -28,10 +28,6 @@ redirect_from:

 {% include youtubePlayer.html id="VPIhQm2sMD8" %}

-## Deployment: Google Cloud
-
-![ais-k8s-deploy](images/ais-k8s-deploy.gif)
-
 ## Minimal All-In-One Standalone Docker

 * [README](/deploy/prod/docker/single/README.md)
diff --git a/python/aistore/sdk/README.md b/python/aistore/sdk/README.md
index f9834d559a5..df90bdd8750 100644
--- a/python/aistore/sdk/README.md
+++ b/python/aistore/sdk/README.md
@@ -90,8 +90,8 @@ Please note that certain operations do **not** support external cloud storage bu

 The SDK supports HTTPS connectivity if the AIS cluster is configured to use HTTPS. To start using HTTPS:
-1. Set up HTTPS on your cluster: [Guide for K8s cluster](https://github.com/NVIDIA/ais-k8s/blob/master/playbooks/docs/ais_https_configuration.md)
-2. If using a self-signed certificate with your own CA, copy the CA certificate to your local machine. If using our built-in cert-manager config to generate your certificates, you can use [our playbook](https://github.com/NVIDIA/ais-k8s/blob/master/playbooks/docs/ais_generate_https_cert.md)
+1. Set up HTTPS on your cluster: [Guide for K8s cluster](https://github.com/NVIDIA/ais-k8s/blob/main/playbooks/docs/ais_https_configuration.md)
+2. If using a self-signed certificate with your own CA, copy the CA certificate to your local machine. If using our built-in cert-manager config to generate your certificates, you can use [our playbook](https://github.com/NVIDIA/ais-k8s/blob/main/playbooks/docs/ais_generate_https_cert.md)
 3. Options to configure the SDK for HTTPS connectivity:
    - Skip verification (for testing, insecure):
      - `client = Client(skip_verify=True)`
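To tie the SDK README's HTTPS options together, here is a short sketch. The endpoint and certificate path are hypothetical, and `ca_cert` corresponds to the self-signed CA option described in step 2 above.

```python
from aistore.sdk import Client

# Testing only: connect over HTTPS without verifying the server certificate.
insecure_client = Client("https://ais-proxy:51081", skip_verify=True)

# Preferred: verify the server against the CA certificate copied from the
# cluster (see step 2 in the README excerpt above).
client = Client("https://ais-proxy:51081", ca_cert="/path/to/ais-ca.crt")

# Sanity check: query basic cluster information over the TLS connection.
print(client.cluster().get_info())
```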