general: remove Terraform as deployment option
Signed-off-by: Janusz Marcinkiewicz <[email protected]>
VirrageS committed Jun 14, 2024
1 parent 68e98f4 commit 72fee04
Showing 21 changed files with 20 additions and 276 deletions.
2 changes: 1 addition & 1 deletion api/env/ais.go
@@ -52,7 +52,7 @@ var (

// via ais-k8s repo
// see also:
-// * https://github.com/NVIDIA/ais-k8s/blob/master/operator/pkg/resources/cmn/env.go
+// * https://github.com/NVIDIA/ais-k8s/blob/main/operator/pkg/resources/cmn/env.go
// * docs/environment-vars.md
K8sPod: "MY_POD",
K8sNode: "MY_NODE",
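
For illustration, here is a minimal Go sketch of how a process running inside the pod might pick up the two variables mapped above; the `envOr` helper and the fallback values are hypothetical, not part of the AIS API:

```go
package main

import (
	"fmt"
	"os"
)

// envOr returns the value of the named environment variable,
// or fallback when the variable is unset (hypothetical helper).
func envOr(name, fallback string) string {
	if v, ok := os.LookupEnv(name); ok {
		return v
	}
	return fallback
}

func main() {
	// MY_POD and MY_NODE are assumed to be injected via the pod spec
	// (per the ais-k8s operator reference above).
	pod := envOr("MY_POD", "unknown-pod")
	node := envOr("MY_NODE", "unknown-node")
	fmt.Printf("running as pod %q on node %q\n", pod, node)
}
```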
101 changes: 0 additions & 101 deletions deploy/dev/terraform/README.md

This file was deleted.

108 changes: 0 additions & 108 deletions deploy/dev/terraform/main.tf

This file was deleted.

15 changes: 0 additions & 15 deletions deploy/dev/terraform/network.tf

This file was deleted.

8 changes: 0 additions & 8 deletions deploy/dev/terraform/scripts/deploy_ais.sh

This file was deleted.

6 changes: 0 additions & 6 deletions deploy/dev/terraform/scripts/wait_for_instance.sh

This file was deleted.

Empty file.
3 changes: 0 additions & 3 deletions deploy/dev/terraform/version.tf

This file was deleted.

4 changes: 2 additions & 2 deletions docs/_posts/2023-11-27-aistore-fast-tier.md
@@ -18,7 +18,7 @@ AIS features linear scalability with each added storage node - in fact, with eac

## Background and Requirements

-AIStore's essential prerequisite is a Linux machine with disks. While not a requirement, a managed Kubernetes (K8s) environment is highly recommended to streamline [deployment](https://github.com/NVIDIA/ais-K8s/blob/master/docs/README.md) and management. Direct deployment on bare-metal instances is possible, but managed K8s is advised for efficiency and ease of use given the complexities associated with K8s management.
+AIStore's essential prerequisite is a Linux machine with disks. While not a requirement, a managed Kubernetes (K8s) environment is highly recommended to streamline [deployment](https://github.com/NVIDIA/ais-K8s/blob/main/docs/README.md) and management. Direct deployment on bare-metal instances is possible, but managed K8s is advised for efficiency and ease of use given the complexities associated with K8s management.

In an AIS cluster, proxies (gateways) and targets (storage nodes) efficiently manage data requests from clients. When a client issues a GET request, a proxy, chosen randomly or specifically for load balancing, directs the request to an appropriate target based on the current cluster map. If the target has the data, it's directly sent to the client — a 'warm GET'. For unavailable data, AIS executes a 'cold GET', involving a series of steps: remote GET through the vendor's SDK, local storage of the object, validation of checksums for end-to-end protection (if enabled), storage of metadata (both local and remote, such as ETag, version, checksums, custom), making the object visible (only at this stage), and finally, creating additional copies or slices as per bucket properties, like n-way replication or erasure coding.
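
The warm/cold distinction above reduces to a local-presence check with a guarded fallback to the remote backend. Below is a hedged Go sketch of that control flow; all types and names (`store`, `backend`, `verifyChecksum`) are hypothetical stand-ins, not AIStore's actual internals:

```go
package sketch

import "errors"

// object bundles data with the remote metadata the post mentions:
// ETag, version, checksums, custom attributes.
type object struct {
	name string
	data []byte
	meta map[string]string
}

// store is the target's local storage (hypothetical interface).
type store interface {
	lookup(name string) (*object, bool) // local presence check
	persist(o *object) error            // write data and metadata locally
}

// backend is the remote bucket, reached through the vendor's SDK (hypothetical).
type backend interface {
	getObject(name string) (*object, error)
}

// get serves a warm GET when the object is local; otherwise it performs
// the cold-GET sequence: remote fetch, checksum validation, local store,
// and only then visibility (replication/EC would follow per bucket props).
func get(s store, b backend, name string) (*object, error) {
	if o, ok := s.lookup(name); ok {
		return o, nil // warm GET
	}
	o, err := b.getObject(name) // cold GET: remote fetch
	if err != nil {
		return nil, err
	}
	if err := verifyChecksum(o); err != nil { // end-to-end protection, if enabled
		return nil, err
	}
	if err := s.persist(o); err != nil {
		return nil, err
	}
	return o, nil // object becomes visible only at this point
}

func verifyChecksum(o *object) error {
	if o == nil {
		return errors.New("nil object")
	}
	return nil // placeholder: real code would compare stored vs computed checksums
}
```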

@@ -184,7 +184,7 @@ In summary, AIS demonstrated linear scalability in a particular setup, effective
1. GitHub:
- [AIStore](https://github.com/NVIDIA/aistore)
- [AIS-K8S](https://github.com/NVIDIA/ais-K8s)
-- [Deploying AIStore on K8s](https://github.com/NVIDIA/ais-K8s/blob/master/docs/README.md)
+- [Deploying AIStore on K8s](https://github.com/NVIDIA/ais-K8s/blob/main/docs/README.md)
2. Documentation, blogs, videos:
- https://aiatscale.org
- https://github.com/NVIDIA/aistore/tree/main/docs
6 changes: 3 additions & 3 deletions docs/_posts/2024-02-16-multihome-bench.md
@@ -41,7 +41,7 @@ The test setup used for benchmarking our AIS cluster with multihoming is shown b

## Deploying with Multihoming

-For a full walkthrough of a multi-homed AIS deployment, check [the documentation in the AIS K8s repository](https://github.com/NVIDIA/ais-k8s/blob/master/playbooks/ais-deployment/docs/deploy_with_multihome.md).
+For a full walkthrough of a multi-homed AIS deployment, check [the documentation in the AIS K8s repository](https://github.com/NVIDIA/ais-k8s/blob/main/playbooks/ais-deployment/docs/deploy_with_multihome.md).

Before taking advantage of AIS multihoming, the systems themselves must be configured with multiple IPs on multiple interfaces. In our case, this involved adding a second VNIC in OCI and configuring the OS routing rules using their provided scripts, following this OCI [guide](https://docs.oracle.com/iaas/compute-cloud-at-customer/topics/network/creating-and-attaching-a-secondary-vnic.htm).

@@ -60,7 +60,7 @@ By default, K8s pods do not allow multiple IPs. To add this, we'll need to use [

Once the additional hosts have been added to the hosts file and the network attachment definition has been created, all that is needed is a standard AIS deployment. The AIS [K8s operator](https://github.com/NVIDIA/ais-k8s/tree/master/operator) will take care of connecting each AIS pod to the specified additional hosts through Multus.

-Below is a simple network diagram of how the AIS pods work with Multus in our cluster. We are using a macvlan bridge to connect the pod to the second interface. This is configured in the network attachment definition created by our `create_network_definition` playbook. AIS can also be configured to use other Multus network attachment definitions. See our [multihome deployment doc](https://github.com/NVIDIA/ais-k8s/blob/master/playbooks/ais-deployment/docs/deploy_with_multihome.md) and the Multus [usage guide](https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md) for details on using this playbook and configuring network attachment definitions.
+Below is a simple network diagram of how the AIS pods work with Multus in our cluster. We are using a macvlan bridge to connect the pod to the second interface. This is configured in the network attachment definition created by our `create_network_definition` playbook. AIS can also be configured to use other Multus network attachment definitions. See our [multihome deployment doc](https://github.com/NVIDIA/ais-k8s/blob/main/playbooks/ais-deployment/docs/deploy_with_multihome.md) and the Multus [usage guide](https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md) for details on using this playbook and configuring network attachment definitions.

![Multus Network Diagram](/assets/multihome_bench/multus_diagram.png)

@@ -175,4 +175,4 @@ Even with a very large dataset under heavy load, the cluster maintained low late
- [AIStore](https://github.com/NVIDIA/aistore)
- [AIS-K8S](https://github.com/NVIDIA/ais-K8s)
- [AISLoader](https://github.com/NVIDIA/aistore/blob/main/docs/aisloader.md)
-- [AIS as a Fast-Tier](https://aiatscale.org/blog/2023/11/27/aistore-fast-tier)
\ No newline at end of file
+- [AIS as a Fast-Tier](https://aiatscale.org/blog/2023/11/27/aistore-fast-tier)
2 changes: 1 addition & 1 deletion docs/configuration.md
@@ -128,7 +128,7 @@ The example above may serve as a simple illustration whereby `t[fbarswQP]` becom

## References

-* For Kubernetes deployment, please refer to a separate [ais-k8s](https://github.com/NVIDIA/ais-k8s) repository that also contains [AIS/K8s Operator](https://github.com/NVIDIA/ais-k8s/blob/master/operator/README.md) and its configuration-defining [resources](https://github.com/NVIDIA/ais-k8s/blob/master/operator/pkg/resources/cmn/config.go).
+* For Kubernetes deployment, please refer to a separate [ais-k8s](https://github.com/NVIDIA/ais-k8s) repository that also contains [AIS/K8s Operator](https://github.com/NVIDIA/ais-k8s/blob/main/operator/README.md) and its configuration-defining [resources](https://github.com/NVIDIA/ais-k8s/blob/main/operator/pkg/resources/cmn/config.go).
* To configure an optional AIStore authentication server, run `$ AIS_AUTHN_ENABLED=true make deploy`. For information on AuthN server, please see [AuthN documentation](/docs/authn.md).
* AIS [CLI](/docs/cli.md) is an easy-to-use convenient command-line management/monitoring tool. To get started with CLI, run `make cli` (that generates `ais` executable) and follow the prompts.

2 changes: 1 addition & 1 deletion docs/environment-vars.md
@@ -132,7 +132,7 @@ t[fXbarEnn] 3.08% 367.66GiB 51% 8.414TiB [0.9 1.1
```

See related:
-* [AIS K8s Operator: environment variables](https://github.com/NVIDIA/ais-k8s/blob/master/operator/pkg/resources/cmn/env.go)
+* [AIS K8s Operator: environment variables](https://github.com/NVIDIA/ais-k8s/blob/main/operator/pkg/resources/cmn/env.go)

## AWS S3

9 changes: 2 additions & 7 deletions docs/getting_started.md
@@ -412,13 +412,8 @@ In the software, _type of the deployment_ is also present in some minimal way. I
### Kubernetes deployments

-For any Kubernetes deployments (including, of course, production deployments) please use a separate and dedicated [AIS-K8s GitHub](https://github.com/NVIDIA/ais-k8s/blob/master/docs/README.md) repository. The repo contains detailed [Ansible playbooks](https://github.com/NVIDIA/ais-k8s/tree/master/playbooks) that cover a variety of use cases and configurations.
-
-In particular, [AIS-K8s GitHub repository](https://github.com/NVIDIA/ais-k8s/blob/master/terraform/README.md) provides a single-line command to deploy Kubernetes cluster and the underlying infrastructure with the AIStore cluster running inside (see below). The only requirement is having a few dependencies preinstalled (in particular, `helm`) and a Cloud account.
-
-The following GIF illustrates the steps to deploy AIS on the Google Cloud Platform (GCP):
-
-![Kubernetes cloud deployment](images/ais-k8s-deploy.gif)
+For any Kubernetes deployments (including, of course, production deployments) please use a separate and dedicated [AIS-K8s GitHub](https://github.com/NVIDIA/ais-k8s/blob/main/docs/README.md) repository.
+The repo contains detailed [Ansible playbooks](https://github.com/NVIDIA/ais-k8s/tree/main/playbooks) that cover a variety of use cases and configurations.

Finally, the [repository](https://github.com/NVIDIA/ais-k8s) hosts the [Kubernetes Operator](https://github.com/NVIDIA/ais-k8s/tree/master/operator) project that will eventually replace Helm charts and will become the main deployment, lifecycle, and operation management "vehicle" for AIStore.

Binary file removed docs/images/ais-k8s-deploy.gif
8 changes: 4 additions & 4 deletions docs/performance.md
@@ -48,11 +48,11 @@ The question, then, is how to get the maximum out of the underlying hardware? Ho

Specifically, selected `sysctl` system variables, such as `net.core.wmem_max`, `net.core.rmem_max`, `vm.swappiness`, and more - here's the approximate list:

-* [https://github.com/NVIDIA/ais-k8s/blob/master/playbooks/host-config/vars/host_config_sysctl.yml](sysctl)
+* [https://github.com/NVIDIA/ais-k8s/blob/main/playbooks/host-config/vars/host_config_sysctl.yml](sysctl)
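
As a quick, hedged illustration (not part of the playbook), here is a small Go program that reads the current value of a sysctl knob from `/proc/sys` - handy for spot-checking what the playbook actually applied; the list of knobs below is just the few named in this section:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// readSysctl maps a dotted sysctl name to its /proc/sys path and reads it,
// e.g. "net.core.wmem_max" -> /proc/sys/net/core/wmem_max.
func readSysctl(name string) (string, error) {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(name, ".", "/"))
	b, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	// Knobs named above; the full tuned set lives in host_config_sysctl.yml.
	for _, name := range []string{"net.core.wmem_max", "net.core.rmem_max", "vm.swappiness"} {
		v, err := readSysctl(name)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s = %s\n", name, v)
	}
}
```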

-The document is part of a separate [repository](https://github.com/NVIDIA/ais-k8s) that serves the (specific) purposes of deploying AIS on **bare-metal Kubernetes**. The repo includes a number of [playbooks](https://github.com/NVIDIA/ais-k8s/blob/master/playbooks/README.md) to assist in a full deployment of AIStore.
+The document is part of a separate [repository](https://github.com/NVIDIA/ais-k8s) that serves the (specific) purposes of deploying AIS on **bare-metal Kubernetes**. The repo includes a number of [playbooks](https://github.com/NVIDIA/ais-k8s/blob/main/playbooks/README.md) to assist in a full deployment of AIStore.

-In particular, there is a section of pre-deployment playbooks to [prepare AIS nodes for deployment on bare-metal Kubernetes](https://github.com/NVIDIA/ais-k8s/blob/master/playbooks/host-config/README.md)
+In particular, there is a section of pre-deployment playbooks to [prepare AIS nodes for deployment on bare-metal Kubernetes](https://github.com/NVIDIA/ais-k8s/blob/main/playbooks/host-config/README.md)

General references:

@@ -218,7 +218,7 @@ More: [Tune hard disk with `hdparm`](http://www.linux-magazine.com/Online/Featur
Another way to increase storage performance is to benchmark different filesystems: `ext`, `xfs`, `openzfs`.
Tuning the corresponding IO scheduler can prove to be important:

-* [ais_enable_multiqueue](https://github.com/NVIDIA/ais-k8s/blob/master/playbooks/host-config/docs/ais_enable_multiqueue.md)
+* [ais_enable_multiqueue](https://github.com/NVIDIA/ais-k8s/blob/main/playbooks/host-config/docs/ais_enable_multiqueue.md)
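
For context, the active IO scheduler for each block device can be inspected under `/sys/block`; the following Go sketch is illustrative only (not from the playbook) - the kernel brackets the scheduler currently in effect, e.g. `none [mq-deadline] kyber`:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Each block device lists its available IO schedulers here;
	// the bracketed entry is the one currently active.
	paths, err := filepath.Glob("/sys/block/*/queue/scheduler")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, p := range paths {
		b, err := os.ReadFile(p)
		if err != nil {
			continue // device may have gone away; skip it
		}
		dev := filepath.Base(filepath.Dir(filepath.Dir(p))) // /sys/block/<dev>/queue/scheduler
		fmt.Printf("%s: %s\n", dev, strings.TrimSpace(string(b)))
	}
}
```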

Other related references:
