diff --git a/.gitmodules b/.gitmodules new file mode 100644 index 0000000..e69de29 diff --git a/README.md b/README.md index b6df92a..6f9811a 100644 --- a/README.md +++ b/README.md @@ -1,208 +1,306 @@ -# Sharing a PostgreSQL database across clusters +# PostgreSQL Example -This tutorial demonstrates how to share a PostgreSQL database across multiple Kubernetes clusters that are located in different public and private cloud providers. +#### Sharing a PostgreSQL database across clusters -In this tutorial, you will create a Virtual Application Nework that enables communications across the public and private clusters. You will then deploy a PostgresSQL database instance to a private cluster and attach it to the Virtual Application Network. Thie will enable clients on different public clusters attached to the Virtual Application Nework to transparently access the database without the need for additional networking setup (e.g. no vpn or sdn required). +This example is part of a [suite of examples][examples] showing the +different ways you can use [Skupper][website] to connect services +across cloud providers, data centers, and edge sites. 
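+Before starting, it can help to confirm that the required
+command-line tools are available. This is an optional sanity check,
+not part of the example steps themselves:

```shell
# Optional sanity check (not part of the original steps): report
# whether each required CLI tool is on the PATH.
for tool in kubectl skupper; do
  if command -v "$tool" > /dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: not found (see the installation guides under Prerequisites)"
  fi
done
```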
-To complete this tutorial, do the following: +[website]: https://skupper.io/ +[examples]: https://skupper.io/examples/index.html +#### Contents + +* [Overview](#overview) * [Prerequisites](#prerequisites) -* [Step 1: Set up the demo](#step-1-set-up-the-demo) -* [Step 2: Deploy the Virtual Application Network](#step-2-deploy-the-virtual-application-network) -* [Step 3: Deploy the PostgreSQL service](#step-3-deploy-the-postgresql-service) -* [Step 4: Expose the PostgreSQL deployment to the Virtual Application Network](#step-4-expose-the-postgresql-deployment-to-the-virtual-application-network) -* [Step 5: Deploy interactive Pod with Postgresql client utilities](#step-5-deploy-interactive-pod-with-postgresql-client-utilities) -* [Step 6: Create a Database, Create a Table, Insert Values](#step-6-create-a-database-create-a-table-insert-values) +* [Step 1: Access your clusters](#step-1-access-your-clusters) +* [Step 2: Set up namespaces](#step-2-set-up-namespaces) +* [Step 3: Deploy the Virtual Application Network](#step-3-deploy-the-virtual-application-network) +* [Step 4: Deploy the PostgreSQL service](#step-4-deploy-the-postgresql-service) +* [Step 5: Create Skupper service for the Virtual Application Network](#step-5-create-skupper-service-for-the-virtual-application-network) +* [Step 6: Bind the Skupper service to the deployment target on the Virtual Application Network](#step-6-bind-the-skupper-service-to-the-deployment-target-on-the-virtual-application-network) +* [Step 7: Create interactive pod with PostgreSQL client utilities](#step-7-create-interactive-pod-with-postgresql-client-utilities) +* [Step 8: Create a Database, Create a Table, Insert Values](#step-8-create-a-database-create-a-table-insert-values) +* [Summary](#summary) * [Cleaning up](#cleaning-up) * [Next steps](#next-steps) +## Overview + +This tutorial demonstrates how to share a PostgreSQL database across multiple Kubernetes clusters that are located in +different public and private cloud 
providers. + ## Prerequisites -* The `kubectl` command-line tool, version 1.15 or later ([installation guide](https://kubernetes.io/docs/tasks/tools/install-kubectl/)) -* The `skupper` command-line tool, version 0.7 or later ([installation guide](https://skupper.io/start/index.html#step-1-install-the-skupper-command-line-tool-in-your-environment)) +* The `kubectl` command-line tool, version 1.15 or later + ([installation guide][install-kubectl]) + +* The `skupper` command-line tool, the latest version ([installation + guide][install-skupper]) + +* Access to at least one Kubernetes cluster, from any provider you + choose + +[install-kubectl]: https://kubernetes.io/docs/tasks/tools/install-kubectl/ +[install-skupper]: https://skupper.io/install/index.html + +## Step 1: Access your clusters + +The methods for accessing your clusters vary by Kubernetes provider. +Find the instructions for your chosen providers and use them to +authenticate and configure access for each console session. See the +following links for more information: + +* [Minikube](https://skupper.io/start/minikube.html) +* [Amazon Elastic Kubernetes Service (EKS)](https://skupper.io/start/eks.html) +* [Azure Kubernetes Service (AKS)](https://skupper.io/start/aks.html) +* [Google Kubernetes Engine (GKE)](https://skupper.io/start/gke.html) +* [IBM Kubernetes Service](https://skupper.io/start/ibmks.html) +* [OpenShift](https://skupper.io/start/openshift.html) +* [More providers](https://kubernetes.io/partners/#kcsp) + +Console for _public1_: + +~~~ shell +export KUBECONFIG=~/.kube/public1 +~~~ + +Console for _public2_: + +~~~ shell +export KUBECONFIG=~/.kube/public2 +~~~ -The basis for the demonstration is to depict the operation of a PostgreSQL database in a private cluster and the ability to access the database from clients resident on other public clusters. 
As an example, the cluster deployment might be comprised of: +Console for _private_: -* A private cloud cluster running on your local machine -* Two public cloud clusters running in public cloud providers +~~~ shell +export KUBECONFIG=~/.kube/private1 +~~~ -While the detailed steps are not included here, this demonstration can alternatively be performed with three separate namespaces on a single cluster. +## Step 2: Set up namespaces -## Step 1: Set up the demo +Use `kubectl create namespace` to create the namespaces you wish to +use (or use existing namespaces). Use `kubectl config set-context` to +set the current namespace for each session. -1. On your local machine, make a directory for this tutorial and clone the example repo: +Console for _public1_: - ```bash - mkdir pg-demo - cd pg-demo - git clone https://github.com/skupperproject/skupper-example-postgresql.git - ``` +~~~ shell +kubectl create namespace public1 +kubectl config set-context --current --namespace public1 +~~~ -2. Prepare the target clusters. +Console for _public2_: - 1. On your local machine, log in to each cluster in a separate terminal session. - 2. In each cluster, create a namespace to use for the demo. - 3. In each cluster, set the kubectl config context to use the demo namespace [(see kubectl cheat sheet)](https://kubernetes.io/docs/reference/kubectl/cheatsheet/) +~~~ shell +kubectl create namespace public2 +kubectl config set-context --current --namespace public2 +~~~ -## Step 2: Deploy the Virtual Application Network +Console for _private_: -On each cluster, define the virtual application network and the connectivity for the peer clusters. +~~~ shell +kubectl create namespace private1 +kubectl config set-context --current --namespace private1 +~~~ -1. In the terminal for the first public cluster, deploy the **public1** application router. 
Create connection token for connections from the **public2** cluster and the **private1** cluster:
+## Step 3: Deploy the Virtual Application Network
-   ```bash
-   skupper init --site-name public1
-   skupper token create --uses=2 public1-token.yaml
-   ```
+Creating a link requires use of two `skupper` commands in conjunction,
+`skupper token create` and `skupper link create`.
-2. In the terminal for the second public cluster, deploy the **public2** application router, create a connection token for a connection from the **private1** cluster and link to the **public1** cluster:
+The `skupper token create` command generates a secret token that
+signifies permission to create a link. The token also carries the
+link details. Then, in a remote namespace, the `skupper link create`
+command uses the token to create a link to the namespace that
+generated it.
-   ```bash
-   skupper init --site-name public2
-   skupper token create public2-token.yaml
-   skupper link create public1-token.yaml
-   ```
+**Note:** The link token is truly a *secret*. Anyone who has the
+token can link to your namespace. Make sure that only those you trust
+have access to it.
-3. In the terminal for the private cluster, deploy the **private1** application router and create its links to the **public1** and **public2** cluster
+First, use `skupper token create` in one namespace to generate the
+token. Then, use `skupper link create` in the other to create a link.
-   ```bash
-   skupper init --site-name private1
-   skupper link create public1-token.yaml
-   skupper link create public2-token.yaml
-   ```
+Console for _public1_:
-4.
In each of the cluster terminals, verify connectivity has been established +~~~ shell +skupper init --site-name public1 +skupper token create --uses 2 ~/public1-token.yaml +~~~ - ```bash - skupper link status - ``` +Console for _public2_: -## Step 3: Deploy the PostgreSQL service +~~~ shell +skupper init --site-name public2 +skupper token create ~/public2-token.yaml +skupper link create ~/public1-token.yaml +skupper link status --wait 30 +~~~ -After creating the application router network, deploy the PostgreSQL service. The **private1** cluster will be used to deploy the PostgreSQL server and the **public1** and **public2** clusters will be used to enable client communications to the server on the **private1** cluster. +Console for _private_: -1. In the terminal for the **private1** cluster where the PostgreSQL server will be created, deploy the following: +~~~ shell +skupper init --site-name private1 +skupper link create ~/public1-token.yaml +skupper link create ~/public2-token.yaml +skupper link status --wait 30 +~~~ - ```bash - kubectl apply -f ~/pg-demo/skupper-example-postgresql/deployment-postgresql-svc.yaml - ``` +If your console sessions are on different machines, you may need to +use `scp` or a similar tool to transfer the token. -## Step 4: Create Skupper service for the Virtual Application Network +## Step 4: Deploy the PostgreSQL service -1. In the terminal for the **private1** cluster, expose the postgresql-svc service: +After creating the application router network, deploy the PostgreSQL service. +The **private1** cluster will be used to deploy the PostgreSQL server and the **public1** +and **public2** clusters will be used to enable client communications to the server on +the **private1** cluster. - ```bash - skupper service create postgresql 5432 - ``` +Console for _private_: -2. 
In each of the cluster terminals, verify the service created is present
+~~~ shell
+mkdir ~/pg-demo
+cd ~/pg-demo
+git clone https://github.com/skupperproject/skupper-example-postgresql.git
+kubectl apply -f ~/pg-demo/skupper-example-postgresql/deployment-postgresql-svc.yaml
+~~~
-   ```bash
-   skupper service status
-   ```
+## Step 5: Create Skupper service for the Virtual Application Network
-   Note that the mapping for the service address defaults to `tcp`.
+Console for _private_:
-## Step 5: Bind the Skupper service to the deployment target on the Virtual Application Network
+~~~ shell
+skupper service create postgresql 5432
+~~~
-1. In the terminal for the **private1** cluster, expose the postgresql-svc service:
+Console for _public1_:
-   ```bash
-   skupper service bind postgresql deployment postgresql
-   ```
+~~~ shell
+skupper service status
+~~~
-2. In the **private1** cluster terminal, verify the service bind to the target
+Console for _public2_:
-   ```bash
-   skupper service status
-   ```
+~~~ shell
+skupper service status
+~~~
-   Note that the **private1** is the only cluster to provide a target.
+Note that the mapping for the service address defaults to `tcp`.
-## Step 6: Create interactive pod with PostgreSQL client utilities
+## Step 6: Bind the Skupper service to the deployment target on the Virtual Application Network
-1. From each cluster terminial, create a pod that contains the PostgreSQL client utilities:
+Console for _private_:
-   ```bash
-   kubectl run pg-shell -i --tty --image quay.io/skupper/simple-pg \
-   --env="PGUSER=postgres" \
-   --env="PGPASSWORD=skupper" \
-   --env="PGHOST=$(kubectl get service postgresql -o=jsonpath='{.spec.clusterIP}')" \
-   -- bash
-   ```
+~~~ shell
+skupper service bind postgresql deployment postgresql
+~~~
-2.
Note that if the session is ended, it can be resumed with the following:
-   ```bash
-   kubectl attach pg-shell -c pg-shell -i -t
-   ```
-## Step 7: Create a Database, Create a Table, Insert Values
+Console for _public1_:
+
+~~~ shell
+skupper service status
+~~~
+
+Console for _public2_:
+
+~~~ shell
+skupper service status
+~~~
+
+Note that **private1** is the only cluster to provide a target.
+
+## Step 7: Create interactive pod with PostgreSQL client utilities
+
+Console for _private_:
+
+~~~ shell
+kubectl run pg-shell -i --tty --image quay.io/skupper/simple-pg --env="PGUSER=postgres" --env="PGPASSWORD=skupper" --env="PGHOST=$(kubectl get service postgresql -o=jsonpath='{.spec.clusterIP}')" -- bash
+~~~
+
+Console for _public1_:
+
+~~~ shell
+kubectl run pg-shell -i --tty --image quay.io/skupper/simple-pg --env="PGUSER=postgres" --env="PGPASSWORD=skupper" --env="PGHOST=$(kubectl get service postgresql -o=jsonpath='{.spec.clusterIP}')" -- bash
+~~~
+
+Console for _public2_:
+
+~~~ shell
+kubectl run pg-shell -i --tty --image quay.io/skupper/simple-pg --env="PGUSER=postgres" --env="PGPASSWORD=skupper" --env="PGHOST=$(kubectl get service postgresql -o=jsonpath='{.spec.clusterIP}')" -- bash
+~~~
+
+Note that if the session is ended, it can be resumed with the following:
+    ```bash
+    kubectl attach pg-shell -c pg-shell -i -t
+    ```
+
+## Step 8: Create a Database, Create a Table, Insert Values
 Using the 'pg-shell' pod running on each cluster, operate on the database:
-1. Create a database called 'markets' from the **private1** cluster
+Console for _private_:
+
+~~~ shell
+createdb -e markets
+~~~
-   ```bash
-   bash-5.0$ createdb -e markets
-   ```
+Console for _public1_:
-2.
Create a table called 'product' in the 'markets' database from the **public1** cluster
+~~~ shell
+psql -d markets
+create table if not exists product (id SERIAL, name VARCHAR(100) NOT NULL, sku CHAR(8));
+~~~
-   ```bash
-   bash-5.0$ psql -d markets
-   markets# create table if not exists product (
-       id SERIAL,
-       name VARCHAR(100) NOT NULL,
-       sku CHAR(8)
-   );
-   ```
+Console for _public2_:
-3. Insert values into the `product` table in the `markets` database from the **public2** cluster:
+~~~ shell
+psql -d markets
+INSERT INTO product VALUES(DEFAULT, 'Apple, Fuji', '4131');
+INSERT INTO product VALUES(DEFAULT, 'Banana', '4011');
+INSERT INTO product VALUES(DEFAULT, 'Pear, Bartlett', '4214');
+INSERT INTO product VALUES(DEFAULT, 'Orange', '4056');
+~~~
-   ```bash
-   bash-5.0$ psql -d markets
-   markets# INSERT INTO product VALUES(DEFAULT, 'Apple, Fuji', '4131');
-   markets# INSERT INTO product VALUES(DEFAULT, 'Banana', '4011');
-   markets# INSERT INTO product VALUES(DEFAULT, 'Pear, Bartlett', '4214');
-   markets# INSERT INTO product VALUES(DEFAULT, 'Orange', '4056');
-   ```
+From any cluster, access the `product` tables in the `markets` database to view contents.
-4. From any cluster, access the `product` tables in the `markets` database to view contents
+## Summary
-   ```bash
-   bash-5.0$ psql -d markets
-   markets# SELECT * FROM product;
-   ```
+In this tutorial, we have seen how to create a Virtual Application Network that enables communications
+across the public and private clusters. We deployed a PostgreSQL database
+instance into a private cluster and attached it to the Virtual Application Network.
+This configuration enabled clients on different public clusters attached to the Virtual Application
+Network to transparently access the database without the need for additional networking setup
+(e.g. no VPN or firewall rules required).
-## Cleaning Up
+## Cleaning up
-Restore your cluster environment by returning the resources created in the demonstration.
On each cluster, delete the demo resources and the virtual application network: +To remove Skupper and the other resources from this exercise, use the +following commands. -1. In the terminal for the **public1** cluster, delete the resources: +Console for _public1_: - ```bash - $ kubectl delete pod pg-shell - $ skupper delete - ``` +~~~ shell +skupper delete +kubectl delete pod pg-shell +~~~ -2. In the terminal for the **public2** cluster, delete the resources: +Console for _public2_: - ```bash - $ kubectl delete pod pg-shell - $ skupper delete - ``` +~~~ shell +skupper delete +kubectl delete pod pg-shell +~~~ -3. In the terminal for the **private1** cluster, delete the resources: +Console for _private_: - ```bash - $ kubectl delete pod pg-shell - $ skupper unexpose deployment postgresql - $ kubectl delete -f ~/pg-demo/skupper-example-postgresql/deployment-postgresql-svc.yaml - $ skupper delete - ``` +~~~ shell +kubectl delete pod pg-shell +skupper unexpose deployment postgresql +kubectl delete -f ~/pg-demo/skupper-example-postgresql/deployment-postgresql-svc.yaml +skupper delete +~~~ ## Next steps - - [Try the example for multi-cluster MongoDB replica set deployment](https://github.com/skupperproject/skupper-example-mongodb-replica-set) - - [Find more examples](https://skupper.io/examples/) +Check out the other [examples][examples] on the Skupper website. diff --git a/skewer.yaml b/skewer.yaml new file mode 100644 index 0000000..e09666a --- /dev/null +++ b/skewer.yaml @@ -0,0 +1,141 @@ +title: "PostgreSQL Example" +subtitle: "Sharing a PostgreSQL database across clusters" +github_actions_url: "https://github.com/skupperproject/skupper-example-postgresql/actions/workflows/main.yaml" +overview: | + This tutorial demonstrates how to share a PostgreSQL database across multiple Kubernetes clusters that are located in + different public and private cloud providers. 
+prerequisites: !string prerequisites +sites: + private1: + kubeconfig: ~/.kube/private1 + namespace: private + public1: + kubeconfig: ~/.kube/public1 + namespace: public1 + public2: + kubeconfig: ~/.kube/public2 + namespace: public2 +steps: + - title: Access your clusters + preamble: !string access_your_clusters_preamble + commands: + public1: + - run: export KUBECONFIG=~/.kube/public1 + public2: + - run: export KUBECONFIG=~/.kube/public2 + private1: + - run: export KUBECONFIG=~/.kube/private1 + - title: Set up namespaces + preamble: !string set_up_your_namespaces_preamble + commands: + public1: + - run: kubectl create namespace public1 + - run: kubectl config set-context --current --namespace public1 + public2: + - run: kubectl create namespace public2 + - run: kubectl config set-context --current --namespace public2 + private1: + - run: kubectl create namespace private1 + - run: kubectl config set-context --current --namespace private1 + - title: Deploy the Virtual Application Network + preamble: !string link_your_namespaces_preamble + commands: + public1: + - run: skupper init --site-name public1 + - run: skupper token create --uses 2 ~/public1-token.yaml + public2: + - run: skupper init --site-name public2 + - run: skupper token create ~/public2-token.yaml + - run: skupper link create ~/public1-token.yaml + - run: skupper link status --wait 30 + private1: + - run: skupper init --site-name private1 + - run: skupper link create ~/public1-token.yaml + - run: skupper link create ~/public2-token.yaml + - run: skupper link status --wait 30 + postamble: !string link_your_namespaces_postamble + - title: Deploy the PostgreSQL service + preamble: | + After creating the application router network, deploy the PostgreSQL service. + The **private1** cluster will be used to deploy the PostgreSQL server and the **public1** + and **public2** clusters will be used to enable client communications to the server on + the **private1** cluster. 
+    commands:
+      private1:
+        - run: mkdir ~/pg-demo
+        - run: cd ~/pg-demo
+        - run: git clone https://github.com/skupperproject/skupper-example-postgresql.git
+        - run: kubectl apply -f ~/pg-demo/skupper-example-postgresql/deployment-postgresql-svc.yaml
+          await: [deployment/postgresql]
+  - title: Create Skupper service for the Virtual Application Network
+    commands:
+      private1:
+        - run: skupper service create postgresql 5432
+      public1:
+        - run: skupper service status
+      public2:
+        - run: skupper service status
+    postamble: |
+      Note that the mapping for the service address defaults to `tcp`.
+  - title: Bind the Skupper service to the deployment target on the Virtual Application Network
+    commands:
+      private1:
+        - run: skupper service bind postgresql deployment postgresql
+      public1:
+        - run: skupper service status
+      public2:
+        - run: skupper service status
+    postamble: |
+      Note that **private1** is the only cluster to provide a target.
+  - title: Create interactive pod with PostgreSQL client utilities
+    commands:
+      private1:
+        - run: "kubectl run pg-shell -i --tty --image quay.io/skupper/simple-pg --env=\"PGUSER=postgres\" --env=\"PGPASSWORD=skupper\" --env=\"PGHOST=$(kubectl get service postgresql -o=jsonpath='{.spec.clusterIP}')\" -- bash"
+      public1:
+        - run: "kubectl run pg-shell -i --tty --image quay.io/skupper/simple-pg --env=\"PGUSER=postgres\" --env=\"PGPASSWORD=skupper\" --env=\"PGHOST=$(kubectl get service postgresql -o=jsonpath='{.spec.clusterIP}')\" -- bash"
+      public2:
+        - run: "kubectl run pg-shell -i --tty --image quay.io/skupper/simple-pg --env=\"PGUSER=postgres\" --env=\"PGPASSWORD=skupper\" --env=\"PGHOST=$(kubectl get service postgresql -o=jsonpath='{.spec.clusterIP}')\" -- bash"
+    postamble: |
+      Note that if the session is ended, it can be resumed with the following:
+      ```bash
+      kubectl attach pg-shell -c pg-shell -i -t
+      ```
+  - title: Create a Database, Create a Table, Insert Values
+    preamble: |
+      Using the 'pg-shell' pod running on each cluster, operate on the database:
+    commands:
+      private1:
+        - run: createdb -e markets
+      public1:
+        - run: psql -d markets
+        - run: "create table if not exists product (id SERIAL, name VARCHAR(100) NOT NULL, sku CHAR(8));"
+      public2:
+        - run: psql -d markets
+        - run: "INSERT INTO product VALUES(DEFAULT, 'Apple, Fuji', '4131');"
+        - run: "INSERT INTO product VALUES(DEFAULT, 'Banana', '4011');"
+        - run: "INSERT INTO product VALUES(DEFAULT, 'Pear, Bartlett', '4214');"
+        - run: "INSERT INTO product VALUES(DEFAULT, 'Orange', '4056');"
+    postamble: |
+      From any cluster, access the `product` tables in the `markets` database to view contents.
+summary: |
+  In this tutorial, we have seen how to create a Virtual Application Network that enables communications
+  across the public and private clusters. We deployed a PostgreSQL database
+  instance into a private cluster and attached it to the Virtual Application Network.
+  This configuration enabled clients on different public clusters attached to the Virtual Application
+  Network to transparently access the database without the need for additional networking setup
+  (e.g. no VPN or firewall rules required).
+cleaning_up: + preamble: !string cleaning_up_preamble + commands: + public1: + - run: skupper delete + - run: kubectl delete pod pg-shell + public2: + - run: skupper delete + - run: kubectl delete pod pg-shell + private1: + - run: kubectl delete pod pg-shell + - run: skupper unexpose deployment postgresql + - run: kubectl delete -f ~/pg-demo/skupper-example-postgresql/deployment-postgresql-svc.yaml + - run: skupper delete +next_steps: !string next_steps diff --git a/subrepos/skewer/.github/workflows/main.yaml b/subrepos/skewer/.github/workflows/main.yaml new file mode 100644 index 0000000..d09655a --- /dev/null +++ b/subrepos/skewer/.github/workflows/main.yaml @@ -0,0 +1,19 @@ +name: main +on: [push, pull_request] +jobs: + test: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v2 + - uses: actions/setup-python@v2 + - uses: manusa/actions-setup-minikube@v2.4.2 + with: + minikube version: 'v1.22.0' + kubernetes version: 'v1.21.2' + github token: ${{ secrets.GITHUB_TOKEN }} + - run: mkdir -p "$HOME/.local/bin" + - run: echo "$HOME/.local/bin" >> $GITHUB_PATH + - run: echo "SKUPPER_URL=$(curl -sL https://api.github.com/repos/skupperproject/skupper/releases/latest | jq -r '.assets[] | select(.browser_download_url | contains("linux-amd64")) | .browser_download_url')" >> $GITHUB_ENV + - run: curl -sL $SKUPPER_URL | tar -C "$HOME/.local/bin" -xzf - + - run: skupper version + - run: ./plano test diff --git a/subrepos/skewer/.gitignore b/subrepos/skewer/.gitignore new file mode 100644 index 0000000..3368b7b --- /dev/null +++ b/subrepos/skewer/.gitignore @@ -0,0 +1,2 @@ +__pycache__/ +README.html diff --git a/subrepos/skewer/.gitrepo b/subrepos/skewer/.gitrepo new file mode 100644 index 0000000..021cf83 --- /dev/null +++ b/subrepos/skewer/.gitrepo @@ -0,0 +1,12 @@ +; DO NOT EDIT (unless you know what you are doing) +; +; This subdirectory is a git "subrepo", and this file is maintained by the +; git-subrepo command. 
See https://github.com/git-commands/git-subrepo#readme +; +[subrepo] + remote = https://github.com/skupperproject/skewer + branch = main + commit = 3c2a9559fd51f392af7f9a0a78b0e1fc0535201c + parent = ae39d93b1e5a628230c89ff332974e62d70caf67 + method = merge + cmdver = 0.4.3 diff --git a/subrepos/skewer/.planofile b/subrepos/skewer/.planofile new file mode 100644 index 0000000..cfedd59 --- /dev/null +++ b/subrepos/skewer/.planofile @@ -0,0 +1,23 @@ +from skewer import * + +@command +def generate(app): + """ + Generate README.md from the data in skewer.yaml + """ + with working_dir("test-example"): + generate_readme("skewer.yaml", "README.md") + print(read("README.md")) + +@command +def test(app): + with working_dir("test-example"): + generate_readme("skewer.yaml", "README.md") + check_file("README.md") + run_steps_on_minikube("skewer.yaml") + +@command +def render(app): + check_program("pandoc") + run(f"pandoc -o README.html README.md") + print(f"file:{get_real_path('README.html')}") diff --git a/subrepos/skewer/LICENSE.txt b/subrepos/skewer/LICENSE.txt new file mode 100644 index 0000000..e06d208 --- /dev/null +++ b/subrepos/skewer/LICENSE.txt @@ -0,0 +1,202 @@ +Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. 
For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. 
You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. 
Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. 
+ + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. + diff --git a/subrepos/skewer/README.md b/subrepos/skewer/README.md new file mode 100644 index 0000000..2c4f6bc --- /dev/null +++ b/subrepos/skewer/README.md @@ -0,0 +1,182 @@ +# Skewer + +[![main](https://github.com/skupperproject/skewer/actions/workflows/main.yaml/badge.svg)](https://github.com/skupperproject/skewer/actions/workflows/main.yaml) + +A library for documenting and testing Skupper examples + +A `skewer.yaml` file describes the steps and commands to achieve an +objective using Skupper. Skewer takes the `skewer.yaml` file as input +and produces a `README.md` file and a test routine as output. + +## An example example + +[Example `skewer.yaml` file](test-example/skewer.yaml) + +[Example `README.md` output](test-example/README.md) + +[Example generate and test functions](test-example/Planofile) + +## Setting up Skewer for your own example + +Add the Skewer code as a subrepo in your project: + + cd project-dir/ + git subrepo clone https://github.com/skupperproject/skewer subrepos/skewer + +Symlink the Skewer libraries into your `python` directory: + + mkdir -p python + ln -s ../subrepos/skewer/python/skewer.strings python/skewer.strings + ln -s ../subrepos/skewer/python/skewer.py python/skewer.py + ln -s ../subrepos/skewer/python/plano.py python/plano.py + +Symlink the `plano` command into the root of your project. 
Copy the
+example `Planofile` there as well:
+
+    ln -s subrepos/skewer/plano
+    cp subrepos/skewer/test-example/Planofile .
+
+Use your editor to create a `skewer.yaml` file:
+
+    emacs skewer.yaml
+
+Run the `./plano` command to see what you can do: generate the
+README and test your example.
+
+    ./plano
+
+## Installing Git Subrepo on Fedora
+
+    dnf install git-subrepo
+
+## Updating a Skewer subrepo inside your example project
+
+Usually this will do what you want:
+
+    git subrepo pull subrepos/skewer
+
+If you made changes to the Skewer subrepo, the command above will ask
+you to perform a merge. You can use the procedure that the subrepo
+tooling offers, but if you'd prefer to simply blow away your changes
+and get the latest Skewer, you can use the following procedure:
+
+    git subrepo clean subrepos/skewer
+    git rm -rf subrepos/skewer/
+    git commit -am "Temporarily remove the previous version of Skewer"
+    git subrepo clone https://github.com/skupperproject/skewer subrepos/skewer
+
+You should also be able to use `git subrepo pull --force` to achieve
+the same result, but it didn't work with my version of Git Subrepo.
+
+## Skewer YAML
+
+The top level:
+
+~~~ yaml
+title: # Your example's title (required)
+subtitle: # Your chosen subtitle (required)
+github_actions_url: # The URL of your workflow (optional)
+overview: # Text introducing your example (optional)
+prerequisites: # Text describing prerequisites (optional)
+sites: # A map of named sites. See below.
+steps: # A list of steps. See below.
+summary: # Text to summarize what the user did (optional)
+cleaning_up: # A special step for cleaning up (optional)
+next_steps: # Text linking to more examples (optional)
+~~~
+
+A site:
+
+~~~ yaml
+<site-name>:
+  kubeconfig: # (required)
+  namespace: # (required)
+~~~
+
+An example site:
+
+~~~ yaml
+sites:
+  east:
+    kubeconfig: ~/.kube/config-east
+    namespace: east
+~~~
+
+A step:
+
+~~~ yaml
+title: # The step title (required)
+preamble: # Text before the commands (optional)
+commands: # Named groups of commands. See below.
+postamble: # Text after the commands (optional)
+~~~
+
+The step commands are separated into named groups corresponding to the
+sites. Each named group contains a list of command entries. Each
+command entry has a `run` field containing a shell command and other
+fields for awaiting completion or providing sample output.
+
+~~~ yaml
+commands:
+  east:
+    - run: echo Hello
+      output: Hello
+~~~
+
+An example step:
+
+~~~ yaml
+steps:
+  - title: Expose the frontend service
+    preamble: |
+      We have established connectivity between the two namespaces and
+      made the backend in `east` available to the frontend in `west`.
+      Before we can test the application, we need external access to
+      the frontend.
+
+      Use `kubectl expose` with `--type LoadBalancer` to open network
+      access to the frontend service. Use `kubectl get services` to
+      check for the service and its external IP address.
+
+    commands:
+      west:
+        - run: kubectl expose deployment/hello-world-frontend --port 8080 --type LoadBalancer
+          await_external_ip: [service/hello-world-frontend]
+          output: |
+            service/hello-world-frontend exposed
+        - run: kubectl get services
+          output: |
+            NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                           AGE
+            hello-world-backend    ClusterIP      10.102.112.121   <none>           8080/TCP                          30s
+            hello-world-frontend   LoadBalancer   10.98.170.106    10.98.170.106    8080:30787/TCP                    2s
+            skupper                LoadBalancer   10.101.101.208   10.101.101.208   8080:31494/TCP                    82s
+            skupper-router         LoadBalancer   10.110.252.252   10.110.252.252   55671:32111/TCP,45671:31193/TCP   86s
+            skupper-router-local   ClusterIP      10.96.123.13     <none>           5671/TCP                          86s
+~~~
+
+Or you can use a named, canned step from the library of standard
+steps:
+
+~~~ yaml
+standard: configure_separate_console_sessions
+~~~
+
+The initial steps are usually standard ones, so you may be able to use
+this:
+
+~~~ yaml
+steps:
+  - standard: configure_separate_console_sessions
+  - standard: access_your_clusters
+  - standard: set_up_your_namespaces
+  - standard: install_skupper_in_your_namespaces
+  - standard: check_the_status_of_your_namespaces
+  [...]
+~~~
+
+Skewer has boilerplate strings for a lot of cases. You can see what's
+there in the `skewer.strings` file. To include a string, use the
+`!string` directive.
+ +~~~ yaml +next_steps: !string next_steps +~~~ diff --git a/subrepos/skewer/plano b/subrepos/skewer/plano new file mode 120000 index 0000000..41ce1be --- /dev/null +++ b/subrepos/skewer/plano @@ -0,0 +1 @@ +subrepos/plano/bin/plano \ No newline at end of file diff --git a/subrepos/skewer/python/plano.py b/subrepos/skewer/python/plano.py new file mode 120000 index 0000000..56374f0 --- /dev/null +++ b/subrepos/skewer/python/plano.py @@ -0,0 +1 @@ +../subrepos/plano/python/plano.py \ No newline at end of file diff --git a/subrepos/skewer/python/skewer.py b/subrepos/skewer/python/skewer.py new file mode 100644 index 0000000..92cef10 --- /dev/null +++ b/subrepos/skewer/python/skewer.py @@ -0,0 +1,400 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# + +from plano import * + +import yaml as _yaml + +class _StringCatalog(dict): + def __init__(self, path): + super(_StringCatalog, self).__init__() + + self.path = "{0}.strings".format(split_extension(path)[0]) + + check_file(self.path) + + key = None + out = list() + + for line in read_lines(self.path): + line = line.rstrip() + + if line.startswith("[") and line.endswith("]"): + if key: + self[key] = "".join(out).strip() + + out = list() + key = line[1:-1] + + continue + + out.append(line) + out.append("\n") + + self[key] = "".join(out).strip() + + def __repr__(self): + return format_repr(self) + +_strings = _StringCatalog(__file__) + +_standard_steps = { + "configure_separate_console_sessions": { + "title": "Configure separate console sessions", + "preamble": _strings["configure_separate_console_sessions_preamble"], + "commands": [ + {"run": "export KUBECONFIG=~/.kube/config-@namespace@"} + ], + }, + "access_your_clusters": { + "title": "Access your clusters", + "preamble": _strings["access_your_clusters_preamble"], + }, + "set_up_your_namespaces": { + "title": "Set up your namespaces", + "preamble": _strings["set_up_your_namespaces_preamble"], + "commands": [ + {"run": "kubectl create namespace @namespace@"}, + {"run": "kubectl config set-context --current --namespace @namespace@"}, + ], + }, + "install_skupper_in_your_namespaces": { + "title": "Install Skupper in your namespaces", + "preamble": _strings["install_skupper_in_your_namespaces_preamble"], + "commands": [ + { + "run": "skupper init", + "await": ["deployment/skupper-service-controller", "deployment/skupper-router"], + } + ], + "postamble": _strings["install_skupper_in_your_namespaces_postamble"], + }, + "check_the_status_of_your_namespaces": { + "title": "Check the status of your namespaces", + "preamble": _strings["check_the_status_of_your_namespaces_preamble"], + "commands": [ + {"run": "skupper status"} + ], + "postamble": _strings["check_the_status_of_your_namespaces_postamble"], + }, +} + +def 
_string_loader(loader, node): + return _strings[node.value] + +_yaml.SafeLoader.add_constructor("!string", _string_loader) + +def check_environment(): + check_program("kubectl") + check_program("skupper") + +# Eventually Kubernetes will make this nicer: +# https://github.com/kubernetes/kubernetes/pull/87399 +# https://github.com/kubernetes/kubernetes/issues/80828 +# https://github.com/kubernetes/kubernetes/issues/83094 +def await_resource(group, name, namespace=None): + base_command = "kubectl" + + if namespace is not None: + base_command = f"{base_command} -n {namespace}" + + notice(f"Waiting for {group}/{name} to become available") + + for i in range(180): + sleep(1) + + if run(f"{base_command} get {group}/{name}", check=False).exit_code == 0: + break + else: + fail(f"Timed out waiting for {group}/{name}") + + if group == "deployment": + try: + run(f"{base_command} wait --for condition=available --timeout 180s {group}/{name}") + except: + run(f"{base_command} logs {group}/{name}") + raise + +def await_external_ip(group, name, namespace=None): + await_resource(group, name, namespace=namespace) + + base_command = "kubectl" + + if namespace is not None: + base_command = f"{base_command} -n {namespace}" + + for i in range(180): + sleep(1) + + if call(f"{base_command} get {group}/{name} -o jsonpath='{{.status.loadBalancer.ingress}}'") != "": + break + else: + fail(f"Timed out waiting for external IP for {group}/{name}") + +def run_steps_on_minikube(skewer_file): + with open(skewer_file) as file: + skewer_data = _yaml.safe_load(file) + + _apply_standard_steps(skewer_data) + + work_dir = make_temp_dir() + + try: + run(f"minikube -p skewer start") + + for name, value in skewer_data["sites"].items(): + kubeconfig = value["kubeconfig"].replace("~", work_dir) + + with working_env(KUBECONFIG=kubeconfig): + run(f"minikube -p skewer update-context") + check_file(ENV["KUBECONFIG"]) + + with open("/tmp/minikube-tunnel-output", "w") as tunnel_output_file: + with start(f"minikube 
-p skewer tunnel", output=tunnel_output_file): + _run_steps(work_dir, skewer_data) + finally: + run(f"minikube -p skewer delete") + +def run_steps_external(skewer_file, **kubeconfigs): + with open(skewer_file) as file: + skewer_data = _yaml.safe_load(file) + + _apply_standard_steps(skewer_data) + + work_dir = make_temp_dir() + + for name, kubeconfig in kubeconfigs.items(): + skewer_data["sites"][name]["kubeconfig"] = kubeconfig + + _run_steps(work_dir, skewer_data) + +def _run_steps(work_dir, skewer_data): + sites = skewer_data["sites"] + + for step_data in skewer_data["steps"]: + _run_step(work_dir, skewer_data, step_data) + + if "cleaning_up" in skewer_data: + _run_step(work_dir, skewer_data, skewer_data["cleaning_up"]) + +def _run_step(work_dir, skewer_data, step_data): + if "commands" not in step_data: + return + + if "title" in step_data: + notice("Running step '{}'", step_data["title"]) + + try: + items = step_data["commands"].items() + except AttributeError: + items = list() + + for context_name in skewer_data["sites"]: + items.append((context_name, step_data["commands"])) + + for context_name, commands in items: + kubeconfig = skewer_data["sites"][context_name]["kubeconfig"].replace("~", work_dir) + + with working_env(KUBECONFIG=kubeconfig): + for command in commands: + run(command["run"].replace("~", work_dir), shell=True) + + if "await" in command: + for resource in command["await"]: + group, name = resource.split("/", 1) + await_resource(group, name) + + if "await_external_ip" in command: + for resource in command["await_external_ip"]: + group, name = resource.split("/", 1) + await_external_ip(group, name) + + if "sleep" in command: + sleep(command["sleep"]) + +def generate_readme(skewer_file, output_file): + with open(skewer_file) as file: + skewer_data = _yaml.safe_load(file) + + out = list() + + out.append(f"# {skewer_data['title']}") + out.append("") + + if "github_actions_url" in skewer_data: + url = skewer_data["github_actions_url"] + 
out.append(f"[![main]({url}/badge.svg)]({url})") + out.append("") + + if "subtitle" in skewer_data: + out.append(f"#### {skewer_data['subtitle']}") + out.append("") + + out.append(_strings["example_suite_para"]) + out.append("") + out.append("#### Contents") + out.append("") + + if "overview" in skewer_data: + out.append("* [Overview](#overview)") + + if "prerequisites" in skewer_data: + out.append("* [Prerequisites](#prerequisites)") + + _apply_standard_steps(skewer_data) + + for i, step_data in enumerate(skewer_data["steps"], 1): + title = f"Step {i}: {step_data['title']}" + + fragment = replace(title, " ", "_") + fragment = replace(fragment, r"[\W]", "") + fragment = replace(fragment, "_", "-") + fragment = fragment.lower() + + out.append(f"* [{title}](#{fragment})") + + if "summary" in skewer_data: + out.append("* [Summary](#summary)") + + if "cleaning_up" in skewer_data: + out.append("* [Cleaning up](#cleaning-up)") + + if "next_steps" in skewer_data: + out.append("* [Next steps](#next-steps)") + + out.append("") + + if "overview" in skewer_data: + out.append("## Overview") + out.append("") + out.append(skewer_data["overview"].strip()) + out.append("") + + if "prerequisites" in skewer_data: + out.append("## Prerequisites") + out.append("") + out.append(skewer_data["prerequisites"].strip()) + out.append("") + + for i, step_data in enumerate(skewer_data["steps"], 1): + out.append(f"## Step {i}: {step_data['title']}") + out.append("") + out.append(_generate_readme_step(skewer_data, step_data)) + out.append("") + + if "summary" in skewer_data: + out.append("## Summary") + out.append("") + out.append(skewer_data["summary"].strip()) + out.append("") + + if "cleaning_up" in skewer_data: + out.append("## Cleaning up") + out.append("") + out.append(_generate_readme_step(skewer_data, skewer_data["cleaning_up"])) + out.append("") + + if "next_steps" in skewer_data: + out.append("## Next steps") + out.append("") + out.append(skewer_data["next_steps"].strip()) + + 
write(output_file, "\n".join(out).strip() + "\n") + +def _generate_readme_step(skewer_data, step_data): + out = list() + + if "preamble" in step_data: + out.append(step_data["preamble"].strip()) + out.append("") + + if "commands" in step_data: + try: + items = step_data["commands"].items() + except AttributeError: + items = ((None, step_data["commands"]),) + + for context_name, commands in items: + outputs = list() + + if context_name: + namespace = skewer_data["sites"][context_name]["namespace"] + out.append(f"Console for _{namespace}_:") + out.append("") + else: + out.append("Console:") + out.append("") + + out.append("~~~ shell") + + for command in commands: + out.append(command["run"]) + + if "output" in command: + outputs.append((command["run"], command["output"])) + + out.append("~~~") + out.append("") + + if outputs: + out.append("Sample output:") + out.append("") + out.append("~~~") + + if len(outputs) > 1: + out.append("\n\n".join((f"$ {run}\n{output.strip()}" for run, output in outputs))) + else: + out.append(outputs[0][1].strip()) + + out.append("~~~") + out.append("") + + if "postamble" in step_data: + out.append(step_data["postamble"].strip()) + + return "\n".join(out).strip() + +def _apply_standard_steps(skewer_data): + for step_data in skewer_data["steps"]: + if "standard" not in step_data: + continue + + standard_step_data = _standard_steps[step_data["standard"]] + + step_data["title"] = standard_step_data["title"] + + if "preamble" in standard_step_data: + step_data["preamble"] = standard_step_data["preamble"] + + if "postamble" in standard_step_data: + step_data["postamble"] = standard_step_data["postamble"] + + if "commands" in standard_step_data: + step_data["commands"] = dict() + + for namespace, context_data in skewer_data["sites"].items(): + resolved_commands = list() + + for command in standard_step_data["commands"]: + resolved_command = dict(command) + resolved_command["run"] = command["run"].replace("@namespace@", namespace) + + 
resolved_commands.append(resolved_command) + + step_data["commands"][namespace] = resolved_commands diff --git a/subrepos/skewer/python/skewer.strings b/subrepos/skewer/python/skewer.strings new file mode 100644 index 0000000..c096532 --- /dev/null +++ b/subrepos/skewer/python/skewer.strings @@ -0,0 +1,127 @@ +[example_suite_para] + +This example is part of a [suite of examples][examples] showing the +different ways you can use [Skupper][website] to connect services +across cloud providers, data centers, and edge sites. + +[website]: https://skupper.io/ +[examples]: https://skupper.io/examples/index.html + +[prerequisites] + +* The `kubectl` command-line tool, version 1.15 or later + ([installation guide][install-kubectl]) + +* The `skupper` command-line tool, the latest version ([installation + guide][install-skupper]) + +* Access to at least one Kubernetes cluster, from any provider you + choose + +[install-kubectl]: https://kubernetes.io/docs/tasks/tools/install-kubectl/ +[install-skupper]: https://skupper.io/install/index.html + +[configure_separate_console_sessions_preamble] + +Skupper is designed for use with multiple namespaces, typically on +different clusters. The `skupper` command uses your +[kubeconfig][kubeconfig] and current context to select the namespace +where it operates. + +[kubeconfig]: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/ + +Your kubeconfig is stored in a file in your home directory. The +`skupper` and `kubectl` commands use the `KUBECONFIG` environment +variable to locate it. + +A single kubeconfig supports only one active context per user. +Since you will be using multiple contexts at once in this +exercise, you need to create distinct kubeconfigs. + +Start a console session for each of your namespaces. Set the +`KUBECONFIG` environment variable to a different path in each +session. + +[access_your_clusters_preamble] + +The methods for accessing your clusters vary by Kubernetes provider. 
+Find the instructions for your chosen providers and use them to
+authenticate and configure access for each console session. See the
+following links for more information:
+
+* [Minikube](https://skupper.io/start/minikube.html)
+* [Amazon Elastic Kubernetes Service (EKS)](https://skupper.io/start/eks.html)
+* [Azure Kubernetes Service (AKS)](https://skupper.io/start/aks.html)
+* [Google Kubernetes Engine (GKE)](https://skupper.io/start/gke.html)
+* [IBM Kubernetes Service](https://skupper.io/start/ibmks.html)
+* [OpenShift](https://skupper.io/start/openshift.html)
+* [More providers](https://kubernetes.io/partners/#kcsp)
+
+[set_up_your_namespaces_preamble]
+
+Use `kubectl create namespace` to create the namespaces you wish to
+use (or use existing namespaces). Use `kubectl config set-context` to
+set the current namespace for each session.
+
+[install_skupper_in_your_namespaces_preamble]
+
+The `skupper init` command installs the Skupper router and service
+controller in the current namespace. Run the `skupper init` command
+in each namespace.
+
+**Note:** If you are using Minikube, [you need to start `minikube
+tunnel`][minikube-tunnel] before you install Skupper.
+
+[minikube-tunnel]: https://skupper.io/start/minikube.html#running-minikube-tunnel
+
+[install_skupper_in_your_namespaces_postamble]
+
+[check_the_status_of_your_namespaces_preamble]
+
+Use `skupper status` in each console to check that Skupper is
+installed.
+
+[check_the_status_of_your_namespaces_postamble]
+
+You should see output like this for each namespace:
+
+~~~
+Skupper is enabled for namespace "<namespace>" in interior mode. It is not connected to any other sites. It has no exposed services.
+The site console url is: http://<address>:8080
+The credentials for internal console-auth mode are held in secret: 'skupper-console-users'
+~~~
+
+As you move through the steps below, you can use `skupper status` at
+any time to check your progress.
+
+[link_your_namespaces_preamble]
+
+Creating a link requires the use of two `skupper` commands in
+conjunction, `skupper token create` and `skupper link create`.
+
+The `skupper token create` command generates a secret token that
+signifies permission to create a link. The token also carries the
+link details. Then, in a remote namespace, the `skupper link create`
+command uses the token to create a link to the namespace that
+generated it.
+
+**Note:** The link token is truly a *secret*. Anyone who has the
+token can link to your namespace. Make sure that only those you trust
+have access to it.
+
+First, use `skupper token create` in one namespace to generate the
+token. Then, use `skupper link create` in the other to create a link.
+
+[link_your_namespaces_postamble]
+
+If your console sessions are on different machines, you may need to
+use `scp` or a similar tool to transfer the token.
+
+[cleaning_up_preamble]
+
+To remove Skupper and the other resources from this exercise, use the
+following commands.
+
+[next_steps]
+
+Check out the other [examples][examples] on the Skupper website.
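
The `[key]` section format of the `skewer.strings` file above is what the
`_StringCatalog` class in `skewer.py` parses at import time. As a rough
illustration of how that format maps to a dictionary, here is a minimal
standalone sketch; the `parse_strings` helper and the sample text are
hypothetical, not part of Skewer itself:

```python
def parse_strings(text):
    # Build {key: text} from "[key]" section headers, mirroring the
    # _StringCatalog parsing logic in skewer.py (simplified).
    catalog = {}
    key = None
    out = []

    for line in text.splitlines():
        line = line.rstrip()

        # A "[key]" header closes the previous section and opens a new one
        if line.startswith("[") and line.endswith("]"):
            if key is not None:
                catalog[key] = "\n".join(out).strip()
            out = []
            key = line[1:-1]
            continue

        out.append(line)

    # Flush the final section
    if key is not None:
        catalog[key] = "\n".join(out).strip()

    return catalog

sample = """\
[greeting]

Hello, Skupper.

[next_steps]

Check out the other examples.
"""

catalog = parse_strings(sample)
print(catalog["greeting"])    # Hello, Skupper.
print(catalog["next_steps"])  # Check out the other examples.
```

Each `[key]` header starts a new entry; the text up to the next header
becomes that entry's value, with surrounding blank lines stripped.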
diff --git a/subrepos/skewer/subrepos/plano/.github/workflows/main.yaml b/subrepos/skewer/subrepos/plano/.github/workflows/main.yaml new file mode 100644 index 0000000..789ed09 --- /dev/null +++ b/subrepos/skewer/subrepos/plano/.github/workflows/main.yaml @@ -0,0 +1,12 @@ +name: main +on: [push, pull_request] +jobs: + test: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v2 + - uses: actions/setup-python@v2 + - run: make test + - run: make install + - run: echo "$HOME/.local/bin" >> $GITHUB_PATH + - run: plano-self-test diff --git a/subrepos/skewer/subrepos/plano/.gitignore b/subrepos/skewer/subrepos/plano/.gitignore new file mode 100644 index 0000000..8b940c3 --- /dev/null +++ b/subrepos/skewer/subrepos/plano/.gitignore @@ -0,0 +1,7 @@ +*.pyc +__pycache__/ +/build +/dist +/.coverage +/htmlcov +test-project/build diff --git a/subrepos/skewer/subrepos/plano/.gitrepo b/subrepos/skewer/subrepos/plano/.gitrepo new file mode 100644 index 0000000..e540947 --- /dev/null +++ b/subrepos/skewer/subrepos/plano/.gitrepo @@ -0,0 +1,12 @@ +; DO NOT EDIT (unless you know what you are doing) +; +; This subdirectory is a git "subrepo", and this file is maintained by the +; git-subrepo command. 
See https://github.com/git-commands/git-subrepo#readme +; +[subrepo] + remote = git@github.com:ssorj/plano.git + branch = main + commit = d6694e47baf9d4af63510d90eb07ddb48c2f4010 + parent = aa48e75ba56a2902ce6345d51f3fcf7493715471 + method = merge + cmdver = 0.4.3 diff --git a/subrepos/skewer/subrepos/plano/.travis.yml b/subrepos/skewer/subrepos/plano/.travis.yml new file mode 100644 index 0000000..025e86e --- /dev/null +++ b/subrepos/skewer/subrepos/plano/.travis.yml @@ -0,0 +1,9 @@ +services: + - docker + +before_install: + - sudo apt-get update -qq + - sudo apt-get install -qq -y make python + +script: + - make big-test DOCKER_COMMAND=docker diff --git a/subrepos/skewer/subrepos/plano/LICENSE.txt b/subrepos/skewer/subrepos/plano/LICENSE.txt new file mode 100644 index 0000000..e06d208 --- /dev/null +++ b/subrepos/skewer/subrepos/plano/LICENSE.txt @@ -0,0 +1,202 @@ +Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. 
+ + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. 
You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. 
Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. 
+ + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. + diff --git a/subrepos/skewer/subrepos/plano/Makefile b/subrepos/skewer/subrepos/plano/Makefile new file mode 100644 index 0000000..cd86d8f --- /dev/null +++ b/subrepos/skewer/subrepos/plano/Makefile @@ -0,0 +1,106 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# + +.NOTPARALLEL: + +export PYTHONPATH := python:${PYTHONPATH} + +DESTDIR := / +PREFIX := ${HOME}/.local +DOCKER_COMMAND := podman + +.PHONY: default +default: build + +.PHONY: help +help: + @echo "build Build the code" + @echo "test Run the tests" + @echo "install Install the code" + @echo "clean Remove transient files from the checkout" + +.PHONY: build +build: + ./setup.py build + ./setup.py check + +.PHONY: install +install: clean + ./setup.py install --root ${DESTDIR} --prefix ${PREFIX} + +.PHONY: docs +docs: + mkdir -p build + sphinx-build -M html docs build/docs + +.PHONY: clean +clean: + find . -type f -name \*.pyc -delete + find . -type d -name __pycache__ -exec rm -rf \{} + + rm -rf build dist htmlcov .coverage test-project/build + +.PHONY: test +test: clean build + python3 scripts/test + $$(type -P python2) && python2 scripts/test || : + +.PHONY: big-test +big-test: test test-centos-8 test-centos-7 test-fedora test-ubuntu + +.PHONY: test-centos-8 +test-centos-8: + ${DOCKER_COMMAND} build -f scripts/test-centos-8.dockerfile -t plano-test-centos-8 . + ${DOCKER_COMMAND} run --rm plano-test-centos-8 + +.PHONY: test-centos-7 +test-centos-7: + ${DOCKER_COMMAND} build -f scripts/test-centos-7.dockerfile -t plano-test-centos-7 . + ${DOCKER_COMMAND} run --rm plano-test-centos-7 + +.PHONY: test-centos-6 +test-centos-6: + ${DOCKER_COMMAND} build -f scripts/test-centos-6.dockerfile -t plano-test-centos-6 . + ${DOCKER_COMMAND} run --rm plano-test-centos-6 + +.PHONY: test-fedora +test-fedora: + ${DOCKER_COMMAND} build -f scripts/test-fedora.dockerfile -t plano-test-fedora . + ${DOCKER_COMMAND} run --rm plano-test-fedora + +.PHONY: test-ubuntu +test-ubuntu: + ${DOCKER_COMMAND} build -f scripts/test-ubuntu.dockerfile -t plano-test-ubuntu . + ${DOCKER_COMMAND} run --rm plano-test-ubuntu + +.PHONY: test-bootstrap +test-bootstrap: + ${DOCKER_COMMAND} build -f scripts/test-bootstrap.dockerfile -t plano-test-bootstrap . 
+ ${DOCKER_COMMAND} run --rm plano-test-bootstrap + +.PHONY: debug-bootstrap +debug-bootstrap: + ${DOCKER_COMMAND} build -f scripts/test-bootstrap.dockerfile -t plano-test-bootstrap . + ${DOCKER_COMMAND} run --rm -it plano-test-bootstrap /bin/bash + +.PHONY: coverage +coverage: + coverage3 run --omit /tmp/\* scripts/test + coverage3 report + coverage3 html + @echo file:${CURDIR}/htmlcov/index.html diff --git a/subrepos/skewer/subrepos/plano/README.md b/subrepos/skewer/subrepos/plano/README.md new file mode 100644 index 0000000..7857aeb --- /dev/null +++ b/subrepos/skewer/subrepos/plano/README.md @@ -0,0 +1,13 @@ +# Plano + +[![main](https://github.com/ssorj/plano/workflows/main/badge.svg)](https://github.com/ssorj/plano/actions?query=workflow%3Amain) + +Python functions for writing shell-style system scripts. + +## Dependencies + + - curl + - make + - python + - tar + - findutils diff --git a/subrepos/skewer/subrepos/plano/bin/plano b/subrepos/skewer/subrepos/plano/bin/plano new file mode 100755 index 0000000..1b57e95 --- /dev/null +++ b/subrepos/skewer/subrepos/plano/bin/plano @@ -0,0 +1,30 @@ +#!/usr/bin/python3 +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# + +import os +import sys + +if os.path.isdir("python"): + sys.path.insert(0, "python") + +from plano import PlanoCommand + +if __name__ == "__main__": + PlanoCommand().main() diff --git a/subrepos/skewer/subrepos/plano/bin/plano-self-test.in b/subrepos/skewer/subrepos/plano/bin/plano-self-test.in new file mode 100755 index 0000000..da54368 --- /dev/null +++ b/subrepos/skewer/subrepos/plano/bin/plano-self-test.in @@ -0,0 +1,37 @@ +#!/usr/bin/python3 +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +import os +import sys + +from plano import PlanoTestCommand, PYTHON3 + +home = os.environ.get("PLANO_HOME", "@default_home@") +sys.path.insert(0, os.path.join(home, "python")) + +if __name__ == "__main__": + import plano_tests + test_modules = [plano_tests] + + if PYTHON3: + import bullseye_tests + test_modules.append(bullseye_tests) + + PlanoTestCommand(test_modules).main() diff --git a/subrepos/skewer/subrepos/plano/bin/planosh b/subrepos/skewer/subrepos/plano/bin/planosh new file mode 100755 index 0000000..a9cde4a --- /dev/null +++ b/subrepos/skewer/subrepos/plano/bin/planosh @@ -0,0 +1,30 @@ +#!/usr/bin/python3 +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. 
See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +import os +import sys + +if os.path.isdir("python"): + sys.path.insert(0, "python") + +from plano import PlanoShellCommand + +if __name__ == "__main__": + PlanoShellCommand().main() diff --git a/subrepos/skewer/subrepos/plano/bin/planotest b/subrepos/skewer/subrepos/plano/bin/planotest new file mode 100755 index 0000000..d27b087 --- /dev/null +++ b/subrepos/skewer/subrepos/plano/bin/planotest @@ -0,0 +1,30 @@ +#!/usr/bin/python3 +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# + +import os +import sys + +if os.path.isdir("python"): + sys.path.insert(0, "python") + +from plano import PlanoTestCommand + +if __name__ == "__main__": + PlanoTestCommand().main() diff --git a/subrepos/skewer/subrepos/plano/docs/conf.py b/subrepos/skewer/subrepos/plano/docs/conf.py new file mode 100644 index 0000000..3277b1e --- /dev/null +++ b/subrepos/skewer/subrepos/plano/docs/conf.py @@ -0,0 +1,34 @@ +# import os +# import sys + +# sys.path.insert(0, os.path.abspath("../python")) + +extensions = [ + "sphinx.ext.autodoc", +] + +# autodoc_member_order = "bysource" +# autodoc_default_flags = ["members", "undoc-members", "inherited-members"] + +autodoc_default_options = { + "members": True, + "member-order": "bysource", + "undoc-members": True, + "imported-members": True, + "exclude-members": "PlanoProcess", +} + +master_doc = "index" +project = u"Plano" +copyright = u"1975" +author = u"Justin Ross" + +version = u"0.1.0" +release = u"" + +pygments_style = "sphinx" +html_theme = "nature" + +html_theme_options = { + "nosidebar": True, +} diff --git a/subrepos/skewer/subrepos/plano/docs/index.rst b/subrepos/skewer/subrepos/plano/docs/index.rst new file mode 100644 index 0000000..7441b03 --- /dev/null +++ b/subrepos/skewer/subrepos/plano/docs/index.rst @@ -0,0 +1,4 @@ +Plano +===== + +.. automodule:: plano diff --git a/subrepos/skewer/subrepos/plano/python/bullseye.py b/subrepos/skewer/subrepos/plano/python/bullseye.py new file mode 100644 index 0000000..f27a120 --- /dev/null +++ b/subrepos/skewer/subrepos/plano/python/bullseye.py @@ -0,0 +1,308 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +from __future__ import print_function + +import collections as _collections +import fnmatch as _fnmatch +import os as _os +import shutil as _shutil +import sys as _sys + +from plano import * + +class _Project: + def __init__(self): + self.name = None + self.source_dir = "python" + self.included_modules = ["*"] + self.excluded_modules = ["plano", "bullseye"] + self.data_dirs = [] + self.build_dir = "build" + self.test_modules = [] + +project = _Project() + +_default_prefix = join(get_home_dir(), ".local") + +def check_project(): + assert project.name + assert project.source_dir + assert project.build_dir + +class project_env(working_env): + def __init__(self): + check_project() + + home_var = "{0}_HOME".format(project.name.upper().replace("-", "_")) + + env = { + home_var: get_absolute_path(join(project.build_dir, project.name)), + "PATH": get_absolute_path(join(project.build_dir, "bin")) + ":" + ENV["PATH"], + "PYTHONPATH": get_absolute_path(join(project.build_dir, project.name, project.source_dir)), + } + + super(project_env, self).__init__(**env) + +def configure_file(input_file, output_file, substitutions, quiet=False): + notice("Configuring '{0}' for output '{1}'", input_file, output_file) + + content = read(input_file) + + for name, value in substitutions.items(): + content = content.replace("@{0}@".format(name), value) + + write(output_file, content) + + _shutil.copymode(input_file, output_file) + + return output_file + +_prefix_arg = CommandArgument("prefix", help="The base path for installed files", default=_default_prefix) +_clean_arg = 
CommandArgument("clean_", help="Clean before starting", display_name="clean") +_verbose_arg = CommandArgument("verbose", help="Print detailed logging to the console") + +@command(args=(_prefix_arg, _clean_arg)) +def build(app, prefix=None, clean_=False): + check_project() + + if clean_: + clean(app) + + build_file = join(project.build_dir, "build.json") + build_data = {} + + if exists(build_file): + build_data = read_json(build_file) + + mtime = _os.stat(project.source_dir).st_mtime + + for path in find(project.source_dir): + mtime = max(mtime, _os.stat(path).st_mtime) + + if prefix is None: + prefix = build_data.get("prefix", _default_prefix) + + new_build_data = {"prefix": prefix, "mtime": mtime} + + debug("Existing build data: {0}", pformat(build_data)) + debug("New build data: {0}", pformat(new_build_data)) + + if build_data == new_build_data: + debug("Already built") + return + + write_json(build_file, new_build_data) + + default_home = join(prefix, "lib", project.name) + + for path in find("bin", "*.in"): + configure_file(path, join(project.build_dir, path[:-3]), {"default_home": default_home}) + + for path in find("bin", exclude="*.in"): + copy(path, join(project.build_dir, path), inside=False, symlinks=False) + + for path in find(project.source_dir, "*.py"): + module_name = get_name_stem(path) + included = any([_fnmatch.fnmatchcase(module_name, x) for x in project.included_modules]) + excluded = any([_fnmatch.fnmatchcase(module_name, x) for x in project.excluded_modules]) + + if included and not excluded: + copy(path, join(project.build_dir, project.name, path), inside=False, symlinks=False) + + for dir_name in project.data_dirs: + for path in find(dir_name): + copy(path, join(project.build_dir, project.name, path), inside=False, symlinks=False) + +@command(args=(CommandArgument("include", help="Run only tests with names matching PATTERN", metavar="PATTERN"), + CommandArgument("exclude", help="Do not run tests with names matching PATTERN", 
metavar="PATTERN"), + CommandArgument("enable", help="Enable disabled tests matching PATTERN", metavar="PATTERN"), + CommandArgument("list_", help="Print the test names and exit", display_name="list"), + _verbose_arg, _clean_arg)) +def test(app, include="*", exclude=None, enable=None, list_=False, verbose=False, clean_=False): + check_project() + + if clean_: + clean(app) + + if not list_: + build(app) + + with project_env(): + from plano import _import_module + modules = [_import_module(x) for x in project.test_modules] + + if not modules: # pragma: nocover + notice("No tests found") + return + + args = list() + + if list_: + print_tests(modules) + return + + exclude = nvl(exclude, ()) + enable = nvl(enable, ()) + + run_tests(modules, include=include, exclude=exclude, enable=enable, verbose=verbose) + +@command(args=(CommandArgument("staging_dir", help="A path prepended to installed files"), + _prefix_arg, _clean_arg)) +def install(app, staging_dir="", prefix=None, clean_=False): + check_project() + + build(app, prefix=prefix, clean_=clean_) + + assert is_dir(project.build_dir), list_dir() + + build_file = join(project.build_dir, "build.json") + build_data = read_json(build_file) + build_prefix = project.build_dir + "/" + install_prefix = staging_dir + build_data["prefix"] + + for path in find(join(project.build_dir, "bin")): + copy(path, join(install_prefix, remove_prefix(path, build_prefix)), inside=False, symlinks=False) + + for path in find(join(project.build_dir, project.name)): + copy(path, join(install_prefix, "lib", remove_prefix(path, build_prefix)), inside=False, symlinks=False) + +@command +def clean(app): + check_project() + + remove(project.build_dir) + remove(find(".", "__pycache__")) + remove(find(".", "*.pyc")) + +@command(args=(CommandArgument("undo", help="Generate settings that restore the previous environment"),)) +def env(app, undo=False): + """ + Generate shell settings for the project environment + + To apply the settings, source the output 
from your shell: + + $ source <(plano env) + """ + + check_project() + + project_dir = get_current_dir() # XXX Needs some checking + home_var = "{0}_HOME".format(project.name.upper().replace("-", "_")) + old_home_var = "OLD_{0}".format(home_var) + home_dir = join(project_dir, project.build_dir, project.name) + + if undo: + print("[[ ${0} ]] && export {1}=${2} && unset {3}".format(old_home_var, home_var, old_home_var, old_home_var)) + print("[[ $OLD_PATH ]] && export PATH=$OLD_PATH && unset OLD_PATH") + print("[[ $OLD_PYTHONPATH ]] && export PYTHONPATH=$OLD_PYTHONPATH && unset OLD_PYTHONPATH") + + return + + print("[[ ${0} ]] && export {1}=${2}".format(home_var, old_home_var, home_var)) + print("[[ $PATH ]] && export OLD_PATH=$PATH") + print("[[ $PYTHONPATH ]] && export OLD_PYTHONPATH=$PYTHONPATH") + + print("export {0}={1}".format(home_var, home_dir)) + + path = [ + join(project_dir, project.build_dir, "bin"), + ENV.get("PATH", ""), + ] + + print("export PATH={0}".format(join_path_var(*path))) + + python_path = [ + join(home_dir, project.source_dir), + join(project_dir, project.source_dir), + ENV.get("PYTHONPATH", ""), + ] + + print("export PYTHONPATH={0}".format(join_path_var(*python_path))) + +@command(args=(CommandArgument("filename", help="Which file to generate"), + CommandArgument("stdout", help="Print to stdout instead of writing the file directly"))) +def generate(app, filename, stdout=False): + """ + Generate standard project files + + Use one of the following filenames: + + .gitignore + LICENSE.txt + README.md + VERSION.txt + + Use the special filename "all" to generate all of them. 
+
+    """
+
+    assert project.name
+
+    project_files = _StringCatalog(__file__)
+
+    if filename == "all":
+        for name in project_files:
+            _generate_file(project_files, name, stdout)
+    else:
+        _generate_file(project_files, filename, stdout)
+
+def _generate_file(project_files, filename, stdout):
+    try:
+        content = project_files[filename]
+    except KeyError:
+        exit("File {0} is not one of the options".format(repr(filename)))
+
+    content = content.lstrip()
+    content = content.format(project_title=project.name.capitalize(), project_name=project.name)
+
+    if stdout:
+        print(content, end="")
+    else:
+        write(filename, content)
+
+class _StringCatalog(dict):
+    def __init__(self, path):
+        super(_StringCatalog, self).__init__()
+
+        self.path = "{0}.strings".format(split_extension(path)[0])
+
+        check_file(self.path)
+
+        key = None
+        out = list()
+
+        for line in read_lines(self.path):
+            line = line.rstrip()
+
+            if line.startswith("[") and line.endswith("]"):
+                if key:
+                    self[key] = "".join(out).strip() + "\n"
+
+                out = list()
+                key = line[1:-1]
+
+                continue
+
+            out.append(line)
+            out.append("\n")
+
+        self[key] = "".join(out).strip() + "\n"
+
+    def __repr__(self):
+        return format_repr(self)
diff --git a/subrepos/skewer/subrepos/plano/python/bullseye.strings b/subrepos/skewer/subrepos/plano/python/bullseye.strings
new file mode 100644
index 0000000..2ad9f78
--- /dev/null
+++ b/subrepos/skewer/subrepos/plano/python/bullseye.strings
@@ -0,0 +1,221 @@
+[.gitignore]
+*.pyc
+__pycache__/
+/build
+
+[README.md]
+# {project_title}
+
+[![main](https://github.com/ssorj/{project_name}/workflows/main/badge.svg)](https://github.com/ssorj/{project_name}/actions?query=workflow%3Amain)
+
+## Project commands
+
+You can use the `./plano` command in the root of the project to
+perform project tasks.  It accepts a subcommand.  Use `./plano --help`
+to list the available commands.
+ +[LICENSE.txt] +Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. 
For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "{{}}"
+      replaced with your own identifying information. (Don't include
+      the brackets!) The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright {{yyyy}} {{name of copyright owner}}
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+[VERSION.txt]
+0.1.0-SNAPSHOT
diff --git a/subrepos/skewer/subrepos/plano/python/bullseye_tests.py b/subrepos/skewer/subrepos/plano/python/bullseye_tests.py
new file mode 100644
index 0000000..4c8a0e1
--- /dev/null
+++ b/subrepos/skewer/subrepos/plano/python/bullseye_tests.py
@@ -0,0 +1,145 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+from bullseye import *
+from plano import *
+
+from bullseye import test as test_command
+
+test_project_dir = join(get_parent_dir(get_parent_dir(__file__)), "test-project")
+result_file = "build/result.json"
+
+class test_project(working_dir):
+    def __enter__(self):
+        dir = super(test_project, self).__enter__()
+        copy(test_project_dir, ".", inside=False)
+        return dir
+
+def run_plano(*args):
+    PlanoCommand().main(["-f", join(test_project_dir, "Planofile")] + list(args))
+
+@test
+def project_operations():
+    project.name = "alphabet"
+
+    with project_env():
+        assert "ALPHABET_HOME" in ENV, ENV
+
+    with working_dir():
+        input_file = write("zeta-file", "X@replace-me@X")
+        output_file = configure_file(input_file, "zeta-file", {"replace-me": "Y"})
+        output = read(output_file)
+        assert output == "XYX", output
+
+@test
+def build_command():
+    with test_project():
+        run_plano("build")
+
+        result = read_json(result_file)
+        assert result["built"], result
+
+        check_file("build/bin/chucker")
+        check_file("build/bin/chucker-test")
+        check_file("build/chucker/python/chucker.py")
+        check_file("build/chucker/python/chucker_tests.py")
+
+        result = read("build/bin/chucker").strip()
+        assert result.endswith(".local/lib/chucker"), result
+
+        result = read_json("build/build.json")
+        assert result["prefix"].endswith(".local"), result
+
+        run_plano("build", "--clean", "--prefix", "/usr/local")
+
+        result = read("build/bin/chucker").strip()
+        assert result == "/usr/local/lib/chucker", result
+
+        result = read_json("build/build.json")
+        assert result["prefix"] == "/usr/local", result
+
+@test
+def test_command():
+    with test_project():
+        run_plano("test")
+
+        check_file(result_file)
+
+        result = read_json(result_file)
+        assert result["tested"], result
+
+        run_plano("test", "--verbose")
+        run_plano("test", "--list")
+        run_plano("test", "--include", "test_hello")
+        run_plano("test", "--clean")
+
+@test
+def install_command():
+    with test_project():
+        run_plano("install", "--staging-dir", "staging")
+
+        result = read_json(result_file)
+        assert result["installed"], result
+
+        check_dir("staging")
+
+    with test_project():
+        assert not exists("build"), list_dir()
+
+        run_plano("build", "--prefix", "/opt/local")
+        run_plano("install", "--staging-dir", "staging")
+
+        check_dir("staging/opt/local")
+
+@test
+def clean_command():
+    with test_project():
+        run_plano("build")
+
+        check_dir("build")
+
+        run_plano("clean")
+
+        assert not is_dir("build")
+
+@test
+def env_command():
+    with test_project():
+        run_plano("env")
+        run_plano("env", "--undo")
+
+@test
+def generate_command():
+    with test_project():
+        run_plano("generate", "README.md")
+
+        assert exists("README.md"), list_dir()
+
+        run_plano("generate", "--stdout", "LICENSE.txt")
+
+        assert not exists("LICENSE.txt"), list_dir()
+
+        run_plano("generate", "all")
+
+        assert exists(".gitignore"), list_dir()
+        assert exists("LICENSE.txt"), list_dir()
+        assert exists("VERSION.txt"), list_dir()
+
+        with expect_system_exit():
+            run_plano("generate", "no-such-file")
diff --git a/subrepos/skewer/subrepos/plano/python/plano.py b/subrepos/skewer/subrepos/plano/python/plano.py
new file mode 100644
index 0000000..95e51ee
--- /dev/null
+++ b/subrepos/skewer/subrepos/plano/python/plano.py
@@ -0,0 +1,2324 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+from __future__ import print_function
+
+import argparse as _argparse
+import base64 as _base64
+import binascii as _binascii
+import code as _code
+import codecs as _codecs
+import collections as _collections
+import fnmatch as _fnmatch
+import getpass as _getpass
+import inspect as _inspect
+import json as _json
+import os as _os
+import pprint as _pprint
+import pkgutil as _pkgutil
+import random as _random
+import re as _re
+import shlex as _shlex
+import shutil as _shutil
+import signal as _signal
+import socket as _socket
+import subprocess as _subprocess
+import sys as _sys
+import tempfile as _tempfile
+import time as _time
+import traceback as _traceback
+import uuid as _uuid
+
+try: # pragma: nocover
+    import urllib.parse as _urlparse
+except ImportError: # pragma: nocover
+    import urllib as _urlparse
+
+try:
+    import importlib as _importlib
+
+    def _import_module(name):
+        return _importlib.import_module(name)
+except ImportError: # pragma: nocover
+    def _import_module(name):
+        return __import__(name, fromlist=[""])
+
+_max = max
+
+## Exceptions
+
+class PlanoException(Exception):
+    pass
+
+class PlanoError(PlanoException):
+    pass
+
+class PlanoTimeout(PlanoException):
+    pass
+
+class PlanoTestSkipped(Exception):
+    pass
+
+## Global variables
+
+ENV = _os.environ
+ARGS = _sys.argv
+
+STDIN = _sys.stdin
+STDOUT = _sys.stdout
+STDERR = _sys.stderr
+DEVNULL = _os.devnull
+
+PYTHON2 = _sys.version_info[0] == 2
+PYTHON3 = _sys.version_info[0] == 3
+
+PLANO_DEBUG = "PLANO_DEBUG" in ENV
+
+## Archive operations
+
+def make_archive(input_dir, output_file=None, quiet=False):
+    """
+    group: archive_operations
+    """
+
+    check_program("tar")
+
+    archive_stem = get_base_name(input_dir)
+
+    if output_file is None:
+        output_file = "{0}.tar.gz".format(join(get_current_dir(), archive_stem))
+
+    _log(quiet, "Making archive {0} from directory {1}", repr(output_file), repr(input_dir))
+
+    with working_dir(get_parent_dir(input_dir)):
+        run("tar -czf {0} {1}".format(output_file, archive_stem))
+
+    return output_file
+
+def extract_archive(input_file, output_dir=None, quiet=False):
+    check_program("tar")
+
+    if output_dir is None:
+        output_dir = get_current_dir()
+
+    _log(quiet, "Extracting archive {0} to directory {1}", repr(input_file), repr(output_dir))
+
+    input_file = get_absolute_path(input_file)
+
+    with working_dir(output_dir):
+        run("tar -xf {0}".format(input_file))
+
+    return output_dir
+
+def rename_archive(input_file, new_archive_stem, quiet=False):
+    _log(quiet, "Renaming archive {0} with stem {1}", repr(input_file), repr(new_archive_stem))
+
+    output_dir = get_absolute_path(get_parent_dir(input_file))
+    output_file = "{0}.tar.gz".format(join(output_dir, new_archive_stem))
+
+    input_file = get_absolute_path(input_file)
+
+    with working_dir():
+        extract_archive(input_file)
+
+        input_name = list_dir()[0]
+        input_dir = move(input_name, new_archive_stem)
+
+        make_archive(input_dir, output_file=output_file)
+
+    remove(input_file)
+
+    return output_file
+
+## Command operations
+
+class BaseCommand(object):
+    def main(self, args=None):
+        args = self.parse_args(args)
+
+        assert args is None or isinstance(args, _argparse.Namespace), args
+
+        self.verbose = args.verbose
+        self.quiet = args.quiet
+        self.init_only = args.init_only
+
+        level = "notice"
+
+        if self.verbose:
+            level = "debug"
+
+        if self.quiet:
+            level = "error"
+
+        with logging_enabled(level=level):
+            try:
+                self.init(args)
+
+                if self.init_only:
+                    return
+
+                self.run()
+            except KeyboardInterrupt:
+                pass
+            except PlanoError as e:
+                if self.verbose:
+                    _traceback.print_exc()
+                    exit(1)
+                else:
+                    exit(str(e))
+
+    def parse_args(self, args): # pragma: nocover
+        raise NotImplementedError()
+
+    def init(self, args): # pragma: nocover
+        pass
+
+    def run(self): # pragma: nocover
+        raise NotImplementedError()
+
+class BaseArgumentParser(_argparse.ArgumentParser):
+    def __init__(self, **kwargs):
+        super(BaseArgumentParser, self).__init__(**kwargs)
+
+        self.allow_abbrev = False
+        self.formatter_class = _argparse.RawDescriptionHelpFormatter
+
+        self.add_argument("--verbose", action="store_true",
+                          help="Print detailed logging to the console")
+        self.add_argument("--quiet", action="store_true",
+                          help="Print no logging to the console")
+        self.add_argument("--init-only", action="store_true",
+                          help=_argparse.SUPPRESS)
+
+        _capitalize_help(self)
+
+# Patch the default help text
+def _capitalize_help(parser):
+    try:
+        for action in parser._actions:
+            if action.help and action.help is not _argparse.SUPPRESS:
+                action.help = capitalize(action.help)
+    except: # pragma: nocover
+        pass
+
+## Console operations
+
+def flush():
+    _sys.stdout.flush()
+    _sys.stderr.flush()
+
+def eprint(*args, **kwargs):
+    print(*args, file=_sys.stderr, **kwargs)
+
+def pprint(*args, **kwargs):
+    args = [pformat(x) for x in args]
+    print(*args, **kwargs)
+
+_color_codes = {
+    "black": "\u001b[30",
+    "red": "\u001b[31",
+    "green": "\u001b[32",
+    "yellow": "\u001b[33",
+    "blue": "\u001b[34",
+    "magenta": "\u001b[35",
+    "cyan": "\u001b[36",
+    "white": "\u001b[37",
+}
+
+_color_reset = "\u001b[0m"
+
+def _get_color_code(color, bright):
+    elems = [_color_codes[color]]
+
+    if bright:
+        elems.append(";1")
+
+    elems.append("m")
+
+    return "".join(elems)
+
+def _is_color_enabled(file):
+    return PYTHON3 and hasattr(file, "isatty") and file.isatty()
+
+class console_color(object):
+    def __init__(self, color=None, bright=False, file=_sys.stdout):
+        self.file = file
+        self.color_code = None
+
+        if (color, bright) != (None, False):
+            self.color_code = _get_color_code(color, bright)
+
+        self.enabled = self.color_code is not None and _is_color_enabled(self.file)
+
+    def __enter__(self):
+        if self.enabled:
+            print(self.color_code, file=self.file, end="", flush=True)
+
+    def __exit__(self, exc_type, exc_value, traceback):
+        if self.enabled:
+            print(_color_reset, file=self.file, end="", flush=True)
+
+def cformat(value, color=None, bright=False, file=_sys.stdout):
+    if (color, bright) != (None, False) and _is_color_enabled(file):
+        return "".join((_get_color_code(color, bright), value, _color_reset))
+    else:
+        return value
+
+def cprint(*args, **kwargs):
+    color = kwargs.pop("color", "white")
+    bright = kwargs.pop("bright", False)
+    file = kwargs.get("file", _sys.stdout)
+
+    with console_color(color, bright=bright, file=file):
+        print(*args, **kwargs)
+
+class output_redirected(object):
+    def __init__(self, output, quiet=False):
+        self.output = output
+        self.quiet = quiet
+
+    def __enter__(self):
+        flush()
+
+        _log(self.quiet, "Redirecting output to file {0}", repr(self.output))
+
+        if is_string(self.output):
+            output = open(self.output, "w")
+
+        self.prev_stdout, self.prev_stderr = _sys.stdout, _sys.stderr
+        _sys.stdout, _sys.stderr = output, output
+
+    def __exit__(self, exc_type, exc_value, traceback):
+        flush()
+
+        _sys.stdout, _sys.stderr = self.prev_stdout, self.prev_stderr
+
+try:
+    breakpoint
+except NameError: # pragma: nocover
+    def breakpoint():
+        import pdb
+        pdb.set_trace()
+
+def repl(vars): # pragma: nocover
+    _code.InteractiveConsole(locals=vars).interact()
+
+def print_properties(props, file=None):
+    size = max([len(x[0]) for x in props])
+
+    for prop in props:
+        name = "{0}:".format(prop[0])
+        template = "{{0:<{0}}} ".format(size + 1)
+
+        print(template.format(name), prop[1], end="", file=file)
+
+        for value in prop[2:]:
+            print(" {0}".format(value), end="", file=file)
+
+        print(file=file)
+
+## Directory operations
+
+def find(dirs=None, include="*", exclude=()):
+    if dirs is None:
+        dirs = "."
+
+    if is_string(dirs):
+        dirs = (dirs,)
+
+    if is_string(include):
+        include = (include,)
+
+    if is_string(exclude):
+        exclude = (exclude,)
+
+    found = set()
+
+    for dir in dirs:
+        for root, dir_names, file_names in _os.walk(dir):
+            names = dir_names + file_names
+
+            for include_pattern in include:
+                names = _fnmatch.filter(names, include_pattern)
+
+            for exclude_pattern in exclude:
+                for name in _fnmatch.filter(names, exclude_pattern):
+                    names.remove(name)
+
+            if root.startswith("./"):
+                root = remove_prefix(root, "./")
+            elif root == ".":
+                root = ""
+
+            found.update([join(root, x) for x in names])
+
+    return sorted(found)
+
+def make_dir(dir, quiet=False):
+    if dir == "":
+        return dir
+
+    if not exists(dir):
+        _log(quiet, "Making directory '{0}'", dir)
+        _os.makedirs(dir)
+
+    return dir
+
+def make_parent_dir(path, quiet=False):
+    return make_dir(get_parent_dir(path), quiet=quiet)
+
+# Returns the current working directory so you can change it back
+def change_dir(dir, quiet=False):
+    _log(quiet, "Changing directory to {0}", repr(dir))
+
+    prev_dir = get_current_dir()
+
+    if not dir:
+        return prev_dir
+
+    _os.chdir(dir)
+
+    return prev_dir
+
+def list_dir(dir=None, include="*", exclude=()):
+    if dir in (None, ""):
+        dir = get_current_dir()
+
+    assert is_dir(dir)
+
+    if is_string(include):
+        include = (include,)
+
+    if is_string(exclude):
+        exclude = (exclude,)
+
+    names = _os.listdir(dir)
+
+    for include_pattern in include:
+        names = _fnmatch.filter(names, include_pattern)
+
+    for exclude_pattern in exclude:
+        for name in _fnmatch.filter(names, exclude_pattern):
+            names.remove(name)
+
+    return sorted(names)
+
+# No args constructor gets a temp dir
+class working_dir(object):
+    def __init__(self, dir=None, quiet=False):
+        self.dir = dir
+        self.prev_dir = None
+        self.remove = False
+        self.quiet = quiet
+
+        if self.dir is None:
+            self.dir = make_temp_dir()
+            self.remove = True
+
+    def __enter__(self):
+        if self.dir == ".":
+            return
+
+        _log(self.quiet, "Entering directory {0}", repr(get_absolute_path(self.dir)))
+
+        make_dir(self.dir, quiet=True)
+
+        self.prev_dir = change_dir(self.dir, quiet=True)
+
+        return self.dir
+
+    def __exit__(self, exc_type, exc_value, traceback):
+        if self.dir == ".":
+            return
+
+        _log(self.quiet, "Returning to directory {0}", repr(get_absolute_path(self.prev_dir)))
+
+        change_dir(self.prev_dir, quiet=True)
+
+        if self.remove:
+            remove(self.dir, quiet=True)
+
+## Environment operations
+
+def join_path_var(*paths):
+    return _os.pathsep.join(unique(skip(paths)))
+
+def get_current_dir():
+    return _os.getcwd()
+
+def get_home_dir(user=None):
+    return _os.path.expanduser("~{0}".format(user or ""))
+
+def get_user():
+    return _getpass.getuser()
+
+def get_hostname():
+    return _socket.gethostname()
+
+def get_program_name(command=None):
+    if command is None:
+        args = ARGS
+    else:
+        args = command.split()
+
+    for arg in args:
+        if "=" not in arg:
+            return get_base_name(arg)
+
+def which(program_name):
+    assert "PATH" in _os.environ, _os.environ
+
+    for dir in _os.environ["PATH"].split(_os.pathsep):
+        program = join(dir, program_name)
+
+        if _os.access(program, _os.X_OK):
+            return program
+
+def check_env(var):
+    if var not in _os.environ:
+        raise PlanoError("Environment variable {0} is not set".format(repr(var)))
+
+def check_module(module):
+    if _pkgutil.find_loader(module) is None:
+        raise PlanoError("Module {0} is not found".format(repr(module)))
+
+def check_program(program):
+    if which(program) is None:
+        raise PlanoError("Program {0} is not found".format(repr(program)))
+
+class working_env(object):
+    def __init__(self, **vars):
+        self.amend = vars.pop("amend", True)
+        self.vars = vars
+
+    def __enter__(self):
+        self.prev_vars = dict(_os.environ)
+
+        if not self.amend:
+            for name, value in list(_os.environ.items()):
+                if name not in self.vars:
+                    del _os.environ[name]
+
+        for name, value in self.vars.items():
+            _os.environ[name] = str(value)
+
+    def __exit__(self, exc_type, exc_value, traceback):
+        for name, value in self.prev_vars.items():
+            _os.environ[name] = value
+
+        for name, value in self.vars.items():
+            if name not in self.prev_vars:
+                del _os.environ[name]
+
+class working_module_path(object):
+    def __init__(self, path, amend=True):
+        if is_string(path):
+            if not is_absolute(path):
+                path = get_absolute_path(path)
+
+            path = [path]
+
+        if amend:
+            path = path + _sys.path
+
+        self.path = path
+
+    def __enter__(self):
+        self.prev_path = _sys.path
+        _sys.path = self.path
+
+    def __exit__(self, exc_type, exc_value, traceback):
+        _sys.path = self.prev_path
+
+def print_env(file=None):
+    props = (
+        ("ARGS", ARGS),
+        ("ENV['PATH']", ENV.get("PATH")),
+        ("ENV['PYTHONPATH']", ENV.get("PYTHONPATH")),
+        ("sys.executable", _sys.executable),
+        ("sys.path", _sys.path),
+        ("sys.version", _sys.version.replace("\n", "")),
+        ("get_current_dir()", get_current_dir()),
+        ("get_home_dir()", get_home_dir()),
+        ("get_hostname()", get_hostname()),
+        ("get_program_name()", get_program_name()),
+        ("get_user()", get_user()),
+        ("plano.__file__", __file__),
+        ("which('plano')", which("plano")),
+    )
+
+    print_properties(props, file=file)
+
+## File operations
+
+def touch(file, quiet=False):
+    _log(quiet, "Touching {0}", repr(file))
+
+    try:
+        _os.utime(file, None)
+    except OSError:
+        append(file, "")
+
+    return file
+
+# symlinks=True - Preserve symlinks
+# inside=True - Place from_path inside to_path if to_path is a directory
+def copy(from_path, to_path, symlinks=True, inside=True, quiet=False):
+    _log(quiet, "Copying {0} to {1}", repr(from_path), repr(to_path))
+
+    if is_dir(to_path) and inside:
+        to_path = join(to_path, get_base_name(from_path))
+    else:
+        make_parent_dir(to_path, quiet=True)
+
+    if is_dir(from_path):
+        for name in list_dir(from_path):
+            copy(join(from_path, name), join(to_path, name), symlinks=symlinks, inside=False, quiet=True)
+
+        _shutil.copystat(from_path, to_path)
+    elif is_link(from_path) and symlinks:
+        make_link(to_path, read_link(from_path), quiet=True)
+    else:
+        _shutil.copy2(from_path, to_path)
+
+    return to_path
+
+# inside=True - Place from_path inside to_path if to_path is a directory
+def move(from_path, to_path, inside=True, quiet=False):
+    _log(quiet, "Moving {0} to {1}", repr(from_path), repr(to_path))
+
+    to_path = copy(from_path, to_path, inside=inside, quiet=True)
+    remove(from_path, quiet=True)
+
+    return to_path
+
+def remove(paths, quiet=False):
+    if is_string(paths):
+        paths = (paths,)
+
+    for path in paths:
+        if not exists(path):
+            continue
+
+        _log(quiet, "Removing {0}", repr(path))
+
+        if is_dir(path):
+            _shutil.rmtree(path, ignore_errors=True)
+        else:
+            _os.remove(path)
+
+def get_file_size(file):
+    return _os.path.getsize(file)
+
+## IO operations
+
+def read(file):
+    with _codecs.open(file, encoding="utf-8", mode="r") as f:
+        return f.read()
+
+def write(file, string):
+    make_parent_dir(file, quiet=True)
+
+    with _codecs.open(file, encoding="utf-8", mode="w") as f:
+        f.write(string)
+
+    return file
+
+def append(file, string):
+    make_parent_dir(file, quiet=True)
+
+    with _codecs.open(file, encoding="utf-8", mode="a") as f:
+        f.write(string)
+
+    return file
+
+def prepend(file, string):
+    orig = read(file)
+    return write(file, string + orig)
+
+def tail(file, count):
+    return "".join(tail_lines(file, count))
+
+def read_lines(file):
+    with _codecs.open(file, encoding="utf-8", mode="r") as f:
+        return f.readlines()
+
+def write_lines(file, lines):
+    make_parent_dir(file, quiet=True)
+
+    with _codecs.open(file, encoding="utf-8", mode="w") as f:
+        f.writelines(lines)
+
+    return file
+
+def append_lines(file, lines):
+    make_parent_dir(file, quiet=True)
+
+    with _codecs.open(file, encoding="utf-8", mode="a") as f:
+        f.writelines(lines)
+
+    return file
+
+def prepend_lines(file, lines):
+    orig_lines = read_lines(file)
+
+    make_parent_dir(file, quiet=True)
+
+    with _codecs.open(file, encoding="utf-8", mode="w") as f:
+        f.writelines(lines)
+        f.writelines(orig_lines)
+
+    return file
+
+def tail_lines(file, count):
+    assert count >= 0
+
+    with _codecs.open(file, encoding="utf-8", mode="r") as f:
+        pos = count + 1
+        lines = list()
+
+        while len(lines) <= count:
+            try:
+                f.seek(-pos, 2)
+            except IOError:
+                f.seek(0)
+                break
+            finally:
+                lines = f.readlines()
+
+            pos *= 2
+
+        return lines[-count:]
+
+def replace_in_file(file, expr, replacement, count=0):
+    write(file, replace(read(file), expr, replacement, count=count))
+
+## Iterable operations
+
+def unique(iterable):
+    return list(_collections.OrderedDict.fromkeys(iterable).keys())
+
+def skip(iterable, values=(None, "", (), [], {})):
+    if is_scalar(values):
+        values = (values,)
+
+    items = list()
+
+    for item in iterable:
+        if item not in values:
+            items.append(item)
+
+    return items
+
+## JSON operations
+
+def read_json(file):
+    with _codecs.open(file, encoding="utf-8", mode="r") as f:
+        return _json.load(f)
+
+def write_json(file, data):
+    make_parent_dir(file, quiet=True)
+
+    with _codecs.open(file, encoding="utf-8", mode="w") as f:
+        _json.dump(data, f, indent=4, separators=(",", ": "), sort_keys=True)
+
+    return file
+
+def parse_json(json):
+    return _json.loads(json)
+
+def emit_json(data):
+    return _json.dumps(data, indent=4, separators=(",", ": "), sort_keys=True)
+
+## HTTP operations
+
+def _run_curl(method, url, content=None, content_file=None, content_type=None, output_file=None, insecure=False):
+    check_program("curl")
+
+    options = [
+        "-sf",
+        "-X", method,
+        "-H", "'Expect:'",
+    ]
+
+    if content is not None:
+        assert content_file is None
+        options.extend(("-d", "@-"))
+
+    if content_file is not None:
+        assert content is None, content
+        options.extend(("-d", "@{0}".format(content_file)))
+
+    if content_type is not None:
+        options.extend(("-H", "'Content-Type: {0}'".format(content_type)))
+
+    if output_file is not None:
+        options.extend(("-o", output_file))
+
+    if insecure:
+        options.append("--insecure")
+
+    options = " ".join(options)
+    command = "curl {0} {1}".format(options, url)
+
+    if output_file is None:
+        return call(command, input=content)
+    else:
+        make_parent_dir(output_file, quiet=True)
+        run(command, input=content)
+
+def http_get(url, output_file=None, insecure=False):
+    return _run_curl("GET", url, output_file=output_file, insecure=insecure)
+
+def http_get_json(url, insecure=False):
+    return parse_json(http_get(url, insecure=insecure))
+
+def http_put(url, content, content_type=None, insecure=False):
+    _run_curl("PUT", url, content=content, content_type=content_type, insecure=insecure)
+
+def http_put_file(url, content_file, content_type=None, insecure=False):
+    _run_curl("PUT", url, content_file=content_file, content_type=content_type, insecure=insecure)
+
+def http_put_json(url, data, insecure=False):
+    http_put(url, emit_json(data), content_type="application/json", insecure=insecure)
+
+def http_post(url, content, content_type=None, output_file=None, insecure=False):
+    return _run_curl("POST", url, content=content, content_type=content_type, output_file=output_file, insecure=insecure)
+
+def http_post_file(url, content_file, content_type=None, output_file=None, insecure=False):
+    return _run_curl("POST", url, content_file=content_file, content_type=content_type, output_file=output_file, insecure=insecure)
+
+def http_post_json(url, data, insecure=False):
+    return parse_json(http_post(url, emit_json(data), content_type="application/json", insecure=insecure))
+
+## Link operations
+
+def make_link(path, linked_path, quiet=False):
+    _log(quiet, "Making link {0} to {1}", repr(path), repr(linked_path))
+
+    make_parent_dir(path, quiet=True)
+    remove(path, quiet=True)
+
+    _os.symlink(linked_path, path)
+
+    return path
+
+def read_link(path):
+    return _os.readlink(path)
+
+## Logging operations
+
+_logging_levels = (
+    "debug",
+    "notice",
+    "warn",
+    "error",
+    "disabled",
+)
+
+_debug = _logging_levels.index("debug")
+_notice = _logging_levels.index("notice")
+_warn = _logging_levels.index("warn")
+_error = _logging_levels.index("error")
+_disabled = _logging_levels.index("disabled")
+
+_logging_output = None
+_logging_threshold = _notice
+
+def enable_logging(level="notice", output=None):
+    assert level in _logging_levels
+
+    debug("Enabling logging (level={0}, output={1})", repr(level), repr(nvl(output, "stderr")))
+
+    global _logging_threshold
+    _logging_threshold = _logging_levels.index(level)
+
+    if is_string(output):
+        output = open(output, "w")
+
+    global _logging_output
+    _logging_output = output
+
+def disable_logging():
+    debug("Disabling logging")
+
+    global _logging_threshold
+    _logging_threshold = _disabled
+
+class logging_enabled(object):
+    def __init__(self, level="notice", output=None):
+        self.level = level
+        self.output = output
+
+    def __enter__(self):
+        self.prev_level = _logging_levels[_logging_threshold]
+        self.prev_output = _logging_output
+
+        if self.level == "disabled":
+            disable_logging()
+        else:
+            enable_logging(level=self.level, output=self.output)
+
+    def __exit__(self, exc_type, exc_value, traceback):
+        if self.prev_level == "disabled":
+            disable_logging()
+        else:
+            enable_logging(level=self.prev_level, output=self.prev_output)
+
+class logging_disabled(logging_enabled):
+    def __init__(self):
+        super(logging_disabled, self).__init__(level="disabled")
+
+def fail(message, *args):
+    error(message, *args)
+
+    if isinstance(message, BaseException):
+        raise message
+
+    raise PlanoError(message.format(*args))
+
+def error(message, *args):
+    log(_error, message, *args)
+
+def warn(message, *args):
+    log(_warn, message, *args)
+
+def notice(message, *args):
+    log(_notice, message, *args)
+
+def debug(message, *args):
+    log(_debug, message, *args)
+
+def log(level, message, *args):
+    if is_string(level):
+        level = _logging_levels.index(level)
+
+    if _logging_threshold <= level:
+        _print_message(level, message, args)
+
+def _print_message(level, message, args):
+    out = nvl(_logging_output, _sys.stderr)
+    exception = None
+
+    if isinstance(message, BaseException):
+        exception = message
+        message = "{0}: {1}".format(type(message).__name__, str(message))
+    else:
+        message = str(message)
+
+    if args:
+        message = message.format(*args)
+
+    program = "{0}:".format(get_program_name())
+
+    level_color = ("cyan", "blue", "yellow", "red", None)[level]
+    level_bright = (False, False, False, True, False)[level]
+    level = cformat("{0:>6}:".format(_logging_levels[level]), color=level_color, bright=level_bright, file=out)
+
+    print(program, level, capitalize(message), file=out)
+
+    if exception is not None and hasattr(exception, "__traceback__"):
+        _traceback.print_exception(type(exception), exception, exception.__traceback__, file=out)
+
+    out.flush()
+
+def _log(quiet, message, *args):
+    if quiet:
+        debug(message, *args)
+    else:
+        notice(message, *args)
+
+## Path operations
+
+def get_absolute_path(path):
+    return _os.path.abspath(path)
+
+def normalize_path(path):
+    return _os.path.normpath(path)
+
+def get_real_path(path):
+    return _os.path.realpath(path)
+
+def get_relative_path(path, start=None):
+    return _os.path.relpath(path, start=start)
+
+def get_file_url(path):
+    return "file:{0}".format(get_absolute_path(path))
+
+def exists(path):
+    return _os.path.lexists(path)
+
+def is_absolute(path):
+    return _os.path.isabs(path)
+
+def is_dir(path):
+    return _os.path.isdir(path)
+
+def is_file(path):
+    return _os.path.isfile(path)
+
+def is_link(path):
+    return _os.path.islink(path)
+
+def join(*paths):
+    return _os.path.join(*paths)
+
+def split(path):
+    return _os.path.split(path)
+
+def split_extension(path):
+    return _os.path.splitext(path)
+
+def get_parent_dir(path):
+    path = normalize_path(path)
+    parent, child = split(path)
+
+    return parent
+
+def get_base_name(path):
+    path = normalize_path(path)
+    parent, name = split(path)
+
+    return name
+
+def get_name_stem(file):
+    name = get_base_name(file)
+
+    if name.endswith(".tar.gz"):
+        name = name[:-3]
+
+    stem, ext = split_extension(name)
+
+    return stem
+
+def get_name_extension(file):
+    name = get_base_name(file)
+    stem, ext = split_extension(name)
+
+    return ext
+
+def _check_path(path, test_func, message):
+    if not test_func(path):
+        found_paths = [repr(x) for x in list_dir(get_parent_dir(path))]
+        message = "{0}. The parent directory contains: {1}".format(message.format(repr(path)), ", ".join(found_paths))
+
+        raise PlanoError(message)
+
+def check_exists(path):
+    _check_path(path, exists, "File or directory {0} not found")
+
+def check_file(path):
+    _check_path(path, is_file, "File {0} not found")
+
+def check_dir(path):
+    _check_path(path, is_dir, "Directory {0} not found")
+
+def await_exists(path, timeout=30, quiet=False):
+    _log(quiet, "Waiting for path {0} to exist", repr(path))
+
+    timeout_message = "Timed out waiting for path {0} to exist".format(path)
+    period = 0.03125
+
+    with Timer(timeout=timeout, timeout_message=timeout_message) as timer:
+        while True:
+            try:
+                check_exists(path)
+            except PlanoError:
+                sleep(period, quiet=True)
+                period = min(1, period * 2)
+            else:
+                return
+
+## Port operations
+
+def get_random_port(min=49152, max=65535):
+    ports = [_random.randint(min, max) for _ in range(3)]
+
+    for port in ports:
+        try:
+            check_port(port)
+        except PlanoError:
+            return port
+
+    raise PlanoError("Random ports unavailable")
+
+def check_port(port, host="localhost"):
+    sock = _socket.socket(_socket.AF_INET, _socket.SOCK_STREAM)
+    sock.setsockopt(_socket.SOL_SOCKET, _socket.SO_REUSEADDR, 1)
+
+    if sock.connect_ex((host, port)) != 0:
+        raise PlanoError("Port {0} (host {1}) is not reachable".format(repr(port), repr(host)))
+
+def await_port(port, host="localhost", timeout=30, quiet=False):
+    _log(quiet, "Waiting for port {0}", port)
+
+    if is_string(port):
+        port = int(port)
+
+    timeout_message = "Timed out waiting for port {0} to open".format(port)
+    period = 0.03125
+
+    with Timer(timeout=timeout, timeout_message=timeout_message) as timer:
+        while True:
+            try:
+                check_port(port, host=host)
+            except PlanoError:
+                sleep(period, quiet=True)
+                period = min(1, period * 2)
+            else:
+                return
+
+## Process operations
+
+def get_process_id():
+    return _os.getpid()
+
+def _format_command(command, represent=True):
+    if not is_string(command):
+        command = " ".join(command)
+
+    if represent:
+        return repr(command)
+    else:
+        return command
+
+# quiet=False - Don't log at notice level
+# stash=False - No output unless there is an error
+# output= - Send stdout and stderr to a file
+# stdin= - XXX
+# stdout= - Send stdout to a file
+# stderr= - Send stderr to a file
+# shell=False - XXX
+def start(command, stdin=None, stdout=None, stderr=None, output=None, shell=False, stash=False, quiet=False):
+    _log(quiet, "Starting command {0}", _format_command(command))
+
+    if output is not None:
+        stdout, stderr = output, output
+
+    if is_string(stdin):
+        stdin = open(stdin, "r")
+
+    if is_string(stdout):
+        stdout = open(stdout, "w")
+
+    if is_string(stderr):
+        stderr = open(stderr, "w")
+
+    if stdin is None:
+        stdin = _sys.stdin
+
+    if stdout is None:
+        stdout = _sys.stdout
+
+    if stderr is None:
+        stderr = _sys.stderr
+
+    stash_file = None
+
+    if stash:
+        stash_file = make_temp_file()
+        out = open(stash_file, "w")
+        stdout = out
+        stderr = out
+
+    if shell:
+        if is_string(command):
+            args = command
+        else:
+            args = " ".join(command)
+    else:
+        if is_string(command):
+            args = _shlex.split(command)
+        else:
+            args = command
+
+    try:
+        proc = PlanoProcess(args, stdin=stdin, stdout=stdout, stderr=stderr, shell=shell, close_fds=True, stash_file=stash_file)
+    except OSError as e:
+        raise PlanoError("Command {0}: {1}".format(_format_command(command), str(e)))
+
+    debug("{0} started", proc)
+
+    return proc
+
+def stop(proc, timeout=None, quiet=False):
+    _log(quiet, "Stopping {0}", proc)
+
+    if proc.poll() is not None:
+        if proc.exit_code == 0:
+            debug("{0} already exited normally", proc)
+        elif proc.exit_code == -(_signal.SIGTERM):
+            debug("{0} was already terminated", proc)
+        else:
+            debug("{0} already exited with code {1}", proc, proc.exit_code)
+
+        return proc
+
+    kill(proc, quiet=True)
+
+    return wait(proc, timeout=timeout, quiet=True)
+
+def kill(proc, quiet=False):
+    _log(quiet, "Killing {0}", proc)
+
+    proc.terminate()
+
+def wait(proc, timeout=None, check=False, quiet=False):
+    _log(quiet, "Waiting for {0} to exit", proc)
+
+    if PYTHON2: # pragma: nocover
+        assert timeout is None, "The timeout option is not supported on Python 2"
+        proc.wait()
+    else:
+        try:
+            proc.wait(timeout=timeout)
+        except _subprocess.TimeoutExpired:
+            raise PlanoTimeout()
+
+    if proc.exit_code == 0:
+        debug("{0} exited normally", proc)
+    elif proc.exit_code < 0:
+        debug("{0} was terminated by signal {1}", proc, abs(proc.exit_code))
+    else:
+        debug("{0} exited with code {1}", proc, proc.exit_code)
+
+    if proc.stash_file is not None:
+        if proc.exit_code > 0:
+            eprint(read(proc.stash_file), end="")
+
+        remove(proc.stash_file, quiet=True)
+
+    if check and proc.exit_code > 0:
+        raise PlanoProcessError(proc)
+
+    return proc
+
+# input= - Pipe to the process
+def run(command, stdin=None, stdout=None, stderr=None, input=None, output=None,
+        stash=False, shell=False, check=True, quiet=False):
+    _log(quiet, "Running command {0}", _format_command(command))
+
+    if input is not None:
+        assert stdin in (None, _subprocess.PIPE), stdin
+
+        input = input.encode("utf-8")
+        stdin = _subprocess.PIPE
+
+    proc = start(command, stdin=stdin, stdout=stdout, stderr=stderr, output=output,
+                 stash=stash, shell=shell, quiet=True)
+
+    proc.stdout_result, proc.stderr_result = proc.communicate(input=input)
+
+    if proc.stdout_result is not None:
+        proc.stdout_result = proc.stdout_result.decode("utf-8")
+
+    if proc.stderr_result is not None:
+        proc.stderr_result = proc.stderr_result.decode("utf-8")
+
+    return wait(proc, check=check,
quiet=True) + +# input= - Pipe the given input into the process +def call(command, input=None, shell=False, quiet=False): + _log(quiet, "Calling {0}", _format_command(command)) + + proc = run(command, stdin=_subprocess.PIPE, stdout=_subprocess.PIPE, stderr=_subprocess.PIPE, + input=input, shell=shell, check=True, quiet=True) + + return proc.stdout_result + +def exit(arg=None, *args, **kwargs): + verbose = kwargs.get("verbose", False) + + if arg in (0, None): + if verbose: + notice("Exiting normally") + + _sys.exit() + + if is_string(arg): + if args: + arg = arg.format(*args) + + if verbose: + error(arg) + + _sys.exit(arg) + + if isinstance(arg, BaseException): + if verbose: + error(arg) + + _sys.exit(str(arg)) + + if isinstance(arg, int): + _sys.exit(arg) + + raise PlanoException("Illegal argument") + +_child_processes = list() + +class PlanoProcess(_subprocess.Popen): + def __init__(self, args, **options): + self.stash_file = options.pop("stash_file", None) + + super(PlanoProcess, self).__init__(args, **options) + + self.args = args + self.stdout_result = None + self.stderr_result = None + + _child_processes.append(self) + + @property + def exit_code(self): + return self.returncode + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_value, traceback): + kill(self) + + def __repr__(self): + return "process {0} (command {1})".format(self.pid, _format_command(self.args)) + +class PlanoProcessError(_subprocess.CalledProcessError, PlanoError): + def __init__(self, proc): + super(PlanoProcessError, self).__init__(proc.exit_code, _format_command(proc.args, represent=False)) + +def _default_sigterm_handler(signum, frame): + for proc in _child_processes: + if proc.poll() is None: + proc.terminate() + + exit(-(_signal.SIGTERM)) + +_signal.signal(_signal.SIGTERM, _default_sigterm_handler) + +## String operations + +def replace(string, expr, replacement, count=0): + return _re.sub(expr, replacement, string, count) + +def remove_prefix(string, prefix): 
+ if string is None: + return "" + + if prefix and string.startswith(prefix): + string = string[len(prefix):] + + return string + +def remove_suffix(string, suffix): + if string is None: + return "" + + if suffix and string.endswith(suffix): + string = string[:-len(suffix)] + + return string + +def shorten(string, max, ellipsis=None): + assert max is None or isinstance(max, int) + + if string is None: + return "" + + if max is None or len(string) < max: + return string + else: + if ellipsis is not None: + string = string + ellipsis + end = _max(0, max - len(ellipsis)) + return string[0:end] + ellipsis + else: + return string[0:max] + +def plural(noun, count=0, plural=None): + if noun in (None, ""): + return "" + + if count == 1: + return noun + + if plural is None: + if noun.endswith("s"): + plural = "{0}ses".format(noun) + else: + plural = "{0}s".format(noun) + + return plural + +def capitalize(string): + if not string: + return "" + + return string[0].upper() + string[1:] + +def base64_encode(string): + return _base64.b64encode(string) + +def base64_decode(string): + return _base64.b64decode(string) + +def url_encode(string): + return _urlparse.quote_plus(string) + +def url_decode(string): + return _urlparse.unquote_plus(string) + +## Temp operations + +def get_system_temp_dir(): + return _tempfile.gettempdir() + +def get_user_temp_dir(): + try: + return _os.environ["XDG_RUNTIME_DIR"] + except KeyError: + return join(get_system_temp_dir(), get_user()) + +def make_temp_file(suffix="", dir=None): + if dir is None: + dir = get_system_temp_dir() + + return _tempfile.mkstemp(prefix="plano-", suffix=suffix, dir=dir)[1] + +def make_temp_dir(suffix="", dir=None): + if dir is None: + dir = get_system_temp_dir() + + return _tempfile.mkdtemp(prefix="plano-", suffix=suffix, dir=dir) + +class temp_file(object): + def __init__(self, suffix="", dir=None): + self.file = make_temp_file(suffix=suffix, dir=dir) + + def __enter__(self): + return self.file + + def __exit__(self, 
exc_type, exc_value, traceback): + remove(self.file, quiet=True) + +class temp_dir(object): + def __init__(self, suffix="", dir=None): + self.dir = make_temp_dir(suffix=suffix, dir=dir) + + def __enter__(self): + return self.dir + + def __exit__(self, exc_type, exc_value, traceback): + remove(self.dir, quiet=True) + +## Time operations + +def sleep(seconds, quiet=False): + _log(quiet, "Sleeping for {0} {1}", seconds, plural("second", seconds)) + + _time.sleep(seconds) + +def get_time(): + return _time.time() + +def format_duration(duration, align=False): + assert duration >= 0 + + if duration >= 3600: + value = duration / 3600 + unit = "h" + elif duration >= 5 * 60: + value = duration / 60 + unit = "m" + else: + value = duration + unit = "s" + + if align: + return "{0:.1f}{1}".format(value, unit) + elif value > 10: + return "{0:.0f}{1}".format(value, unit) + else: + return remove_suffix("{0:.1f}".format(value), ".0") + unit + +class Timer(object): + def __init__(self, timeout=None, timeout_message=None): + self.timeout = timeout + self.timeout_message = timeout_message + + self.start_time = None + self.stop_time = None + + def start(self): + self.start_time = get_time() + + if self.timeout is not None: + self.prev_handler = _signal.signal(_signal.SIGALRM, self.raise_timeout) + self.prev_timeout, prev_interval = _signal.setitimer(_signal.ITIMER_REAL, self.timeout) + self.prev_timer_suspend_time = get_time() + + assert prev_interval == 0.0, "This case is not yet handled" + + def stop(self): + self.stop_time = get_time() + + if self.timeout is not None: + assert get_time() - self.prev_timer_suspend_time > 0, "This case is not yet handled" + + _signal.signal(_signal.SIGALRM, self.prev_handler) + _signal.setitimer(_signal.ITIMER_REAL, self.prev_timeout) + + def __enter__(self): + self.start() + return self + + def __exit__(self, exc_type, exc_value, traceback): + self.stop() + + @property + def elapsed_time(self): + assert self.start_time is not None + + if 
self.stop_time is None: + return get_time() - self.start_time + else: + return self.stop_time - self.start_time + + def raise_timeout(self, *args): + raise PlanoTimeout(self.timeout_message) + +## Unique ID operations + +# Length in bytes, renders twice as long in hex +def get_unique_id(bytes=16): + assert bytes >= 1 + assert bytes <= 16 + + uuid_bytes = _uuid.uuid4().bytes + uuid_bytes = uuid_bytes[:bytes] + + return _binascii.hexlify(uuid_bytes).decode("utf-8") + +## Value operations + +def nvl(value, replacement): + if value is None: + return replacement + + return value + +def is_string(value): + return isinstance(value, str) + +def is_scalar(value): + return value is None or isinstance(value, (str, int, float, complex, bool)) + +def is_empty(value): + return value in (None, "", (), [], {}) + +def pformat(value): + return _pprint.pformat(value, width=120) + +def format_empty(value, replacement): + if is_empty(value): + value = replacement + + return value + +def format_not_empty(value, template=None): + if not is_empty(value) and template is not None: + value = template.format(value) + + return value + +def format_repr(obj, limit=None): + attrs = ["{0}={1}".format(k, repr(v)) for k, v in obj.__dict__.items()] + return "{0}({1})".format(obj.__class__.__name__, ", ".join(attrs[:limit])) + +class Namespace(object): + def __init__(self, **kwargs): + for name in kwargs: + setattr(self, name, kwargs[name]) + + def __eq__(self, other): + return vars(self) == vars(other) + + def __contains__(self, key): + return key in self.__dict__ + + def __repr__(self): + return format_repr(self) + +## YAML operations + +def read_yaml(file): + import yaml as _yaml + + with _codecs.open(file, encoding="utf-8", mode="r") as f: + return _yaml.safe_load(f) + +def write_yaml(file, data): + import yaml as _yaml + + make_parent_dir(file, quiet=True) + + with _codecs.open(file, encoding="utf-8", mode="w") as f: + _yaml.safe_dump(data, f) + + return file + +def parse_yaml(yaml): + import 
yaml as _yaml + return _yaml.safe_load(yaml) + +def emit_yaml(data): + import yaml as _yaml + return _yaml.safe_dump(data) + +## Test operations + +def test(_function=None, name=None, timeout=None, disabled=False): + class Test(object): + def __init__(self, function): + self.function = function + self.name = nvl(name, self.function.__name__) + self.timeout = timeout + self.disabled = disabled + + self.module = _inspect.getmodule(self.function) + + if not hasattr(self.module, "_plano_tests"): + self.module._plano_tests = list() + + self.module._plano_tests.append(self) + + def __call__(self, test_run): + try: + self.function() + except SystemExit as e: + error(e) + raise PlanoError("System exit with code {0}".format(e)) + + def __repr__(self): + return "test '{0}:{1}'".format(self.module.__name__, self.name) + + if _function is None: + return Test + else: + return Test(_function) + +def print_tests(modules): + if _inspect.ismodule(modules): + modules = (modules,) + + for module in modules: + for test in module._plano_tests: + print(test) + +def run_tests(modules, include="*", exclude=(), enable=(), test_timeout=300, fail_fast=False, verbose=False, quiet=False): + if _inspect.ismodule(modules): + modules = (modules,) + + if is_string(include): + include = (include,) + + if is_string(exclude): + exclude = (exclude,) + + if is_string(enable): + enable = (enable,) + + test_run = TestRun(test_timeout=test_timeout, fail_fast=fail_fast, verbose=verbose, quiet=quiet) + + if verbose: + notice("Starting {0}", test_run) + elif not quiet: + cprint("=== Configuration ===", color="cyan") + + props = ( + ("Modules", format_empty(", ".join([x.__name__ for x in modules]), "[none]")), + ("Test timeout", format_duration(test_timeout)), + ("Fail fast", fail_fast), + ) + + print_properties(props) + print() + + for module in modules: + if verbose: + notice("Running tests from module {0} (file {1})", repr(module.__name__), repr(module.__file__)) + elif not quiet: + cprint("=== Module {} 
===".format(repr(module.__name__)), color="cyan") + + if not hasattr(module, "_plano_tests"): + warn("Module {0} has no tests", repr(module.__name__)) + continue + + for test in module._plano_tests: + included = any([_fnmatch.fnmatchcase(test.name, x) for x in include]) + excluded = any([_fnmatch.fnmatchcase(test.name, x) for x in exclude]) + disabled = test.disabled and not any([_fnmatch.fnmatchcase(test.name, x) for x in enable]) + + if included and not excluded and not disabled: + test_run.tests.append(test) + _run_test(test_run, test) + + if not verbose and not quiet: + print() + + total = len(test_run.tests) + skipped = len(test_run.skipped_tests) + failed = len(test_run.failed_tests) + + if total == 0: + raise PlanoError("No tests ran") + + if failed == 0: + result_message = "All tests passed ({0} skipped)".format(skipped) + else: + result_message = "{0} {1} failed ({2} skipped)".format(failed, plural("test", failed), skipped) + + if verbose: + if failed == 0: + notice(result_message) + else: + error(result_message) + elif not quiet: + cprint("=== Summary ===", color="cyan") + + props = ( + ("Total", total), + ("Skipped", skipped, format_not_empty(", ".join([x.name for x in test_run.skipped_tests]), "({0})")), + ("Failed", failed, format_not_empty(", ".join([x.name for x in test_run.failed_tests]), "({0})")), + ) + + print_properties(props) + print() + + cprint("=== RESULT ===", color="cyan") + + if failed == 0: + cprint(result_message, color="green") + else: + cprint(result_message, color="red", bright="True") + + print() + + if failed != 0: + raise PlanoError(result_message) + +def _run_test(test_run, test): + if test_run.verbose: + notice("Running {0}", test) + elif not test_run.quiet: + print("{0:.<72} ".format(test.name + " "), end="") + + timeout = nvl(test.timeout, test_run.test_timeout) + + with temp_file() as output_file: + try: + with Timer(timeout=timeout) as timer: + if test_run.verbose: + test(test_run) + else: + with 
output_redirected(output_file, quiet=True): + test(test_run) + except KeyboardInterrupt: + raise + except PlanoTestSkipped as e: + test_run.skipped_tests.append(test) + + if test_run.verbose: + notice("{0} SKIPPED ({1})", test, format_duration(timer.elapsed_time)) + elif not test_run.quiet: + _print_test_result("SKIPPED", timer, "yellow") + print("Reason: {0}".format(str(e))) + except Exception as e: + test_run.failed_tests.append(test) + + if test_run.verbose: + _traceback.print_exc() + + if isinstance(e, PlanoTimeout): + error("{0} **FAILED** (TIMEOUT) ({1})", test, format_duration(timer.elapsed_time)) + else: + error("{0} **FAILED** ({1})", test, format_duration(timer.elapsed_time)) + elif not test_run.quiet: + if isinstance(e, PlanoTimeout): + _print_test_result("**FAILED** (TIMEOUT)", timer, color="red", bright=True) + else: + _print_test_result("**FAILED**", timer, color="red", bright=True) + + _print_test_error(e) + _print_test_output(output_file) + + if test_run.fail_fast: + return True + else: + test_run.passed_tests.append(test) + + if test_run.verbose: + notice("{0} PASSED ({1})", test, format_duration(timer.elapsed_time)) + elif not test_run.quiet: + _print_test_result("PASSED", timer) + +def _print_test_result(status, timer, color="white", bright=False): + cprint("{0:<7}".format(status), color=color, bright=bright, end="") + print("{0:>6}".format(format_duration(timer.elapsed_time, align=True))) + +def _print_test_error(e): + cprint("--- Error ---", color="yellow") + + if isinstance(e, PlanoProcessError): + print("> {0}".format(str(e))) + else: + lines = _traceback.format_exc().rstrip().split("\n") + lines = ["> {0}".format(x) for x in lines] + + print("\n".join(lines)) + +def _print_test_output(output_file): + if get_file_size(output_file) == 0: + return + + cprint("--- Output ---", color="yellow") + + with open(output_file, "r") as out: + for line in out: + print("> {0}".format(line), end="") + +class TestRun(object): + def __init__(self, 
test_timeout=None, fail_fast=False, verbose=False, quiet=False): + self.test_timeout = test_timeout + self.fail_fast = fail_fast + self.verbose = verbose + self.quiet = quiet + + self.tests = list() + self.skipped_tests = list() + self.failed_tests = list() + self.passed_tests = list() + + def __repr__(self): + return format_repr(self) + +class expect_exception(object): + def __init__(self, exception_type=Exception, contains=None): + self.exception_type = exception_type + self.contains = contains + + def __enter__(self): + pass + + def __exit__(self, exc_type, exc_value, traceback): + if exc_value is None: + assert False, "Never encountered expected exception {0}".format(self.exception_type.__name__) + + if self.contains is None: + return isinstance(exc_value, self.exception_type) + else: + return isinstance(exc_value, self.exception_type) and self.contains in str(exc_value) + +class expect_error(expect_exception): + def __init__(self, contains=None): + super(expect_error, self).__init__(PlanoError, contains=contains) + +class expect_timeout(expect_exception): + def __init__(self, contains=None): + super(expect_timeout, self).__init__(PlanoTimeout, contains=contains) + +class expect_system_exit(expect_exception): + def __init__(self, contains=None): + super(expect_system_exit, self).__init__(SystemExit, contains=contains) + +class expect_output(temp_file): + def __init__(self, equals=None, contains=None, startswith=None, endswith=None): + super(expect_output, self).__init__() + self.equals = equals + self.contains = contains + self.startswith = startswith + self.endswith = endswith + + def __exit__(self, exc_type, exc_value, traceback): + result = read(self.file) + + if self.equals is None: + assert len(result) > 0, result + else: + assert result == self.equals, result + + if self.contains is not None: + assert self.contains in result, result + + if self.startswith is not None: + assert result.startswith(self.startswith), result + + if self.endswith is not None: + 
assert result.endswith(self.endswith), result + + super(expect_output, self).__exit__(exc_type, exc_value, traceback) + +class PlanoTestCommand(BaseCommand): + def __init__(self, test_modules=[]): + super(PlanoTestCommand, self).__init__() + + self.test_modules = test_modules + + if _inspect.ismodule(self.test_modules): + self.test_modules = [self.test_modules] + + self.parser = BaseArgumentParser() + self.parser.add_argument("include", metavar="PATTERN", nargs="*", default=["*"], + help="Run only tests with names matching PATTERN. This option can be repeated.") + self.parser.add_argument("-e", "--exclude", metavar="PATTERN", action="append", default=[], + help="Do not run tests with names matching PATTERN. This option can be repeated.") + self.parser.add_argument("-m", "--module", action="append", default=[], + help="Load tests from MODULE. This option can be repeated.") + self.parser.add_argument("-l", "--list", action="store_true", + help="Print the test names and exit") + self.parser.add_argument("--enable", metavar="PATTERN", action="append", default=[], + help="Enable disabled tests matching PATTERN. 
This option can be repeated.") + self.parser.add_argument("--timeout", metavar="SECONDS", type=int, default=300, + help="Fail any test running longer than SECONDS (default 300)") + self.parser.add_argument("--fail-fast", action="store_true", + help="Exit on the first failure encountered in a test run") + self.parser.add_argument("--iterations", metavar="COUNT", type=int, default=1, + help="Run the tests COUNT times (default 1)") + + def parse_args(self, args): + return self.parser.parse_args(args) + + def init(self, args): + self.list_only = args.list + self.include_patterns = args.include + self.exclude_patterns = args.exclude + self.enable_patterns = args.enable + self.timeout = args.timeout + self.fail_fast = args.fail_fast + self.iterations = args.iterations + + try: + for name in args.module: + self.test_modules.append(_import_module(name)) + except ImportError as e: + raise PlanoError(e) + + def run(self): + if self.list_only: + print_tests(self.test_modules) + return + + for i in range(self.iterations): + run_tests(self.test_modules, include=self.include_patterns, exclude=self.exclude_patterns, enable=self.enable_patterns, + test_timeout=self.timeout, fail_fast=self.fail_fast, verbose=self.verbose, quiet=self.quiet) + +## Plano command operations + +_command_help = { + "build": "Build artifacts from source", + "clean": "Clean up the source tree", + "dist": "Generate distribution artifacts", + "install": "Install the built artifacts on your system", + "test": "Run the tests", +} + +def command(_function=None, name=None, args=None, parent=None): + class Command(object): + def __init__(self, function): + self.function = function + self.module = _inspect.getmodule(self.function) + + self.name = name + self.args = args + self.parent = parent + + if self.parent is None: + self.name = nvl(self.name, function.__name__.replace("_", "-")) + self.args = self.process_args(self.args) + else: + self.name = nvl(self.name, self.parent.name) + self.args = nvl(self.args, 
self.parent.args) + + doc = _inspect.getdoc(self.function) + + if doc is None: + self.help = _command_help.get(self.name) + self.description = self.help + else: + self.help = doc.split("\n")[0] + self.description = doc + + if self.parent is not None: + self.help = nvl(self.help, self.parent.help) + self.description = nvl(self.description, self.parent.description) + + debug("Defining {0}", self) + + for arg in self.args.values(): + debug(" {0}", str(arg).capitalize()) + + def __repr__(self): + return "command '{0}:{1}'".format(self.module.__name__, self.name) + + def process_args(self, input_args): + sig = _inspect.signature(self.function) + params = list(sig.parameters.values()) + input_args = {x.name: x for x in nvl(input_args, ())} + output_args = _collections.OrderedDict() + + try: + app_param = params.pop(0) + except IndexError: + raise PlanoError("The function for {0} is missing the required 'app' parameter".format(self)) + else: + if app_param.name != "app": + raise PlanoError("The function for {0} is missing the required 'app' parameter".format(self)) + + for param in params: + try: + arg = input_args[param.name] + except KeyError: + arg = CommandArgument(param.name) + + if param.kind is param.POSITIONAL_ONLY: # pragma: nocover + arg.positional = True + elif param.kind is param.POSITIONAL_OR_KEYWORD and param.default is param.empty: + arg.positional = True + elif param.kind is param.POSITIONAL_OR_KEYWORD and param.default is not param.empty: + arg.optional = True + arg.default = param.default + elif param.kind is param.VAR_POSITIONAL: + arg.positional = True + arg.multiple = True + elif param.kind is param.VAR_KEYWORD: + continue + elif param.kind is param.KEYWORD_ONLY: + arg.optional = True + arg.default = param.default + else: # pragma: nocover + raise NotImplementedError(param.kind) + + if arg.type is None and arg.default not in (None, False): # XXX why false? 
+ arg.type = type(arg.default) + + output_args[arg.name] = arg + + return output_args + + def __call__(self, app, *args, **kwargs): + assert isinstance(app, PlanoCommand), app + + command = app.bound_commands[self.name] + + if command is not self: + command(app, *args, **kwargs) + return + + debug("Running {0} {1} {2}".format(self, args, kwargs)) + + app.running_commands.append(self) + + dashes = "--" * len(app.running_commands) + display_args = list(self.get_display_args(args, kwargs)) + + with console_color("magenta", file=_sys.stderr): + eprint("{0}> {1}".format(dashes, self.name), end="") + + if display_args: + eprint(" ({0})".format(", ".join(display_args)), end="") + + eprint() + + self.function(app, *args, **kwargs) + + cprint("<{0} {1}".format(dashes, self.name), color="magenta", file=_sys.stderr) + + app.running_commands.pop() + + if app.running_commands: + name = app.running_commands[-1].name + + cprint("{0}| {1}".format(dashes[:-2], name), color="magenta", file=_sys.stderr) + + def super(self, app, *args, **kwargs): + assert isinstance(app, PlanoCommand), app + + if self.parent is None: + raise PlanoError("You called super() in a command with no parent ({0})".format(self)) + + self.parent.function(app, *args, **kwargs) + + def get_display_args(self, args, kwargs): + for i, arg in enumerate(self.args.values()): + if arg.positional: + if arg.multiple: + for va in args[i:]: + yield repr(va) + elif arg.optional: + value = args[i] + + if value == arg.default: + continue + + yield repr(value) + else: + yield repr(args[i]) + else: + value = kwargs.get(arg.name, arg.default) + + if value == arg.default: + continue + + if value in (True, False): + value = str(value).lower() + else: + value = repr(value) + + yield "{0}={1}".format(arg.display_name, value) + + if _function is None: + return Command + else: + return Command(_function) + +class CommandArgument(object): + def __init__(self, name, display_name=None, type=None, metavar=None, help=None, 
short_option=None, default=None, positional=False): + self.name = name + self.display_name = nvl(display_name, self.name.replace("_", "-")) + self.type = type + self.metavar = nvl(metavar, self.display_name.upper()) + self.help = help + self.short_option = short_option + self.default = default + self.positional = positional + + self.optional = False + self.multiple = False + + def __repr__(self): + return "argument '{0}' (default {1})".format(self.name, repr(self.default)) + +class PlanoCommand(BaseCommand): + def __init__(self, planofile=None): + self.planofile = planofile + + description = "Run commands defined as Python functions" + + self.pre_parser = BaseArgumentParser(description=description, add_help=False) + self.pre_parser.add_argument("-h", "--help", action="store_true", + help="Show this help message and exit") + + if self.planofile is None: + self.pre_parser.add_argument("-f", "--file", + help="Load commands from FILE (default 'Planofile' or '.planofile')") + + self.parser = _argparse.ArgumentParser(parents=(self.pre_parser,), add_help=False, allow_abbrev=False) + + self.bound_commands = _collections.OrderedDict() + self.running_commands = list() + + self.default_command_name = None + self.default_command_args = None + self.default_command_kwargs = None + + # def bind_commands(self, module): + # self._bind_commands(vars(module)) + + def set_default_command(self, name, *args, **kwargs): + self.default_command_name = name + self.default_command_args = args + self.default_command_kwargs = kwargs + + def parse_args(self, args): + pre_args, _ = self.pre_parser.parse_known_args(args) + + self._load_config(getattr(pre_args, "file", None)) + self._process_commands() + + return self.parser.parse_args(args) + + def init(self, args): + # XXX Can this move to the top of run? 
+ if args.help or args.command is None and self.default_command_name is None: + self.parser.print_help() + self.init_only = True + return + + if args.command is None: + self.selected_command = self.bound_commands[self.default_command_name] + self.command_args = self.default_command_args + self.command_kwargs = self.default_command_kwargs + else: + self.selected_command = self.bound_commands[args.command] + self.command_args = list() + self.command_kwargs = dict() + + for arg in self.selected_command.args.values(): + if arg.positional: + if arg.multiple: + self.command_args.extend(getattr(args, arg.name)) + else: + self.command_args.append(getattr(args, arg.name)) + else: + self.command_kwargs[arg.name] = getattr(args, arg.name) + + def run(self): + with Timer() as timer: + self.selected_command(self, *self.command_args, **self.command_kwargs) + + cprint("OK", color="green", file=_sys.stderr, end="") + cprint(" ({0})".format(format_duration(timer.elapsed_time)), color="magenta", file=_sys.stderr) + + def _bind_commands(self, scope): + for var in scope.values(): + if callable(var) and var.__class__.__name__ == "Command": + self.bound_commands[var.name] = var + + def _load_config(self, planofile): + if planofile is None: + planofile = self.planofile + + if planofile is not None and is_dir(planofile): + planofile = self._find_planofile(planofile) + + if planofile is not None and not is_file(planofile): + exit("Planofile '{0}' not found", planofile) + + if planofile is None: + planofile = self._find_planofile(get_current_dir()) + + if planofile is None: + return + + debug("Loading '{0}'", planofile) + + _sys.path.insert(0, join(get_parent_dir(planofile), "python")) + + scope = dict(globals()) + scope["app"] = self + + try: + with open(planofile) as f: + exec(f.read(), scope) + except Exception as e: + error(e) + exit("Failure loading {0}: {1}", repr(planofile), str(e)) + + self._bind_commands(scope) + + def _find_planofile(self, dir): + for name in ("Planofile", 
".planofile"): + path = join(dir, name) + + if is_file(path): + return path + + def _process_commands(self): + subparsers = self.parser.add_subparsers(title="commands", dest="command") + + for command in self.bound_commands.values(): + subparser = subparsers.add_parser(command.name, help=command.help, + description=nvl(command.description, command.help), + formatter_class=_argparse.RawDescriptionHelpFormatter) + + for arg in command.args.values(): + if arg.positional: + if arg.multiple: + subparser.add_argument(arg.name, metavar=arg.metavar, type=arg.type, help=arg.help, nargs="*") + elif arg.optional: + subparser.add_argument(arg.name, metavar=arg.metavar, type=arg.type, help=arg.help, nargs="?", default=arg.default) + else: + subparser.add_argument(arg.name, metavar=arg.metavar, type=arg.type, help=arg.help) + else: + flag_args = list() + + if arg.short_option is not None: + flag_args.append("-{0}".format(arg.short_option)) + + flag_args.append("--{0}".format(arg.display_name)) + + help = arg.help + + if arg.default not in (None, False): + if help is None: + help = "Default value is {0}".format(repr(arg.default)) + else: + help += " (default {0})".format(repr(arg.default)) + + if arg.default is False: + subparser.add_argument(*flag_args, dest=arg.name, default=arg.default, action="store_true", help=help) + else: + subparser.add_argument(*flag_args, dest=arg.name, default=arg.default, metavar=arg.metavar, type=arg.type, help=help) + + _capitalize_help(subparser) + +## Plano shell operations + +class PlanoShellCommand(BaseCommand): + def __init__(self): + self.parser = BaseArgumentParser() + self.parser.add_argument("file", metavar="FILE", nargs="?", + help="Read program from FILE") + self.parser.add_argument("arg", metavar="ARG", nargs="*", + help="Program arguments") + self.parser.add_argument("-c", "--command", + help="A program passed in as a string") + self.parser.add_argument("-i", "--interactive", action="store_true", + help="Operate interactively after 
running the program (if any)") + + def parse_args(self, args): + return self.parser.parse_args(args) + + def init(self, args): + self.file = args.file + self.interactive = args.interactive + self.command = args.command + + def run(self): + stdin_isatty = _os.isatty(_sys.stdin.fileno()) + script = None + + if self.file == "-": # pragma: nocover + script = _sys.stdin.read() + elif self.file is not None: + try: + with open(self.file) as f: + script = f.read() + except IOError as e: + raise PlanoError(e) + elif not stdin_isatty: # pragma: nocover + # Stdin is a pipe + script = _sys.stdin.read() + + if self.command is not None: + exec(self.command, globals()) + + if script is not None: + global ARGS + ARGS = ARGS[1:] + + exec(script, globals()) + + if (self.command is None and self.file is None and stdin_isatty) or self.interactive: # pragma: nocover + _code.InteractiveConsole(locals=globals()).interact() + +if PLANO_DEBUG: # pragma: nocover + enable_logging(level="debug") + +if __name__ == "__main__": # pragma: nocover + PlanoCommand().main() diff --git a/subrepos/skewer/subrepos/plano/python/plano_tests.py b/subrepos/skewer/subrepos/plano/python/plano_tests.py new file mode 100644 index 0000000..87f4a80 --- /dev/null +++ b/subrepos/skewer/subrepos/plano/python/plano_tests.py @@ -0,0 +1,1131 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +import os as _os +import pwd as _pwd +import signal as _signal +import socket as _socket +import sys as _sys +import threading as _threading + +try: + import http.server as _http +except ImportError: # pragma: nocover + import BaseHTTPServer as _http + +from plano import * + +test_project_dir = join(get_parent_dir(get_parent_dir(__file__)), "test-project") + +class test_project(working_dir): + def __enter__(self): + dir = super(test_project, self).__enter__() + copy(test_project_dir, ".", inside=False) + return dir + +TINY_INTERVAL = 0.05 + +@test +def archive_operations(): + with working_dir(): + make_dir("some-dir") + touch("some-dir/some-file") + + make_archive("some-dir") + assert is_file("some-dir.tar.gz") + + extract_archive("some-dir.tar.gz", output_dir="some-subdir") + assert is_dir("some-subdir/some-dir") + assert is_file("some-subdir/some-dir/some-file") + + rename_archive("some-dir.tar.gz", "something-else") + assert is_file("something-else.tar.gz") + + extract_archive("something-else.tar.gz") + assert is_dir("something-else") + assert is_file("something-else/some-file") + +@test +def command_operations(): + class SomeCommand(BaseCommand): + def __init__(self): + self.parser = BaseArgumentParser() + self.parser.add_argument("--interrupt", action="store_true") + self.parser.add_argument("--explode", action="store_true") + + def parse_args(self, args): + return self.parser.parse_args(args) + + def init(self, args): + self.verbose = args.verbose + self.interrupt = args.interrupt + self.explode = args.explode + + def run(self): + if 
self.verbose: + print("Hello") + + if self.interrupt: + raise KeyboardInterrupt() + + if self.explode: + raise PlanoError("Exploded") + + SomeCommand().main([]) + SomeCommand().main(["--interrupt"]) + + with expect_system_exit(): + SomeCommand().main(["--verbose", "--explode"]) + +@test +def console_operations(): + eprint("Here's a story") + eprint("About a", "man named Brady") + + pprint(list_dir()) + pprint(PlanoProcess, 1, "abc", end="\n\n") + + flush() + + with console_color("red"): + print("ALERT") + + print(cformat("AMBER ALERT", color="yellow")) + print(cformat("NO ALERT")) + + cprint("CRITICAL ALERT", color="red", bright=True) + +@test +def dir_operations(): + with working_dir(): + test_dir = make_dir("some-dir") + test_file_1 = touch(join(test_dir, "some-file-1")) + test_file_2 = touch(join(test_dir, "some-file-2")) + + result = list_dir(test_dir) + assert join(test_dir, result[0]) == test_file_1, (join(test_dir, result[0]), test_file_1) + + result = list_dir(test_dir, "*-file-1") + assert result == ["some-file-1"], (result, ["some-file-1"]) + + result = list_dir(test_dir, exclude="*-file-1") + assert result == ["some-file-2"], (result, ["some-file-2"]) + + result = list_dir("some-dir", "*.not-there") + assert result == [], result + + with working_dir(): + result = list_dir() + assert result == [], result + + result = find(test_dir) + assert result == [test_file_1, test_file_2], (result, [test_file_1, test_file_2]) + + result = find(test_dir, "*-file-1") + assert result == [test_file_1], (result, [test_file_1]) + + result = find(test_dir, exclude="*-file-1") + assert result == [test_file_2], (result, [test_file_2]) + + with working_dir(): + result = find() + assert result == [], result + + with working_dir(): + with working_dir("a-dir", quiet=True): + touch("a-file") + + curr_dir = get_current_dir() + prev_dir = change_dir("a-dir") + new_curr_dir = get_current_dir() + new_prev_dir = change_dir(curr_dir) + + assert curr_dir == prev_dir, (curr_dir, prev_dir) 
+ assert new_curr_dir == new_prev_dir, (new_curr_dir, new_prev_dir) + +@test +def env_operations(): + result = join_path_var("a", "b", "c", "a") + assert result == "a:b:c", result + + curr_dir = get_current_dir() + + with working_dir("."): + assert get_current_dir() == curr_dir, (get_current_dir(), curr_dir) + + result = get_home_dir() + assert result == ENV["HOME"], result + + result = get_home_dir("alice") + assert result.endswith("alice"), result + + user = _pwd.getpwuid(_os.getuid())[0] + result = get_user() + assert result == user, (result, user) + + result = get_hostname() + assert result, result + + result = get_program_name() + assert result, result + + result = get_program_name("alpha beta") + assert result == "alpha", result + + result = get_program_name("X=Y alpha beta") + assert result == "alpha", result + + result = which("echo") + assert result, result + + with working_env(YES_I_AM_SET=1): + check_env("YES_I_AM_SET") + + with expect_error(): + check_env("NO_I_AM_NOT") + + with working_env(I_AM_SET_NOW=1, amend=False): + check_env("I_AM_SET_NOW") + assert "YES_I_AM_SET" not in ENV, ENV + + with working_env(SOME_VAR=1): + assert ENV["SOME_VAR"] == "1", ENV.get("SOME_VAR") + + with working_env(SOME_VAR=2): + assert ENV["SOME_VAR"] == "2", ENV.get("SOME_VAR") + + with expect_error(): + check_program("not-there") + + with expect_error(): + check_module("not_there") + + with expect_output(contains="ARGS:") as out: + with open(out, "w") as f: + print_env(file=f) + +@test +def file_operations(): + with working_dir(): + alpha_dir = make_dir("alpha-dir") + alpha_file = touch(join(alpha_dir, "alpha-file")) + alpha_link = make_link(join(alpha_dir, "alpha-file-link"), "alpha-file") + alpha_broken_link = make_link(join(alpha_dir, "broken-link"), "no-such-file") + + beta_dir = make_dir("beta-dir") + beta_file = touch(join(beta_dir, "beta-file")) + beta_link = make_link(join(beta_dir, "beta-file-link"), "beta-file") + beta_broken_link = make_link(join(beta_dir, 
"broken-link"), join("..", alpha_dir, "no-such-file")) + beta_another_link = make_link(join(beta_dir, "another-link"), join("..", alpha_dir, "alpha-file-link")) + + assert exists(beta_link) + assert exists(beta_file) + + with working_dir("beta-dir"): + assert is_file(read_link("beta-file-link")) + + copied_file = copy(alpha_file, beta_dir) + assert copied_file == join(beta_dir, "alpha-file"), copied_file + assert is_file(copied_file), list_dir(beta_dir) + + copied_link = copy(beta_link, join(beta_dir, "beta-file-link-copy")) + assert copied_link == join(beta_dir, "beta-file-link-copy"), copied_link + assert is_link(copied_link), list_dir(beta_dir) + + copied_dir = copy(alpha_dir, beta_dir) + assert copied_dir == join(beta_dir, "alpha-dir"), copied_dir + assert is_link(join(copied_dir, "alpha-file-link")) + + moved_file = move(beta_file, alpha_dir) + assert moved_file == join(alpha_dir, "beta-file"), moved_file + assert is_file(moved_file), list_dir(alpha_dir) + assert not exists(beta_file), list_dir(beta_dir) + + moved_dir = move(beta_dir, alpha_dir) + assert moved_dir == join(alpha_dir, "beta-dir"), moved_dir + assert is_dir(moved_dir), list_dir(alpha_dir) + assert not exists(beta_dir) + + gamma_dir = make_dir("gamma-dir") + gamma_file = touch(join(gamma_dir, "gamma-file")) + + delta_dir = make_dir("delta-dir") + delta_file = touch(join(delta_dir, "delta-file")) + + copy(gamma_dir, delta_dir, inside=False) + assert is_file(join("delta-dir", "gamma-file")) + + move(gamma_dir, delta_dir, inside=False) + assert is_file(join("delta-dir", "gamma-file")) + assert not exists(gamma_dir) + + epsilon_dir = make_dir("epsilon-dir") + epsilon_file_1 = touch(join(epsilon_dir, "epsilon-file-1")) + epsilon_file_2 = touch(join(epsilon_dir, "epsilon-file-2")) + epsilon_file_3 = touch(join(epsilon_dir, "epsilon-file-3")) + epsilon_file_4 = touch(join(epsilon_dir, "epsilon-file-4")) + + remove("not-there") + + remove(epsilon_file_2) + assert not exists(epsilon_file_2) + + 
remove(epsilon_dir) + assert not exists(epsilon_file_1) + assert not exists(epsilon_dir) + + remove([epsilon_file_3, epsilon_file_4]) + assert not exists(epsilon_file_3) + assert not exists(epsilon_file_4) + + file = write("xes", "x" * 10) + result = get_file_size(file) + assert result == 10, result + +@test +def http_operations(): + class Handler(_http.BaseHTTPRequestHandler): + def do_GET(self): + self.send_response(200) + self.end_headers() + self.wfile.write(b"[1]") + + def do_POST(self): + length = int(self.headers["content-length"]) + content = self.rfile.read(length) + + self.send_response(200) + self.end_headers() + self.wfile.write(content) + + def do_PUT(self): + length = int(self.headers["content-length"]) + content = self.rfile.read(length) + + self.send_response(200) + self.end_headers() + + class ServerThread(_threading.Thread): + def __init__(self, server): + _threading.Thread.__init__(self) + self.server = server + + def run(self): + self.server.serve_forever() + + host, port = "localhost", get_random_port() + url = "http://{0}:{1}".format(host, port) + server = _http.HTTPServer((host, port), Handler) + server_thread = ServerThread(server) + + server_thread.start() + + try: + with working_dir(): + result = http_get(url) + assert result == "[1]", result + + result = http_get(url, insecure=True) + assert result == "[1]", result + + result = http_get(url, output_file="a") + output = read("a") + assert result is None, result + assert output == "[1]", output + + result = http_get_json(url) + assert result == [1], result + + file_b = write("b", "[2]") + + result = http_post(url, read(file_b), insecure=True) + assert result == "[2]", result + + result = http_post(url, read(file_b), output_file="x") + output = read("x") + assert result is None, result + assert output == "[2]", output + + result = http_post_file(url, file_b) + assert result == "[2]", result + + result = http_post_json(url, parse_json(read(file_b))) + assert result == [2], result + + file_c = 
write("c", "[3]") + + result = http_put(url, read(file_c), insecure=True) + assert result is None, result + + result = http_put_file(url, file_c) + assert result is None, result + + result = http_put_json(url, parse_json(read(file_c))) + assert result is None, result + finally: + server.shutdown() + server.server_close() + server_thread.join() + +@test +def io_operations(): + with working_dir(): + input_ = "some-text\n" + file_a = write("a", input_) + output = read(file_a) + + assert input_ == output, (input_, output) + + pre_input = "pre-some-text\n" + post_input = "post-some-text\n" + + prepend(file_a, pre_input) + append(file_a, post_input) + + output = tail(file_a, 100) + tailed = tail(file_a, 1) + + assert output.startswith(pre_input), (output, pre_input) + assert output.endswith(post_input), (output, post_input) + assert tailed == post_input, (tailed, post_input) + + input_lines = [ + "alpha\n", + "beta\n", + "gamma\n", + ] + + file_b = write_lines("b", input_lines) + output_lines = read_lines(file_b) + + assert input_lines == output_lines, (input_lines, output_lines) + + pre_lines = ["pre-alpha\n"] + post_lines = ["post-gamma\n"] + + prepend_lines(file_b, pre_lines) + append_lines(file_b, post_lines) + + output_lines = tail_lines(file_b, 100) + tailed_lines = tail_lines(file_b, 1) + + assert output_lines[0] == pre_lines[0], (output_lines[0], pre_lines[0]) + assert output_lines[4] == post_lines[0], (output_lines[4], post_lines[0]) + assert tailed_lines[0] == post_lines[0], (tailed_lines[0], post_lines[0]) + + file_c = touch("c") + assert is_file(file_c), file_c + + file_d = write("d", "front@middle@@middle@back") + replace_in_file(file_d, "@middle@", "M", count=1) + result = read(file_d) + assert result == "frontM@middle@back", result + +@test +def iterable_operations(): + result = unique([1, 1, 1, 2, 2, 3]) + assert result == [1, 2, 3], result + + result = skip([1, "", 2, None, 3]) + assert result == [1, 2, 3], result + + result = skip([1, "", 2, None, 3], 
2) + assert result == [1, "", None, 3], result + +@test +def json_operations(): + with working_dir(): + input_data = { + "alpha": [1, 2, 3], + } + + file_a = write_json("a", input_data) + output_data = read_json(file_a) + + assert input_data == output_data, (input_data, output_data) + + json = read(file_a) + parsed_data = parse_json(json) + emitted_json = emit_json(input_data) + + assert input_data == parsed_data, (input_data, parsed_data) + assert json == emitted_json, (json, emitted_json) + +@test +def link_operations(): + with working_dir(): + make_dir("some-dir") + path = get_absolute_path(touch("some-dir/some-file")) + + with working_dir("another-dir"): + link = make_link("a-link", path) + linked_path = read_link(link) + assert linked_path == path, (linked_path, path) + +@test +def logging_operations(): + error("Error!") + warn("Warning!") + notice("Take a look!") + notice(123) + debug("By the way") + debug("abc{0}{1}{2}", 1, 2, 3) + + with expect_exception(RuntimeError): + fail(RuntimeError("Error!")) + + with expect_error(): + fail("Error!") + + for level in ("debug", "notice", "warn", "error"): + with expect_output(contains="Hello") as out: + with logging_disabled(): + with logging_enabled(level=level, output=out): + log(level, "hello") + + with expect_output(equals="") as out: + with logging_enabled(output=out): + with logging_disabled(): + error("Yikes") + +@test +def path_operations(): + with working_dir("/"): + curr_dir = get_current_dir() + assert curr_dir == "/", curr_dir + + path = "a/b/c" + result = get_absolute_path(path) + assert result == join(curr_dir, path), result + + path = "/x/y/z" + result = get_absolute_path(path) + assert result == path, result + + path = "/x/y/z" + assert is_absolute(path) + + path = "x/y/z" + assert not is_absolute(path) + + path = "a//b/../c/" + result = normalize_path(path) + assert result == "a/c", result + + path = "/a/../c" + result = get_real_path(path) + assert result == "/c", result + + path = "/a/b" + result = 
get_relative_path(path, "/a/c") + assert result == "../b", result + + path = "/a/b" + result = get_file_url(path) + assert result == "file:/a/b", result + + with working_dir(): + result = get_file_url("afile") + assert result == "file:{0}/afile".format(get_current_dir()), result + + path = "/alpha/beta.ext" + path_split = "/alpha", "beta.ext" + path_split_extension = "/alpha/beta", ".ext" + name_split_extension = "beta", ".ext" + + result = join(*path_split) + assert result == path, result + + result = split(path) + assert result == path_split, result + + result = split_extension(path) + assert result == path_split_extension, result + + result = get_parent_dir(path) + assert result == path_split[0], result + + result = get_base_name(path) + assert result == path_split[1], result + + result = get_name_stem(path) + assert result == name_split_extension[0], result + + result = get_name_stem("alpha.tar.gz") + assert result == "alpha", result + + result = get_name_extension(path) + assert result == name_split_extension[1], result + + with working_dir(): + touch("adir/afile") + + check_exists("adir") + check_exists("adir/afile") + check_dir("adir") + check_file("adir/afile") + + with expect_error(): + check_exists("adir/notafile") + + with expect_error(): + check_file("adir/notafile") + + with expect_error(): + check_file("adir") + + with expect_error(): + check_dir("not-there") + + with expect_error(): + check_dir("adir/afile") + + await_exists("adir/afile") + + with expect_timeout(): + await_exists("adir/notafile", timeout=TINY_INTERVAL) + +@test +def port_operations(): + result = get_random_port() + assert result >= 49152 and result <= 65535, result + + server_port = get_random_port() + server_socket = _socket.socket(_socket.AF_INET, _socket.SOCK_STREAM) + + try: + server_socket.bind(("localhost", server_port)) + server_socket.listen(5) + + await_port(server_port) + await_port(str(server_port)) + + check_port(server_port) + + with expect_error(): + 
get_random_port(min=server_port, max=server_port) + finally: + server_socket.close() + + with expect_timeout(): + await_port(get_random_port(), timeout=TINY_INTERVAL) + +@test +def process_operations(): + result = get_process_id() + assert result, result + + proc = run("date") + assert proc is not None, proc + + print(repr(proc)) + + run("date", stash=True) + + proc = run(["echo", "hello"], check=False) + assert proc.exit_code == 0, proc.exit_code + + proc = run("cat /uh/uh", check=False) + assert proc.exit_code > 0, proc.exit_code + + with expect_output() as out: + run("date", output=out) + + run("date", output=DEVNULL) + run("date", stdin=DEVNULL) + run("date", stdout=DEVNULL) + run("date", stderr=DEVNULL) + + run("echo hello", quiet=True) + run("echo hello | cat", shell=True) + run(["echo", "hello"], shell=True) + + with expect_error(): + run("/not/there") + + with expect_error(): + run("cat /whoa/not/really", stash=True) + + result = call("echo hello") + assert result == "hello\n", result + + result = call("echo hello | cat", shell=True) + assert result == "hello\n", result + + with expect_error(): + call("cat /whoa/not/really") + + if PYTHON3: + proc = start("sleep 10") + + with expect_timeout(): + wait(proc, timeout=TINY_INTERVAL) + + proc = start("echo hello") + sleep(TINY_INTERVAL) + stop(proc) + + proc = start("sleep 10") + stop(proc) + + proc = start("sleep 10") + kill(proc) + sleep(TINY_INTERVAL) + stop(proc) + + proc = start("date --not-there") + sleep(TINY_INTERVAL) + stop(proc) + + with start("sleep 10"): + sleep(TINY_INTERVAL) + + with working_dir(): + touch("i") + + with start("date", stdin="i", stdout="o", stderr="e"): + pass + + with expect_system_exit(): + exit() + + with expect_system_exit(): + exit(verbose=True) + + with expect_system_exit(): + exit("abc") + + with expect_system_exit(): + exit("abc", verbose=True) + + with expect_system_exit(): + exit(Exception()) + + with expect_system_exit(): + exit(Exception(), verbose=True) + + with 
expect_system_exit(): + exit(123) + + with expect_system_exit(): + exit(123, verbose=True) + + with expect_system_exit(): + exit(-123) + + with expect_exception(PlanoException): + exit(object()) + +@test +def string_operations(): + result = replace("ab", "a", "b") + assert result == "bb", result + + result = replace("aba", "a", "b", count=1) + assert result == "bba", result + + result = remove_prefix(None, "xxx") + assert result == "", result + + result = remove_prefix("anterior", "ant") + assert result == "erior", result + + result = remove_prefix("anterior", "ext") + assert result == "anterior", result + + result = remove_suffix(None, "xxx") + assert result == "", result + + result = remove_suffix("exterior", "ior") + assert result == "exter", result + + result = remove_suffix("exterior", "nal") + assert result == "exterior" + + result = shorten("abc", 2) + assert result == "ab", result + + result = shorten("abc", None) + assert result == "abc", result + + result = shorten("abc", 10) + assert result == "abc", result + + result = shorten("ellipsis", 6, ellipsis="...") + assert result == "ell...", result + + result = shorten(None, 6) + assert result == "", result + + result = plural(None) + assert result == "", result + + result = plural("") + assert result == "", result + + result = plural("test") + assert result == "tests", result + + result = plural("test", 1) + assert result == "test", result + + result = plural("bus") + assert result == "busses", result + + result = plural("bus", 1) + assert result == "bus", result + + result = plural("terminus", 2, "termini") + assert result == "termini", result + + result = capitalize(None) + assert result == "", result + + result = capitalize("") + assert result == "", result + + result = capitalize("hello, Frank") + assert result == "Hello, Frank", result + + encoded_result = base64_encode(b"abc") + decoded_result = base64_decode(encoded_result) + assert decoded_result == b"abc", decoded_result + + encoded_result = 
url_encode("abc=123&yeah!") + decoded_result = url_decode(encoded_result) + assert decoded_result == "abc=123&yeah!", decoded_result + +@test +def temp_operations(): + system_temp_dir = get_system_temp_dir() + + result = make_temp_file() + assert result.startswith(system_temp_dir), result + + result = make_temp_file(suffix=".txt") + assert result.endswith(".txt"), result + + result = make_temp_dir() + assert result.startswith(system_temp_dir), result + + with temp_dir() as d: + assert is_dir(d), d + list_dir(d) + + with temp_file() as f: + assert is_file(f), f + write(f, "test") + + with working_dir() as d: + assert is_dir(d), d + list_dir(d) + + user_temp_dir = get_user_temp_dir() + assert user_temp_dir, user_temp_dir + + ENV.pop("XDG_RUNTIME_DIR", None) + + user_temp_dir = get_user_temp_dir() + assert user_temp_dir, user_temp_dir + +@test +def test_operations(): + with test_project(): + with working_module_path("python"): + import chucker + import chucker_tests + + print_tests(chucker_tests) + + for verbose in (False, True): + run_tests(chucker_tests, verbose=verbose) + run_tests(chucker_tests, exclude="*hello*", verbose=verbose) + + with expect_error(): + run_tests(chucker, verbose=verbose) + + with expect_error(): + run_tests(chucker_tests, enable="*badbye*", verbose=verbose) + + with expect_error(): + run_tests(chucker_tests, enable="*badbye*", fail_fast=True, verbose=verbose) + + with expect_exception(KeyboardInterrupt): + run_tests(chucker_tests, enable="test_keyboard_interrupt", verbose=verbose) + + with expect_error(): + run_tests(chucker_tests, enable="test_timeout", verbose=verbose) + + with expect_error(): + run_tests(chucker_tests, enable="test_process_error", verbose=verbose) + + with expect_error(): + run_tests(chucker_tests, enable="test_system_exit", verbose=verbose) + + with expect_system_exit(): + PlanoTestCommand().main(["--module", "nosuchmodule"]) + + def run_command(*args): + PlanoTestCommand(chucker_tests).main(args) + + 
run_command("--verbose") + run_command("--list") + + with expect_system_exit(): + run_command("--enable", "*badbye*") + + with expect_system_exit(): + run_command("--enable", "*badbye*", "--verbose") + + try: + with expect_exception(): + pass + raise Exception() # pragma: nocover + except AssertionError: + pass + + with expect_output(equals="abc123", contains="bc12", startswith="abc", endswith="123") as out: + write(out, "abc123") + +@test +def time_operations(): + start_time = get_time() + + sleep(TINY_INTERVAL) + + assert get_time() - start_time > TINY_INTERVAL + + with expect_system_exit(): + with start("sleep 10"): + from plano import _default_sigterm_handler + _default_sigterm_handler(_signal.SIGTERM, None) + + result = format_duration(0.1) + assert result == "0.1s", result + + result = format_duration(1) + assert result == "1s", result + + result = format_duration(1, align=True) + assert result == "1.0s", result + + result = format_duration(60) + assert result == "60s", result + + result = format_duration(3600) + assert result == "1h", result + + with Timer() as timer: + sleep(TINY_INTERVAL) + assert timer.elapsed_time > TINY_INTERVAL + + assert timer.elapsed_time > TINY_INTERVAL + + with expect_timeout(): + with Timer(timeout=TINY_INTERVAL) as timer: + sleep(10) + +@test +def unique_id_operations(): + id1 = get_unique_id() + id2 = get_unique_id() + + assert id1 != id2, (id1, id2) + + result = get_unique_id(1) + assert len(result) == 2 + + result = get_unique_id(16) + assert len(result) == 32 + +@test +def value_operations(): + result = nvl(None, "a") + assert result == "a", result + + result = nvl("b", "a") + assert result == "b", result + + assert is_string("a") + assert not is_string(1) + + for value in (None, "", (), [], {}): + assert is_empty(value), value + + for value in (object(), " ", (1,), [1], {"a": 1}): + assert not is_empty(value), value + + result = pformat({"z": 1, "a": 2}) + assert result == "{'a': 2, 'z': 1}", result + + result = 
format_empty((), "[nothing]") + assert result == "[nothing]", result + + result = format_empty((1,), "[nothing]") + assert result == (1,), result + + result = format_not_empty("abc", "[{0}]") + assert result == "[abc]", result + + result = format_not_empty({}, "[{0}]") + assert result == {}, result + + result = format_repr(Namespace(a=1, b=2), limit=1) + assert result == "Namespace(a=1)", result + + result = Namespace(a=1, b=2) + assert result.a == 1, result + assert result.b == 2, result + assert "a" in result, result + assert "c" not in result, result + repr(result) + + other = Namespace(a=1, b=2, c=3) + assert result != other, (result, other) + +@test +def yaml_operations(): + try: + import yaml as _yaml + except ImportError: + raise PlanoTestSkipped("PyYAML is not available") + + with working_dir(): + input_data = { + "alpha": [1, 2, 3], + } + + file_a = write_yaml("a", input_data) + output_data = read_yaml(file_a) + + assert input_data == output_data, (input_data, output_data) + + yaml = read(file_a) + parsed_data = parse_yaml(yaml) + emitted_yaml = emit_yaml(input_data) + + assert input_data == parsed_data, (input_data, parsed_data) + assert yaml == emitted_yaml, (yaml, emitted_yaml) + +@test +def plano_command(): + if PYTHON2: # pragma: nocover + raise PlanoTestSkipped("The plano command is not supported on Python 2") + + with working_dir(): + PlanoCommand().main([]) + + with working_dir(): + write("Planofile", "garbage") + + with expect_system_exit(): + PlanoCommand().main([]) + + with expect_system_exit(): + PlanoCommand("no-such-file").main([]) + + with expect_system_exit(): + PlanoCommand().main(["-f", "no-such-file"]) + + def run_command(*args): + PlanoCommand().main(["-f", test_project_dir] + list(args)) + + with test_project(): + run_command() + run_command("--help") + run_command("--quiet") + run_command("--init-only") + + run_command("build") + run_command("install") + run_command("clean") + + with expect_system_exit(): + run_command("build", 
"--help") + + with expect_system_exit(): + run_command("no-such-command") + + with expect_system_exit(): + run_command("no-such-command", "--help") + + with expect_system_exit(): + run_command("--help", "no-such-command") + + run_command("extended-command", "a", "b", "--omega", "z") + + with expect_system_exit(): + run_command("echo") + + with expect_exception(contains="Trouble"): + run_command("echo", "Hello", "--trouble") + + run_command("echo", "Hello", "--count", "5") + + with expect_system_exit(): + run_command("echo", "Hello", "--count", "not-an-int") + + run_command("haberdash", "ballcap", "fedora", "hardhat", "--last", "turban") + result = read_json("haberdash.json") + assert result == ["ballcap", "fedora", "hardhat", "turban"], result + + run_command("haberdash", "ballcap", "--last", "turban") + result = read_json("haberdash.json") + assert result == ["ballcap", "turban"], result + + run_command("haberdash", "ballcap") + result = read_json("haberdash.json") + assert result == ["ballcap", "bowler"], result + + run_command("balderdash", "bunk", "poppycock") + result = read_json("balderdash.json") + assert result == ["bunk", "poppycock", "rubbish"], result + + run_command("balderdash", "bunk") + result = read_json("balderdash.json") + assert result == ["bunk", "malarkey", "rubbish"], result + + run_command("balderdash", "bunk", "--other", "bollocks") + result = read_json("balderdash.json") + assert result == ["bunk", "malarkey", "bollocks"], result + +@test +def plano_shell_command(): + python_dir = get_absolute_path("python") + + with working_dir(): + write("script1", "garbage") + + with expect_exception(NameError): + PlanoShellCommand().main(["script1"]) + + write("script2", "print_env()") + + PlanoShellCommand().main(["script2"]) + + PlanoShellCommand().main(["--command", "print_env()"]) + + write("command", "from plano import *; PlanoShellCommand().main()") + + with working_env(PYTHONPATH=python_dir): + run("{0} command".format(_sys.executable), 
input="cprint('Hi!', color='green'); exit()") + run("echo \"cprint('Bi!', color='red')\" | {0} command -".format(_sys.executable), shell=True) + + with expect_system_exit(): + PlanoShellCommand().main(["no-such-file"]) diff --git a/subrepos/skewer/subrepos/plano/scripts/devel.sh b/subrepos/skewer/subrepos/plano/scripts/devel.sh new file mode 100644 index 0000000..88b563c --- /dev/null +++ b/subrepos/skewer/subrepos/plano/scripts/devel.sh @@ -0,0 +1,2 @@ +export PATH=$(echo $PWD/build/scripts-*):$PATH +export PYTHONPATH=$PWD/build/lib:$PWD/python diff --git a/subrepos/skewer/subrepos/plano/scripts/test b/subrepos/skewer/subrepos/plano/scripts/test new file mode 100755 index 0000000..40a1897 --- /dev/null +++ b/subrepos/skewer/subrepos/plano/scripts/test @@ -0,0 +1,31 @@ +#!/usr/bin/python3 +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# + +from plano import * + +if __name__ == "__main__": + import plano_tests + test_modules = [plano_tests] + + if PYTHON3: + import bullseye_tests + test_modules.append(bullseye_tests) + + PlanoTestCommand(test_modules).main() diff --git a/subrepos/skewer/subrepos/plano/scripts/test-bootstrap.dockerfile b/subrepos/skewer/subrepos/plano/scripts/test-bootstrap.dockerfile new file mode 100644 index 0000000..f6e0416 --- /dev/null +++ b/subrepos/skewer/subrepos/plano/scripts/test-bootstrap.dockerfile @@ -0,0 +1,46 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# + +FROM fedora + +RUN dnf -qy update && dnf -q clean all + +RUN dnf -y install python git + +COPY test-project /root/test-project +COPY bin/plano /root/test-project/plano + +WORKDIR /root/test-project +RUN git init + +RUN mkdir /root/test-project/modules + +WORKDIR /root/test-project/modules +RUN git submodule add https://github.com/ssorj/plano.git + +WORKDIR /root/test-project/python +RUN ln -s ../modules/plano/python/plano.py +RUN ln -s ../modules/plano/python/bullseye.py + +WORKDIR /root/test-project +RUN ./plano || : + +RUN git submodule update --init + +CMD ["./plano"] diff --git a/subrepos/skewer/subrepos/plano/scripts/test-centos-7.dockerfile b/subrepos/skewer/subrepos/plano/scripts/test-centos-7.dockerfile new file mode 100644 index 0000000..89d107e --- /dev/null +++ b/subrepos/skewer/subrepos/plano/scripts/test-centos-7.dockerfile @@ -0,0 +1,30 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +FROM centos:7 + +RUN yum -q -y update && yum -q clean all + +RUN yum -y install epel-release + +RUN yum -y install make python2-pyyaml python36 python36-PyYAML + +COPY . 
/root/plano +WORKDIR /root/plano +CMD ["make", "clean", "test", "install", "PREFIX=/usr/local"] diff --git a/subrepos/skewer/subrepos/plano/scripts/test-centos-8.dockerfile b/subrepos/skewer/subrepos/plano/scripts/test-centos-8.dockerfile new file mode 100644 index 0000000..5febac2 --- /dev/null +++ b/subrepos/skewer/subrepos/plano/scripts/test-centos-8.dockerfile @@ -0,0 +1,28 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +FROM centos:8 + +RUN dnf -qy update && dnf -q clean all + +RUN dnf -y install make python2 python2-pyyaml python3 python3-pyyaml + +COPY . /root/plano +WORKDIR /root/plano +CMD ["make", "clean", "test", "install", "PREFIX=/usr/local"] diff --git a/subrepos/skewer/subrepos/plano/scripts/test-fedora.dockerfile b/subrepos/skewer/subrepos/plano/scripts/test-fedora.dockerfile new file mode 100644 index 0000000..def55a1 --- /dev/null +++ b/subrepos/skewer/subrepos/plano/scripts/test-fedora.dockerfile @@ -0,0 +1,28 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. 
The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +FROM fedora + +RUN dnf -qy update && dnf -q clean all + +RUN dnf -y install make python2 findutils python3-pyyaml + +COPY . /root/plano +WORKDIR /root/plano +CMD ["make", "clean", "test", "install", "PREFIX=/usr/local"] diff --git a/subrepos/skewer/subrepos/plano/scripts/test-ubuntu.dockerfile b/subrepos/skewer/subrepos/plano/scripts/test-ubuntu.dockerfile new file mode 100644 index 0000000..c9e18c9 --- /dev/null +++ b/subrepos/skewer/subrepos/plano/scripts/test-ubuntu.dockerfile @@ -0,0 +1,28 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# + +FROM ubuntu + +RUN apt-get update -qq && apt-get upgrade -y -qq + +RUN apt-get -y install curl make python python3 python3-distutils python3-yaml + +COPY . /root/plano +WORKDIR /root/plano +CMD ["make", "test", "install", "PREFIX=/usr/local"] diff --git a/subrepos/skewer/subrepos/plano/setup.py b/subrepos/skewer/subrepos/plano/setup.py new file mode 100755 index 0000000..66a0ef0 --- /dev/null +++ b/subrepos/skewer/subrepos/plano/setup.py @@ -0,0 +1,75 @@ +#!/usr/bin/python3 +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# + +import collections +import os +import tempfile + +from distutils.core import setup +from distutils.command.build_scripts import build_scripts +from distutils.file_util import copy_file + +class _build_scripts(build_scripts): + def run(self): + try: + prefix = self.distribution.command_options["install"]["prefix"][1] + except KeyError: + prefix = "/usr/local" + + temp_dir = tempfile.mkdtemp() + default_home = os.path.join(prefix, "lib", "plano") + + for name in os.listdir("bin"): + if name.endswith(".in"): + in_path = os.path.join("bin", name) + out_path = os.path.join(temp_dir, name[:-3]) + + content = open(in_path).read() + content = content.replace("@default_home@", default_home) + open(out_path, "w").write(content) + + self.scripts.remove(in_path) + self.scripts.append(out_path) + + super(_build_scripts, self).run() + +def find_data_files(dir, output_prefix): + data_files = collections.defaultdict(list) + + for root, dirs, files in os.walk(dir): + for name in files: + data_files[os.path.join(output_prefix, root)].append(os.path.join(root, name)) + + return [(k, v) for k, v in data_files.items()] + +setup(name="plano", + version="1.0.0-SNAPSHOT", + url="https://github.com/ssorj/plano", + author="Justin Ross", + author_email="justin.ross@gmail.com", + cmdclass={'build_scripts': _build_scripts}, + py_modules=["plano"], + package_dir={"": "python"}, + data_files=[("lib/plano/python", ["python/plano_tests.py", + "python/bullseye.py", + "python/bullseye.strings", + "python/bullseye_tests.py"]), + *find_data_files("test-project", "lib/plano")], + scripts=["bin/plano", "bin/planosh", "bin/planotest", "bin/plano-self-test.in"]) diff --git a/subrepos/skewer/subrepos/plano/test-project/Planofile b/subrepos/skewer/subrepos/plano/test-project/Planofile new file mode 100644 index 0000000..c041efa --- /dev/null +++ b/subrepos/skewer/subrepos/plano/test-project/Planofile @@ -0,0 +1,75 @@ +from bullseye import * + +app.set_default_command("build", prefix="/tmp/alpha") + 
+project.name = "chucker" +project.data_dirs = ["files"] +project.excluded_modules = ["flipper"] +project.test_modules = ["chucker_tests"] + +result_file = "build/result.json" + +@command(parent=build) +def build(app, *args, **kwargs): + build.super(app, *args, **kwargs) + + notice("Extended building") + + data = {"built": True} + write_json(result_file, data) + +@command(parent=test) +def test(app, *args, **kwargs): + test.super(app, *args, **kwargs) + + notice("Extended testing") + + check_file(result_file) + + if exists(result_file): + data = read_json(result_file) + data["tested"] = True + write_json(result_file, data) + +@command(parent=install) +def install(app, *args, **kwargs): + install.super(app, *args, **kwargs) + + notice("Extended installing") + + data = read_json(result_file) + data["installed"] = True + write_json(result_file, data) + +@command +def base_command(app, alpha, beta, omega="x"): + print("base", alpha, beta, omega) + +@command(name="extended-command", parent=base_command) +def extended_command(app, alpha, beta, omega="y"): + print("extended", alpha, omega) + extended_command.super(app, alpha, beta, omega) + +@command(args=(CommandArgument("message_", help="The message to print", display_name="message"), + CommandArgument("count", help="Print the message COUNT times"), + CommandArgument("extra", default=1, short_option="e"))) +def echo(app, message_, count=1, extra=None, trouble=False): + """Print a message to the console""" + + print("Echoing (message={0}, count={1})".format(message_, count)) + + if trouble: + raise Exception("Trouble") + + for i in range(count): + print(message_) + +@command +def haberdash(app, first, *middle, last="bowler"): + data = [first, *middle, last] + write_json("haberdash.json", data) + +@command(args=(CommandArgument("optional", positional=True),)) +def balderdash(app, required, optional="malarkey", other="rubbish"): + data = [required, optional, other] + write_json("balderdash.json", data) diff --git 
a/subrepos/skewer/subrepos/plano/test-project/bin/chucker-test b/subrepos/skewer/subrepos/plano/test-project/bin/chucker-test new file mode 100644 index 0000000..e69de29 diff --git a/subrepos/skewer/subrepos/plano/test-project/bin/chucker.in b/subrepos/skewer/subrepos/plano/test-project/bin/chucker.in new file mode 100644 index 0000000..f338f8a --- /dev/null +++ b/subrepos/skewer/subrepos/plano/test-project/bin/chucker.in @@ -0,0 +1 @@ +@default_home@ diff --git a/subrepos/skewer/subrepos/plano/test-project/files/notes.txt b/subrepos/skewer/subrepos/plano/test-project/files/notes.txt new file mode 100644 index 0000000..e69de29 diff --git a/subrepos/skewer/subrepos/plano/test-project/python/chucker.py b/subrepos/skewer/subrepos/plano/test-project/python/chucker.py new file mode 100644 index 0000000..e69de29 diff --git a/subrepos/skewer/subrepos/plano/test-project/python/chucker_tests.py b/subrepos/skewer/subrepos/plano/test-project/python/chucker_tests.py new file mode 100644 index 0000000..95a1c6b --- /dev/null +++ b/subrepos/skewer/subrepos/plano/test-project/python/chucker_tests.py @@ -0,0 +1,35 @@ +from plano import * + +@test +def test_hello(): + print("Hello") + +@test +def test_goodbye(): + print("Goodbye") + +@test(disabled=True) +def test_badbye(): + print("Badbye") + assert False + +@test +def test_skipped(): + raise PlanoTestSkipped("Test coverage") + +@test(disabled=True) +def test_keyboard_interrupt(): + raise KeyboardInterrupt() + +@test(disabled=True, timeout=0.05) +def test_timeout(): + sleep(10, quiet=True) + assert False + +@test(disabled=True) +def test_process_error(): + run("expr 1 / 0") + +@test(disabled=True) +def test_system_exit(): + exit(1) diff --git a/subrepos/skewer/subrepos/plano/test-project/python/flipper.py b/subrepos/skewer/subrepos/plano/test-project/python/flipper.py new file mode 100644 index 0000000..e69de29 diff --git a/subrepos/skewer/test-example/.gitignore b/subrepos/skewer/test-example/.gitignore new file mode 100644 index 
0000000..7bd2dc8 --- /dev/null +++ b/subrepos/skewer/test-example/.gitignore @@ -0,0 +1 @@ +/README.html diff --git a/subrepos/skewer/test-example/Planofile b/subrepos/skewer/test-example/Planofile new file mode 100644 index 0000000..cd893fd --- /dev/null +++ b/subrepos/skewer/test-example/Planofile @@ -0,0 +1,37 @@ +from skewer import * + +@command +def generate(app): + """ + Generate README.md from the data in skewer.yaml + """ + generate_readme("skewer.yaml", "README.md") + +@command +def render(app): + """ + Render README.html from the data in skewer.yaml + """ + check_program("pandoc") + + generate(app) + + run(f"pandoc -o README.html README.md") + + print(f"file:{get_real_path('README.html')}") + +@command +def test(app): + """ + Test the example using Minikube + """ + generate_readme("skewer.yaml", make_temp_file()) + run_steps_on_minikube("skewer.yaml") + +@command +def test_external(app, west_kubeconfig, east_kubeconfig): + """ + Test the example against external clusters + """ + generate_readme("skewer.yaml", make_temp_file()) + run_steps_external("skewer.yaml", west=west_kubeconfig, east=east_kubeconfig) diff --git a/subrepos/skewer/test-example/README.md b/subrepos/skewer/test-example/README.md new file mode 100644 index 0000000..419fb92 --- /dev/null +++ b/subrepos/skewer/test-example/README.md @@ -0,0 +1,355 @@ +# Skupper Hello World + +[![main](https://github.com/skupperproject/skewer/actions/workflows/main.yaml/badge.svg)](https://github.com/skupperproject/skewer/actions/workflows/main.yaml) + +#### A minimal HTTP application deployed across Kubernetes clusters using Skupper + +This example is part of a [suite of examples][examples] showing the +different ways you can use [Skupper][website] to connect services +across cloud providers, data centers, and edge sites. 
+ +[website]: https://skupper.io/ +[examples]: https://skupper.io/examples/index.html + +#### Contents + +* [Overview](#overview) +* [Prerequisites](#prerequisites) +* [Step 1: Configure separate console sessions](#step-1-configure-separate-console-sessions) +* [Step 2: Access your clusters](#step-2-access-your-clusters) +* [Step 3: Set up your namespaces](#step-3-set-up-your-namespaces) +* [Step 4: Install Skupper in your namespaces](#step-4-install-skupper-in-your-namespaces) +* [Step 5: Check the status of your namespaces](#step-5-check-the-status-of-your-namespaces) +* [Step 6: Link your namespaces](#step-6-link-your-namespaces) +* [Step 7: Deploy the frontend and backend services](#step-7-deploy-the-frontend-and-backend-services) +* [Step 8: Expose the backend service](#step-8-expose-the-backend-service) +* [Step 9: Expose the frontend service](#step-9-expose-the-frontend-service) +* [Step 10: Test the application](#step-10-test-the-application) +* [Summary](#summary) +* [Cleaning up](#cleaning-up) +* [Next steps](#next-steps) + +## Overview + +This example is a very simple multi-service HTTP application that can +be deployed across multiple Kubernetes clusters using Skupper. + +It contains two services: + +* A backend service that exposes an `/api/hello` endpoint. It + returns greetings of the form `Hello from <pod-name> (<request-count>)`. + +* A frontend service that accepts HTTP requests, calls the backend + to fetch new greetings, and serves them to the user. + +With Skupper, you can place the backend in one cluster and the +frontend in another and maintain connectivity between the two +services without exposing the backend to the public internet.
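The backend's behavior is simple enough to sketch. The following is a hypothetical stand-in for the real `quay.io/skupper/hello-world-backend` image, written only to illustrate the `/api/hello` contract described above; the use of the `HOSTNAME` environment variable and a per-process request counter are assumptions, not the service's actual source.

```python
# Hypothetical sketch of the backend's /api/hello behavior. The real
# service is the quay.io/skupper/hello-world-backend image; this only
# illustrates the greeting contract "Hello from <pod-name> (<count>)".
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

POD_NAME = os.environ.get("HOSTNAME", "hello-world-backend-local")
REQUEST_COUNT = 0

def greeting():
    # Compose a greeting like "Hello from <pod-name> (<request-count>)"
    global REQUEST_COUNT
    REQUEST_COUNT += 1
    return "Hello from {0} ({1})".format(POD_NAME, REQUEST_COUNT)

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/api/hello":
            self.send_error(404)
            return
        body = greeting().encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Listen on the port the example exposes
    HTTPServer(("", 8080), HelloHandler).serve_forever()
```

The frontend would call this endpoint over the Skupper-provided service address and relay the greeting to the user.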
+ + + +## Prerequisites + +* The `kubectl` command-line tool, version 1.15 or later + ([installation guide][install-kubectl]) + +* The `skupper` command-line tool, the latest version ([installation + guide][install-skupper]) + +* Access to at least one Kubernetes cluster, from any provider you + choose + +[install-kubectl]: https://kubernetes.io/docs/tasks/tools/install-kubectl/ +[install-skupper]: https://skupper.io/install/index.html + +## Step 1: Configure separate console sessions + +Skupper is designed for use with multiple namespaces, typically on +different clusters. The `skupper` command uses your +[kubeconfig][kubeconfig] and current context to select the namespace +where it operates. + +[kubeconfig]: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/ + +Your kubeconfig is stored in a file in your home directory. The +`skupper` and `kubectl` commands use the `KUBECONFIG` environment +variable to locate it. + +A single kubeconfig supports only one active context per user. +Since you will be using multiple contexts at once in this +exercise, you need to create distinct kubeconfigs. + +Start a console session for each of your namespaces. Set the +`KUBECONFIG` environment variable to a different path in each +session. + +Console for _west_: + +~~~ shell +export KUBECONFIG=~/.kube/config-west +~~~ + +Console for _east_: + +~~~ shell +export KUBECONFIG=~/.kube/config-east +~~~ + +## Step 2: Access your clusters + +The methods for accessing your clusters vary by Kubernetes provider. +Find the instructions for your chosen providers and use them to +authenticate and configure access for each console session. 
See the +following links for more information: + +* [Minikube](https://skupper.io/start/minikube.html) +* [Amazon Elastic Kubernetes Service (EKS)](https://skupper.io/start/eks.html) +* [Azure Kubernetes Service (AKS)](https://skupper.io/start/aks.html) +* [Google Kubernetes Engine (GKE)](https://skupper.io/start/gke.html) +* [IBM Kubernetes Service](https://skupper.io/start/ibmks.html) +* [OpenShift](https://skupper.io/start/openshift.html) +* [More providers](https://kubernetes.io/partners/#kcsp) + +## Step 3: Set up your namespaces + +Use `kubectl create namespace` to create the namespaces you wish to +use (or use existing namespaces). Use `kubectl config set-context` to +set the current namespace for each session. + +Console for _west_: + +~~~ shell +kubectl create namespace west +kubectl config set-context --current --namespace west +~~~ + +Console for _east_: + +~~~ shell +kubectl create namespace east +kubectl config set-context --current --namespace east +~~~ + +## Step 4: Install Skupper in your namespaces + +The `skupper init` command installs the Skupper router and service +controller in the current namespace. Run the `skupper init` command +in each namespace. + +**Note:** If you are using Minikube, [you need to start `minikube +tunnel`][minikube-tunnel] before you install Skupper. + +[minikube-tunnel]: https://skupper.io/start/minikube.html#running-minikube-tunnel + +Console for _west_: + +~~~ shell +skupper init +~~~ + +Console for _east_: + +~~~ shell +skupper init +~~~ + +## Step 5: Check the status of your namespaces + +Use `skupper status` in each console to check that Skupper is +installed. + +Console for _west_: + +~~~ shell +skupper status +~~~ + +Console for _east_: + +~~~ shell +skupper status +~~~ + +You should see output like this for each namespace: + +~~~ +Skupper is enabled for namespace "" in interior mode. It is not connected to any other sites. It has no exposed services. +The site console url is: http://
<console-address>:8080 +The credentials for internal console-auth mode are held in secret: 'skupper-console-users' +~~~ + +As you move through the steps below, you can use `skupper status` at +any time to check your progress. + +## Step 6: Link your namespaces + +Creating a link requires the use of two `skupper` commands in conjunction: +`skupper token create` and `skupper link create`. + +The `skupper token create` command generates a secret token that +signifies permission to create a link. The token also carries the +link details. Then, in a remote namespace, the `skupper link create` +command uses the token to create a link to the namespace that +generated it. + +**Note:** The link token is truly a *secret*. Anyone who has the +token can link to your namespace. Make sure that only those you trust +have access to it. + +First, use `skupper token create` in one namespace to generate the +token. Then, use `skupper link create` in the other to create a link. + +Console for _west_: + +~~~ shell +skupper token create ~/west.token +~~~ + +Console for _east_: + +~~~ shell +skupper link create ~/west.token +skupper link status --wait 30 +~~~ + +If your console sessions are on different machines, you may need to +use `scp` or a similar tool to transfer the token. + +## Step 7: Deploy the frontend and backend services + +Use `kubectl create deployment` to deploy the frontend service +in `west` and the backend service in `east`. + +Console for _west_: + +~~~ shell +kubectl create deployment hello-world-frontend --image quay.io/skupper/hello-world-frontend +~~~ + +Console for _east_: + +~~~ shell +kubectl create deployment hello-world-backend --image quay.io/skupper/hello-world-backend +~~~ + +## Step 8: Expose the backend service + +We now have two namespaces linked to form a Skupper network, but +no services are exposed on it. Skupper provides the `skupper +expose` command, which selects a service from one namespace and +exposes it on all the linked namespaces.
+ +Use `skupper expose` to expose the backend service to the +frontend service. + +Console for _east_: + +~~~ shell +skupper expose deployment/hello-world-backend --port 8080 +~~~ + +Sample output: + +~~~ +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +hello-world-backend ClusterIP 10.106.92.175 8080/TCP 1m31s +~~~ + +## Step 9: Expose the frontend service + +We have established connectivity between the two namespaces and +made the backend in `east` available to the frontend in `west`. +Before we can test the application, we need external access to +the frontend. + +Use `kubectl expose` with `--type LoadBalancer` to open network +access to the frontend service. Use `kubectl get services` to +check for the service and its external IP address. + +Console for _west_: + +~~~ shell +kubectl expose deployment/hello-world-frontend --port 8080 --type LoadBalancer +kubectl get services +~~~ + +Sample output: + +~~~ +$ kubectl expose deployment/hello-world-frontend --port 8080 --type LoadBalancer +service/hello-world-frontend exposed + +$ kubectl get services +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +hello-world-backend ClusterIP 10.102.112.121 8080/TCP 30s +hello-world-frontend LoadBalancer 10.98.170.106 10.98.170.106 8080:30787/TCP 2s +skupper LoadBalancer 10.101.101.208 10.101.101.208 8080:31494/TCP 82s +skupper-router LoadBalancer 10.110.252.252 10.110.252.252 55671:32111/TCP,45671:31193/TCP 86s +skupper-router-local ClusterIP 10.96.123.13 5671/TCP 86s +~~~ + +## Step 10: Test the application + +Look up the external URL and use `curl` to send a request. + +Console for _west_: + +~~~ shell +curl -f $(kubectl get service hello-world-frontend -o jsonpath='http://{.status.loadBalancer.ingress[0].ip}:8080/') +~~~ + +Sample output: + +~~~ +I am the frontend. The backend says 'Hello from hello-world-backend-869cd94f69-wh6zt (1)'. 
+~~~ + +**Note:** If the embedded `kubectl get` command fails to get the +IP address, you can find it manually by running `kubectl get +services` and looking up the external IP of the +`hello-world-frontend` service. + +## Summary + +This example locates the frontend and backend services in different +namespaces, on different clusters. Ordinarily, this means that they +have no way to communicate unless they are exposed to the public +internet. + +Introducing Skupper into each namespace allows us to create a virtual +application network that can connect services in different clusters. +Any service exposed on the application network is represented as a +local service in all of the linked namespaces. + +The backend service is located in `east`, but the frontend service +in `west` can "see" it as if it were local. When the frontend +sends a request to the backend, Skupper forwards the request to the +namespace where the backend is running and routes the response back to +the frontend. + + + +## Cleaning up + +To remove Skupper and the other resources from this exercise, use the +following commands. + +Console for _west_: + +~~~ shell +skupper delete +kubectl delete service/hello-world-frontend +kubectl delete deployment/hello-world-frontend +~~~ + +Console for _east_: + +~~~ shell +skupper delete +kubectl delete deployment/hello-world-backend +~~~ + +## Next steps + +Check out the other [examples][examples] on the Skupper website. diff --git a/subrepos/skewer/test-example/images/entities.svg b/subrepos/skewer/test-example/images/entities.svg new file mode 100644 index 0000000..6a1ab87 --- /dev/null +++ b/subrepos/skewer/test-example/images/entities.svg @@ -0,0 +1,3 @@ + + +
+[entities.svg text residue: diagram labels only: Frontend service, Backend service, Skupper, Namespace "west", Namespace "east", Kubernetes cluster 1, Kubernetes cluster 2, Public network]
diff --git a/subrepos/skewer/test-example/images/sequence.svg b/subrepos/skewer/test-example/images/sequence.svg new file mode 100644 index 0000000..20d27c1 --- /dev/null +++ b/subrepos/skewer/test-example/images/sequence.svg @@ -0,0 +1 @@ +[sequence.svg text residue: a rendered sequence diagram of the request flow, equivalent to the sequence.txt source that follows] diff --git a/subrepos/skewer/test-example/images/sequence.txt b/subrepos/skewer/test-example/images/sequence.txt new file mode 100644 index 0000000..6d081ea --- /dev/null +++ b/subrepos/skewer/test-example/images/sequence.txt @@ -0,0 +1,22 @@ +participant Curl + +participantgroup #cce5ff eu-north +participant Frontend +participant "Skupper" as Skupper1 #lightgreen +end + +participantgroup #ffe6cc us-east +participant "Skupper" as Skupper2 #lightgreen +participant Backend #yellow +end + +abox over Skupper1 #yellow: Backend + +Curl->Frontend: GET / +Frontend->Skupper1: GET /api/hello +Skupper1->Skupper2: GET /api/hello +Skupper2->Backend: GET /api/hello +Skupper2<-Backend: "Hello 1" +Skupper1<-Skupper2: "Hello 1" +Frontend<-Skupper1: "Hello 1" +Curl<-Frontend: "Hello 1" diff --git a/subrepos/skewer/test-example/plano b/subrepos/skewer/test-example/plano new file mode 120000 index 0000000..8f8df51 --- /dev/null +++ b/subrepos/skewer/test-example/plano @@ -0,0 +1 @@ +../subrepos/plano/bin/plano \ No newline at end of file diff --git a/subrepos/skewer/test-example/python/plano.py b/subrepos/skewer/test-example/python/plano.py new file mode 120000 index 0000000..e93cddd --- /dev/null +++ b/subrepos/skewer/test-example/python/plano.py @@ -0,0 +1 @@ +../../subrepos/plano/python/plano.py \ No newline at end of file diff --git a/subrepos/skewer/test-example/python/skewer.py b/subrepos/skewer/test-example/python/skewer.py new file mode 120000 index 0000000..0534fb2 --- /dev/null +++ b/subrepos/skewer/test-example/python/skewer.py @@ -0,0 +1 @@ +../../python/skewer.py \ 
No newline at end of file diff --git a/subrepos/skewer/test-example/python/skewer.strings b/subrepos/skewer/test-example/python/skewer.strings new file mode 120000 index 0000000..2d6222d --- /dev/null +++ b/subrepos/skewer/test-example/python/skewer.strings @@ -0,0 +1 @@ +../../python/skewer.strings \ No newline at end of file diff --git a/subrepos/skewer/test-example/skewer.yaml b/subrepos/skewer/test-example/skewer.yaml new file mode 100644 index 0000000..c21466e --- /dev/null +++ b/subrepos/skewer/test-example/skewer.yaml @@ -0,0 +1,137 @@ +title: Skupper Hello World +subtitle: A minimal HTTP application deployed across Kubernetes clusters using Skupper +github_actions_url: https://github.com/skupperproject/skewer/actions/workflows/main.yaml +overview: | + This example is a very simple multi-service HTTP application that can + be deployed across multiple Kubernetes clusters using Skupper. + + It contains two services: + + * A backend service that exposes an `/api/hello` endpoint. It + returns greetings of the form `Hello from <pod-name> (<request-count>)`. + + * A frontend service that accepts HTTP requests, calls the backend + to fetch new greetings, and serves them to the user. + + With Skupper, you can place the backend in one cluster and the + frontend in another and maintain connectivity between the two + services without exposing the backend to the public internet.
+ + +prerequisites: !string prerequisites +sites: + west: + kubeconfig: ~/.kube/config-west + namespace: west + east: + kubeconfig: ~/.kube/config-east + namespace: east +steps: + - standard: configure_separate_console_sessions + - standard: access_your_clusters + - standard: set_up_your_namespaces + - standard: install_skupper_in_your_namespaces + - standard: check_the_status_of_your_namespaces + - title: Link your namespaces + preamble: !string link_your_namespaces_preamble + commands: + west: + - run: skupper token create ~/west.token + east: + - run: skupper link create ~/west.token + - run: skupper link status --wait 30 + postamble: !string link_your_namespaces_postamble + - title: Deploy the frontend and backend services + preamble: | + Use `kubectl create deployment` to deploy the frontend service + in `west` and the backend service in `east`. + commands: + west: + - run: kubectl create deployment hello-world-frontend --image quay.io/skupper/hello-world-frontend + await: [deployment/hello-world-frontend] + east: + - run: kubectl create deployment hello-world-backend --image quay.io/skupper/hello-world-backend + await: [deployment/hello-world-backend] + - title: Expose the backend service + preamble: | + We now have two namespaces linked to form a Skupper network, but + no services are exposed on it. Skupper uses the `skupper + expose` command to select a service from one namespace for + exposure on all the linked namespaces. + + Use `skupper expose` to expose the backend service to the + frontend service. + commands: + east: + - run: skupper expose deployment/hello-world-backend --port 8080 + sleep: 30 + output: | + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + hello-world-backend ClusterIP 10.106.92.175 8080/TCP 1m31s + - title: Expose the frontend service + preamble: | + We have established connectivity between the two namespaces and + made the backend in `east` available to the frontend in `west`. 
+ Before we can test the application, we need external access to + the frontend. + + Use `kubectl expose` with `--type LoadBalancer` to open network + access to the frontend service. Use `kubectl get services` to + check for the service and its external IP address. + commands: + west: + - run: kubectl expose deployment/hello-world-frontend --port 8080 --type LoadBalancer + await_external_ip: [service/hello-world-frontend] + output: | + service/hello-world-frontend exposed + - run: kubectl get services + output: | + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + hello-world-backend ClusterIP 10.102.112.121 8080/TCP 30s + hello-world-frontend LoadBalancer 10.98.170.106 10.98.170.106 8080:30787/TCP 2s + skupper LoadBalancer 10.101.101.208 10.101.101.208 8080:31494/TCP 82s + skupper-router LoadBalancer 10.110.252.252 10.110.252.252 55671:32111/TCP,45671:31193/TCP 86s + skupper-router-local ClusterIP 10.96.123.13 5671/TCP 86s + - title: Test the application + preamble: | + Look up the external URL and use `curl` to send a request. + commands: + west: + - run: "curl -f $(kubectl get service hello-world-frontend -o jsonpath='http://{.status.loadBalancer.ingress[0].ip}:8080/')" + output: | + I am the frontend. The backend says 'Hello from hello-world-backend-869cd94f69-wh6zt (1)'. + postamble: | + **Note:** If the embedded `kubectl get` command fails to get the + IP address, you can find it manually by running `kubectl get + services` and looking up the external IP of the + `hello-world-frontend` service. +summary: | + This example locates the frontend and backend services in different + namespaces, on different clusters. Ordinarily, this means that they + have no way to communicate unless they are exposed to the public + internet. + + Introducing Skupper into each namespace allows us to create a virtual + application network that can connect services in different clusters. 
+ Any service exposed on the application network is represented as a + local service in all of the linked namespaces. + + The backend service is located in `east`, but the frontend service + in `west` can "see" it as if it were local. When the frontend + sends a request to the backend, Skupper forwards the request to the + namespace where the backend is running and routes the response back to + the frontend. + + +cleaning_up: + preamble: !string cleaning_up_preamble + commands: + west: + - run: skupper delete + - run: kubectl delete service/hello-world-frontend + - run: kubectl delete deployment/hello-world-frontend + east: + - run: skupper delete + - run: kubectl delete deployment/hello-world-backend +next_steps: !string next_steps
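The `await_external_ip` option used in the steps above implies polling a service until its load balancer is assigned an address. A minimal sketch of that extraction logic, assuming the dict shape produced by `kubectl get service <name> -o json` (the helper name and structure are hypothetical, not Skewer's actual implementation):

```python
# Hypothetical helper resembling what an await_external_ip wait might
# check: pull the LoadBalancer ingress address out of parsed
# "kubectl get service <name> -o json" output. Returns None until the
# provider assigns an external IP or hostname.
def get_external_address(service: dict):
    ingress = (service.get("status", {})
                      .get("loadBalancer", {})
                      .get("ingress")) or []
    for entry in ingress:
        address = entry.get("ip") or entry.get("hostname")
        if address:
            return address
    return None
```

A caller would invoke `kubectl` in a loop, parse the JSON, and sleep between attempts until this returns a value, which is the same address the `curl` test step looks up with its jsonpath expression.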