@@ -6,9 +6,45 @@ title: Spark on Kubernetes Integration Tests
 # Running the Kubernetes Integration Tests

 Note that the integration test framework is currently being heavily revised and
-is subject to change.
+is subject to change. Note that currently the integration tests only run with Java 8.

-Note that currently the integration tests only run with Java 8.
+As shorthand to run the tests against any given cluster, you can use the `e2e/runner.sh` script.
+The script assumes that you have a functioning Kubernetes cluster (1.6+) with `kubectl`
+configured to access it. The master URL of the currently configured cluster on your
+machine can be discovered as follows:
+
+```
+$ kubectl cluster-info
+
+Kubernetes master is running at https://xyz
+```
+
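If you need just the URL itself (for example, to pass along to other tooling), one way is to filter the `cluster-info` output with `sed`. This is only a sketch: the sample line below stands in for a live cluster, and against a real one you would pipe `kubectl cluster-info` instead.

```shell
# Sketch: extract the master URL from `kubectl cluster-info`-style output.
# The sample line stands in for a live cluster; with a real cluster you
# would pipe `kubectl cluster-info` into sed instead.
sample='Kubernetes master is running at https://xyz'
printf '%s\n' "$sample" | sed -n 's/^Kubernetes master is running at //p'
# → https://xyz
```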
+If you want to use a local [minikube](https://github.com/kubernetes/minikube) cluster,
+the minimum tested version is 0.23.0, with the kube-dns addon enabled,
+and the recommended configuration is 3 CPUs and 4G of memory. There is also a wrapper
+script for running on minikube, `e2e/e2e-minikube.sh`, for testing the apache/spark repo
+specifically.
+
+```
+$ minikube start --memory 4000 --cpus 3
+```
+
+If you're using a non-local cluster, you must provide an image repository
+to which you have write access, using the `-i` option, in order to store Docker images
+generated during the test.
+
+Example usages of the script:
+
+```
+$ ./e2e/runner.sh -m https://xyz -i docker.io/foxish -d cloud
+$ ./e2e/runner.sh -m https://xyz -i test -d minikube
+$ ./e2e/runner.sh -m https://xyz -i test -r https://github.com/my-spark/spark -d minikube
+```
+
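In the examples above, `-m` supplies the master URL and `-i` the image repository; from the examples, `-d` appears to select the deployment target (`cloud` or `minikube`) and `-r` an alternate Spark repository to test (these last two readings are inferred, not documented here). Purely as an illustration of that flag shape, and not the actual parsing logic of `e2e/runner.sh`, a `getopts` loop over those options looks like:

```shell
#!/bin/sh
# Illustrative only: a getopts sketch of the flags shown in the runner.sh
# examples above (-m master URL, -i image repo, -d deployment, -r spark repo).
# The real e2e/runner.sh may parse or name its options differently.
parse_flags() {
  master='' image='' deploy='' repo=''
  OPTIND=1
  while getopts 'm:i:d:r:' opt; do
    case "$opt" in
      m) master=$OPTARG ;;
      i) image=$OPTARG ;;
      d) deploy=$OPTARG ;;
      r) repo=$OPTARG ;;
    esac
  done
  echo "master=$master image=$image deploy=$deploy repo=$repo"
}

parse_flags -m https://xyz -i test -d minikube
# → master=https://xyz image=test deploy=minikube repo=
```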
+# Detailed Documentation
+
+## Running the tests using Maven

 Running the integration tests requires a Spark distribution package tarball that
 contains Spark jars, submission clients, etc. You can download a tarball from
@@ -40,7 +76,7 @@ $ mvn clean integration-test \
   -Dspark-distro-tgz=spark/spark-2.3.0-SNAPSHOT-bin.tgz
 ```
4278
-# Running against an arbitrary cluster
+## Running against an arbitrary cluster
4480
 In order to run against any cluster, use the following:
 ```sh
@@ -49,7 +85,7 @@ $ mvn clean integration-test \
   -DextraScalaTestArgs="-Dspark.kubernetes.test.master=k8s://https://<master> -Dspark.docker.test.driverImage=<driver-image> -Dspark.docker.test.executorImage=<executor-image>"
 ```
5187
-# Preserve the Minikube VM
+## Preserve the Minikube VM
5389
 The integration tests make use of
 [Minikube](https://github.com/kubernetes/minikube), which fires up a virtual
@@ -64,7 +100,7 @@ $ mvn clean integration-test \
   -DextraScalaTestArgs=-Dspark.docker.test.persistMinikube=true
 ```
66102
-# Reuse the previous Docker images
+## Reuse the previous Docker images
68104
 The integration tests build a number of Docker images, which takes some time.
 By default, the images are built every time the tests run. You may want to skip