Running a local cluster network operator for plugin development
You need somewhere to store your custom image so that the AWS cluster instances can pull it. The easiest options are Docker Hub or Quay.io. Create an account, then create a repository called "ovn-kubernetes".
The typical development CNI images are ovn-kubernetes and sdn. The ovn-kubernetes upstream repo is git@github.com:ovn-org/ovn-kubernetes.git and the downstream repo is git@github.com:openshift/ovn-kubernetes.git; the installer includes images built from downstream. The sdn repo is git@github.com:openshift/sdn.
```
git clone git@github.com:ovn-org/ovn-kubernetes.git
```
Make your changes to the source tree, then build:
```
cd ovn-kubernetes/go-controller
make
```
Assuming you are working with the upstream ovn-kubernetes git repo and using Docker Hub, you can build an image from your ovn-kubernetes changes using the following steps:
```
cd dist/images/
make fedora
docker tag ovn-kube-f:latest docker.io/(docker hub username)/ovn-kubernetes:latest
docker push docker.io/(docker hub username)/ovn-kubernetes:latest
```
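For example, with a hypothetical Docker Hub username `jdoe`, the tag and push pair would look like this (the username is the only substitution):
```
docker tag ovn-kube-f:latest docker.io/jdoe/ovn-kubernetes:latest
docker push docker.io/jdoe/ovn-kubernetes:latest
```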
- Create your combined pull secrets as described below
- Make your changes to your local openshift/ovn-kubernetes repo
- Push your changes to a new PR
- Click the 'details' link on the prow/images CI job in your PR
- Refresh until the build log appears, and look for the line `Tagged shared images from ocp/4.4:${component}, images will be pullable from registry.svc.ci.openshift.org/XXXXXXX/stable:${component}`
- Run `sudo podman pull --authfile /path/to/combined-pull-secrets registry.svc.ci.openshift.org/XXXXXXX/stable:ovn-kubernetes`, substituting the right `ci-op-xxxx` namespace from step 5.
- Push the image to dockerhub with `sudo podman push` (see the sketch after this list)
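A minimal sketch of that push, retagging the pulled CI image into your own Docker Hub repository (the `(docker hub username)` placeholder and the prior `podman login` are assumptions about your setup):
```
sudo podman login docker.io
sudo podman tag registry.svc.ci.openshift.org/XXXXXXX/stable:ovn-kubernetes \
    docker.io/(docker hub username)/ovn-kubernetes:latest
sudo podman push docker.io/(docker hub username)/ovn-kubernetes:latest
```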
- Create your combined pull secrets as described below
- Make your changes to your local openshift/ovn-kubernetes repo
- Push your changes to a new PR
- Click the 'details' link on the prow/images CI job in your PR
- Refresh until the build log appears, and look for the line `Tagged shared images from ocp/4.4:${component}, images will be pullable from registry.svc.ci.openshift.org/XXXXXXX/stable:${component}`
- Run `oc registry login`
- Run `docker login --username $(oc whoami) --password $(oc whoami -t) registry.svc.ci.openshift.org`
- Run `docker pull registry.svc.ci.openshift.org/XXXXXXX/stable:ovn-kubernetes`, substituting the right `ci-op-xxxx` namespace from step 5.
- Push the image to dockerhub with `docker push` (see the sketch after this list)
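As with the podman variant, a sketch of the final retag and push (the Docker Hub repository name is an assumption; use whatever repository you created earlier):
```
docker tag registry.svc.ci.openshift.org/XXXXXXX/stable:ovn-kubernetes \
    docker.io/(docker hub username)/ovn-kubernetes:latest
docker push docker.io/(docker hub username)/ovn-kubernetes:latest
```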
The installer handles cluster creation in AWS, GCP, Azure, and more.
- Go to https://openshift-release.svc.ci.openshift.org/ and find an installer in the "4.3" stream that is shown in green (meaning it has passed CI). For example https://openshift-release.svc.ci.openshift.org/releasestream/4.3.0-0.ci/release/4.3.0-0.ci-2019-11-22-122829
The selected installer includes installer and client images for Linux and Mac. Always grab both the installer and the related client tools. You will frequently need to upgrade the installer and tools, so put them in a convenient directory and add that directory to your $PATH (in ~/.bashrc).
The nightly installers are built with the official OCP images, and the CI builds are built with the images used in CI testing. Aside from the cluster-network-operator, which we will be building and testing, all the images are part of the chosen installer, so if you need a specific image you need an installer that has it.
- Click the "Download the installer" link at the top of the page
- Wait a while
- Click the link for the "openshift-install-linux" tarball (e.g. "openshift-install-linux-4.2.0-0.ci-2019-06-25-135202.tar.gz") to download it
- Extract the tarball somewhere for later, like /tmp
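A sketch of unpacking the installer and client tarballs into a directory on your $PATH (the ~/bin location, tarball names, and member names are assumptions; match whatever you actually downloaded):
```
mkdir -p ~/bin
# the installer tarball contains the openshift-install binary;
# the matching client tarball contains oc and kubectl
tar -xzf /tmp/openshift-install-linux-*.tar.gz -C ~/bin openshift-install
tar -xzf /tmp/openshift-client-linux-*.tar.gz -C ~/bin oc kubectl
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
```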
Pull Secrets are specific to your user and allow your cluster to download the container images for all the OpenShift components. There are two pull secrets to get: the internal-only OpenShift CI one (used by some installer bootstrap images) and the general OpenShift Developer one. The pull secrets change periodically, so if it's been a few days since you last updated them, refresh them before attempting to spawn the cluster.
The generic pull secrets are the same for all clusters. The "AWS" choice is as good as any other.
- go to the OpenShift portal secrets page
- Click the big "AWS" box
- Click the "Installer Provisioned Infrastructure" box
- Click the "Download Pull Secret" box and save the pull secret to
/tmp/pull-secret
- go to https://api.ci.openshift.org/console/
- Click the (?) in the upper right
- click on "Command Line Tools" in the menu that drops down
- Click the Clipboard icon at the end of the "oc login https://..." box
- Paste that command from the clipboard into a terminal and run it
- Run `oc registry login --to=/tmp/pull-secret` to dump the pull secret to a file
Your CI pull secret is also now in /tmp/pull-secret combined with the generic OpenShift pull secrets that we'll feed to the installer.
Once the pull secret file has been combined, you can pass it to the installer when it asks, simply by copying the output of `jq -c < /tmp/pull-secret`.
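Putting the two pull-secret steps together, a minimal sketch (paths are the ones used above; the explicit `.` filter is just a more portable spelling of the same jq command):
```
# /tmp/pull-secret already holds the generic pull secret downloaded from the portal;
# after logging in to the CI cluster, merge the CI registry credentials into it
oc registry login --to=/tmp/pull-secret
# emit the combined secret as one line of JSON to paste at the installer's Pull Secret prompt
jq -c . /tmp/pull-secret
```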
```
git clone git@github.com:openshift/cluster-network-operator.git
cd cluster-network-operator
hack/build-go.sh
```
Whenever you make changes in the Cluster Network Operator you must re-run hack/build-go.sh to include the changes in testing.
https://mojo.redhat.com/docs/DOC-1081313#jive_content_id_Amazon_AWS
https://mojo.redhat.com/docs/DOC-1081313#jive_content_id_GCE The Service Account is required the first time you create a GCP cluster. It is cached for subsequent use.
The first time you run the installer it will ask a series of questions (one of them is the cluster host). When it is done there will be an 'install-config.yaml.bak.XXXXXX' file in the cluster temporary directory that you gave to hack/run-locally.sh. You can copy this 'install-config.yaml.bak.XXXXXX' file and pass it to hack/run-locally.sh next time to save steps.
The installer puts its work files in a supplied directory (which it will create if not present). Otherwise, it puts them in the current directory.
The default CNI plugin is sdn. The -n option can be used to select ovn.
- Run `hack/run-locally.sh -c (cluster temp dir) -n ovn -m docker.io/(docker hub username)/ovn-kubernetes:latest`, substituting as necessary.
- hack/run-locally.sh runs the openshift-installer for you, so now you get to answer some questions:
  - SSH Public Key - this allows you to SSH to the bootstrap node for debugging; pick one.
  - Platform - pick AWS or gcp
  - Region - pick something close to you (the default selection tends to get oversubscribed)
  - Base Domain - pick devcluster.openshift.com
  - Cluster Name - pick something unique like "dcbw-ovntest" (it should start with your Red Hat username).
  - Pull Secret - paste the contents of the combined `/tmp/pull-secret` we created earlier
- Your install-config.yaml.bak.XXXXX file will now be in your (cluster temp dir). Copy this file somewhere for future use.
It is convenient to put the kubeconfig into the KUBECONFIG environment variable; this eliminates the need for the --config option:
```
export KUBECONFIG=/(cluster temp dir)/auth/kubeconfig
```
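With KUBECONFIG set, a quick way to check on the cluster as it converges (a sketch; these are standard oc commands rather than anything specific to this workflow):
```
oc get nodes
oc get clusteroperators   # wait for the operators, including "network", to report Available
```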
- You can `tail -f /(cluster temp dir)/.openshift-install.log` to watch progress. After it says "
- List pods: `oc get pods --all-namespaces` and look for ovn-kubernetes related pods. You should see them running.
- Get logs from ovn-kubernetes: `/path/to/oc --config /(cluster temp dir)/auth/kubeconfig logs -n openshift-ovn-kubernetes (ovn-kubernetes pod name)` and it will yell at you to pick a container. Pick one and repeat the previous command but add `-c (container name)` to the end to get the logs.
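For example, assuming `oc get pods` listed a pod named something like `ovnkube-node-abc12` (the pod and container names here are purely illustrative; use whatever the error message actually lists):
```
oc get pods -n openshift-ovn-kubernetes
oc logs -n openshift-ovn-kubernetes ovnkube-node-abc12 -c ovnkube-node
```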
AWS cluster:
You can SSH to the bootstrap node if things don't seem to be coming up after a while. Look in /(cluster temp dir)/terraform.tfstate for the aws_instance type with the name "bootstrap". About 20 lines below you'll see "public_ip"; copy that IP and `ssh -i /path/to/openshift-dev.pem core@(public IP)`. You get openshift-dev.pem from the shared secrets repository. Then `journalctl -b -f -u bootkube.service` and take a look at the errors.
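A sketch of fishing the bootstrap IP out of the state file and following the bootstrap logs (the 25-line grep window is a guess based on the "about 20 lines below" note above):
```
grep -A 25 '"bootstrap"' "/(cluster temp dir)/terraform.tfstate" | grep '"public_ip"'
ssh -i /path/to/openshift-dev.pem core@(public IP)
# on the bootstrap node:
journalctl -b -f -u bootkube.service
```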
GCP cluster:
You can SSH to the bootstrap node if things don't seem to be coming up after a while. Look in /(cluster temp dir)/terraform.tfstate for the first "access_config": a few lines below is "nat_ip": "35.221.35.206", which is the public IP address of the bootstrap node. Then `ssh -i ~/.ssh/(rsa key) core@(public IP)`.
On the bootstrap node, `oc get nodes -o wide` shows the internal IP of the nodes. Copy the rsa key you selected in the cluster create into ~/.ssh.
From the bootstrap node you can ssh into the master nodes: `ssh -i (rsa key) core@(internal node ip)`
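Alternatively, instead of copying the private key onto the bootstrap node, you can forward your SSH agent from your workstation (a sketch; substitute your own key and addresses):
```
ssh-add ~/.ssh/(rsa key)
ssh -A core@(bootstrap public IP)
# on the bootstrap node, the forwarded agent authenticates the hop to a master:
ssh core@(internal node ip)
```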
- Ctrl+C hack/run-locally.sh
- Run `/path/to/openshift-install --dir (cluster temp dir) destroy cluster`
- Wait a long time
Since you cached the install-config you can save yourself a lot of time. Now all you need to do is:
- Run `PATH=/path/to/oc:$PATH hack/run-locally.sh -c (cluster temp dir) -i /path/to/openshift-install -n ovn -m docker.io/(docker hub username)/ovn-kubernetes:latest -f /path/to/install-config.yaml`, substituting as necessary.
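Putting it all together, a typical iteration might look like this (paths, the Docker Hub username, and the cluster temp dir are all placeholders to substitute):
```
# rebuild the operator after local changes
cd /path/to/cluster-network-operator
hack/build-go.sh

# rebuild and push a fresh ovn-kubernetes image
cd /path/to/ovn-kubernetes/dist/images
make fedora
docker tag ovn-kube-f:latest docker.io/(docker hub username)/ovn-kubernetes:latest
docker push docker.io/(docker hub username)/ovn-kubernetes:latest

# recreate the cluster, reusing the cached install-config
cd /path/to/cluster-network-operator
PATH=/path/to/oc:$PATH hack/run-locally.sh -c (cluster temp dir) -i /path/to/openshift-install \
    -n ovn -m docker.io/(docker hub username)/ovn-kubernetes:latest -f /path/to/install-config.yaml
```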