@@ -162,7 +162,7 @@ Once you have configured the options above on all the GPU nodes in your
cluster, you can enable GPU support by deploying the following Daemonset:

```shell
-$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.4/nvidia-device-plugin.yml
+$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.15.0-rc.1/nvidia-device-plugin.yml
```

**Note:** This is a simple static daemonset meant to demonstrate the basic
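Once the plugin daemonset is running, a quick way to exercise it is a pod that requests the `nvidia.com/gpu` extended resource; a minimal sketch (the pod name and CUDA sample image are illustrative, not taken from this diff):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test  # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: cuda-container
      # any CUDA-capable image works; this sample image is an assumption
      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
      resources:
        limits:
          nvidia.com/gpu: 1  # scheduled onto a node advertising GPUs via the plugin
```

If the pod schedules and completes, the device plugin is advertising GPUs correctly.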
@@ -590,11 +590,11 @@ $ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
$ helm repo update
```

-Then verify that the latest release (`v0.14.4`) of the plugin is available:
+Then verify that the latest release (`v0.15.0-rc.1`) of the plugin is available:
```
$ helm search repo nvdp --devel
NAME                       CHART VERSION  APP VERSION  DESCRIPTION
-nvdp/nvidia-device-plugin  0.14.4         0.14.4       A Helm chart for ...
+nvdp/nvidia-device-plugin  0.15.0-rc.1    0.15.0-rc.1  A Helm chart for ...
```

Once this repo is updated, you can begin installing packages from it to deploy
@@ -605,7 +605,7 @@ The most basic installation command without any options is then:
helm upgrade -i nvdp nvdp/nvidia-device-plugin \
  --namespace nvidia-device-plugin \
  --create-namespace \
-  --version 0.14.4
+  --version 0.15.0-rc.1
```

**Note:** You only need to pass the `--devel` flag to `helm search repo`
@@ -614,7 +614,7 @@ version (e.g. `<version>-rc.1`). Full releases will be listed without this.

### Configuring the device plugin's `helm` chart

-The `helm` chart for the latest release of the plugin (`v0.14.4`) includes
+The `helm` chart for the latest release of the plugin (`v0.15.0-rc.1`) includes
a number of customizable values.

Prior to `v0.12.0` the most commonly used values were those that had direct
@@ -624,7 +624,7 @@ case of the original values is then to override an option from the `ConfigMap`
if desired. Both methods are discussed in more detail below.

The full set of values that can be set are found
-[here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.14.4/deployments/helm/nvidia-device-plugin/values.yaml).
+[here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.15.0-rc.1/deployments/helm/nvidia-device-plugin/values.yaml).

#### Passing configuration to the plugin via a `ConfigMap`.
@@ -663,7 +663,7 @@
And deploy the device plugin via helm (pointing it at this config file and giving it a name):
```
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-  --version=0.14.4 \
+  --version=0.15.0-rc.1 \
  --namespace nvidia-device-plugin \
  --create-namespace \
  --set-file config.map.config=/tmp/dp-example-config0.yaml
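For context, the config file referenced above (`/tmp/dp-example-config0.yaml`) follows the plugin's versioned config format; a minimal sketch, assuming the `v1` schema with illustrative flag values:

```yaml
version: v1
flags:
  migStrategy: "none"    # how (or whether) to expose MIG devices
  failOnInitError: true  # fail hard if the plugin cannot initialize
```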
@@ -685,7 +685,7 @@ $ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \
```
```
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-  --version=0.14.4 \
+  --version=0.15.0-rc.1 \
  --namespace nvidia-device-plugin \
  --create-namespace \
  --set config.name=nvidia-plugin-configs
@@ -713,7 +713,7 @@
And redeploy the device plugin via helm (pointing it at both configs with a specified default).
```
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-  --version=0.14.4 \
+  --version=0.15.0-rc.1 \
  --namespace nvidia-device-plugin \
  --create-namespace \
  --set config.default=config0 \
@@ -732,7 +732,7 @@ $ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \
```
```
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-  --version=0.14.4 \
+  --version=0.15.0-rc.1 \
  --namespace nvidia-device-plugin \
  --create-namespace \
  --set config.default=config0 \
@@ -815,7 +815,7 @@ chart values that are commonly overridden are:
```

Please take a look in the
-[`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.14.4/deployments/helm/nvidia-device-plugin/values.yaml)
+[`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.15.0-rc.1/deployments/helm/nvidia-device-plugin/values.yaml)
file to see the full set of overridable parameters for the device plugin.

Examples of setting these options include:
@@ -824,7 +824,7 @@ Enabling compatibility with the `CPUManager` and running with a request for
100ms of CPU time and a limit of 512MB of memory.
```shell
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-  --version=0.14.4 \
+  --version=0.15.0-rc.1 \
  --namespace nvidia-device-plugin \
  --create-namespace \
  --set compatWithCPUManager=true \
@@ -835,7 +835,7 @@ $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
Enabling compatibility with the `CPUManager` and the `mixed` `migStrategy`:
```shell
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-  --version=0.14.4 \
+  --version=0.15.0-rc.1 \
  --namespace nvidia-device-plugin \
  --create-namespace \
  --set compatWithCPUManager=true \
@@ -854,7 +854,7 @@ Discovery to perform this labeling.
To enable it, simply set `gfd.enabled=true` during helm install.
```
helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-  --version=0.14.4 \
+  --version=0.15.0-rc.1 \
  --namespace nvidia-device-plugin \
  --create-namespace \
  --set gfd.enabled=true
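With `gfd.enabled=true`, GPU Feature Discovery attaches GPU attribute labels under the `nvidia.com/` prefix to each GPU node. An illustrative sample of such labels (the names follow GFD's convention; the values depend entirely on the node's hardware and are not taken from this diff):

```yaml
# Sample GFD node labels -- values here are illustrative
nvidia.com/gpu.count: "2"
nvidia.com/gpu.product: "Tesla-V100-SXM2-16GB"
nvidia.com/gpu.memory: "16384"
nvidia.com/cuda.driver.major: "535"
```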
@@ -960,31 +960,31 @@ Using the default values for the flags:
$ helm upgrade -i nvdp \
  --namespace nvidia-device-plugin \
  --create-namespace \
-  https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.14.4.tgz
+  https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.15.0-rc.1.tgz
```
-->
## Building and Running Locally

The next sections are focused on building the device plugin locally and running it.
They are intended purely for development and testing, and are not required by most users.
-It assumes you are pinning to the latest release tag (i.e. `v0.14.4`), but can
+It assumes you are pinning to the latest release tag (i.e. `v0.15.0-rc.1`), but can
easily be modified to work with any available tag or branch.

### With Docker

#### Build
Option 1, pull the prebuilt image from [Docker Hub](https://hub.docker.com/r/nvidia/k8s-device-plugin):
```shell
-$ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.14.4
-$ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.14.4 nvcr.io/nvidia/k8s-device-plugin:devel
+$ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.15.0-rc.1
+$ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.15.0-rc.1 nvcr.io/nvidia/k8s-device-plugin:devel
```

Option 2, build without cloning the repository:
```shell
$ docker build \
  -t nvcr.io/nvidia/k8s-device-plugin:devel \
  -f deployments/container/Dockerfile.ubuntu \
-  https://github.com/NVIDIA/k8s-device-plugin.git#v0.14.4
+  https://github.com/NVIDIA/k8s-device-plugin.git#v0.15.0-rc.1
```

Option 3, if you want to modify the code: