@@ -147,7 +147,7 @@ Once you have configured the options above on all the GPU nodes in your
cluster, you can enable GPU support by deploying the following DaemonSet:

```shell
- kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.17.1/deployments/static/nvidia-device-plugin.yml
+ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.17.2/deployments/static/nvidia-device-plugin.yml
```
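Once the DaemonSet is running and nodes advertise the `nvidia.com/gpu` resource, workloads request GPUs through their resource limits. Below is a minimal sketch of such a pod; the pod name and CUDA sample image are illustrative assumptions, not part of this change:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example                  # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda12.5.0  # assumed sample image; substitute any CUDA workload
      resources:
        limits:
          nvidia.com/gpu: 1          # request one GPU from the device plugin
```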

**Note:** This is a simple static DaemonSet meant to demonstrate the basic
@@ -639,12 +639,12 @@ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
helm repo update
```

- Then verify that the latest release (`v0.17.1`) of the plugin is available:
+ Then verify that the latest release (`v0.17.2`) of the plugin is available:

```shell
$ helm search repo nvdp --devel
NAME                         CHART VERSION  APP VERSION  DESCRIPTION
- nvdp/nvidia-device-plugin  0.17.1         0.17.1       A Helm chart for ...
+ nvdp/nvidia-device-plugin  0.17.2         0.17.2       A Helm chart for ...
```

Once this repo is updated, you can begin installing packages from it to deploy
@@ -656,7 +656,7 @@ The most basic installation command without any options is then:
helm upgrade -i nvdp nvdp/nvidia-device-plugin \
--namespace nvidia-device-plugin \
--create-namespace \
- --version 0.17.1
+ --version 0.17.2
```
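After the install completes, a generic way to confirm that the plugin pods came up in the target namespace (an illustrative check, not part of the chart documentation):

```shell
# List the device-plugin pods created by the chart's DaemonSet (illustrative check).
kubectl get pods -n nvidia-device-plugin
```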

**Note:** You only need to pass the `--devel` flag to `helm search repo`
@@ -665,7 +665,7 @@ version (e.g. `<version>-rc.1`). Full releases will be listed without this.

### Configuring the device plugin's `helm` chart

- The `helm` chart for the latest release of the plugin (`v0.17.1`) includes
+ The `helm` chart for the latest release of the plugin (`v0.17.2`) includes
a number of customizable values.

Prior to `v0.12.0` the most commonly used values were those that had direct
@@ -675,7 +675,7 @@ case of the original values is then to override an option from the `ConfigMap`
if desired. Both methods are discussed in more detail below.

The full set of values that can be set is found
- [here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.17.1/deployments/helm/nvidia-device-plugin/values.yaml).
+ [here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.17.2/deployments/helm/nvidia-device-plugin/values.yaml).

#### Passing configuration to the plugin via a `ConfigMap`

@@ -718,7 +718,7 @@ And deploy the device plugin via helm (pointing it at this config file and givin

```shell
helm upgrade -i nvdp nvdp/nvidia-device-plugin \
- --version=0.17.1 \
+ --version=0.17.2 \
--namespace nvidia-device-plugin \
--create-namespace \
--set-file config.map.config=/tmp/dp-example-config0.yaml
@@ -743,7 +743,7 @@ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \

```shell
helm upgrade -i nvdp nvdp/nvidia-device-plugin \
- --version=0.17.1 \
+ --version=0.17.2 \
--namespace nvidia-device-plugin \
--create-namespace \
--set config.name=nvidia-plugin-configs
@@ -773,7 +773,7 @@ And redeploy the device plugin via helm (pointing it at both configs with a spec

```shell
helm upgrade -i nvdp nvdp/nvidia-device-plugin \
- --version=0.17.1 \
+ --version=0.17.2 \
--namespace nvidia-device-plugin \
--create-namespace \
--set config.default=config0 \
@@ -795,7 +795,7 @@ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \

```shell
helm upgrade -i nvdp nvdp/nvidia-device-plugin \
- --version=0.17.1 \
+ --version=0.17.2 \
--namespace nvidia-device-plugin \
--create-namespace \
--set config.default=config0 \
@@ -881,7 +881,7 @@ runtimeClassName:
```

Please take a look at the
- [`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.17.1/deployments/helm/nvidia-device-plugin/values.yaml)
+ [`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.17.2/deployments/helm/nvidia-device-plugin/values.yaml)
file to see the full set of overridable parameters for the device plugin.

Examples of setting these options include:
@@ -891,7 +891,7 @@ Enabling compatibility with the `CPUManager` and running with a request for

```shell
helm upgrade -i nvdp nvdp/nvidia-device-plugin \
- --version=0.17.1 \
+ --version=0.17.2 \
--namespace nvidia-device-plugin \
--create-namespace \
--set compatWithCPUManager=true \
@@ -903,7 +903,7 @@ Enabling compatibility with the `CPUManager` and the `mixed` `migStrategy`.

```shell
helm upgrade -i nvdp nvdp/nvidia-device-plugin \
- --version=0.17.1 \
+ --version=0.17.2 \
--namespace nvidia-device-plugin \
--create-namespace \
--set compatWithCPUManager=true \
@@ -922,7 +922,7 @@ To enable it, simply set `gfd.enabled=true` during helm install.

```shell
helm upgrade -i nvdp nvdp/nvidia-device-plugin \
- --version=0.17.1 \
+ --version=0.17.2 \
--namespace nvidia-device-plugin \
--create-namespace \
--set gfd.enabled=true
@@ -980,13 +980,13 @@ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
helm repo update
```

- Then verify that the latest release (`v0.17.1`) of the plugin is available
+ Then verify that the latest release (`v0.17.2`) of the plugin is available
(Note that this includes the GFD chart):

```shell
helm search repo nvdp --devel
NAME                         CHART VERSION  APP VERSION  DESCRIPTION
- nvdp/nvidia-device-plugin  0.17.1         0.17.1       A Helm chart for ...
+ nvdp/nvidia-device-plugin  0.17.2         0.17.2       A Helm chart for ...
```

Once this repo is updated, you can begin installing packages from it to deploy
@@ -996,7 +996,7 @@ The most basic installation command without any options is then:

```shell
helm upgrade -i nvdp nvdp/nvidia-device-plugin \
- --version 0.17.1 \
+ --version 0.17.2 \
--namespace gpu-feature-discovery \
--create-namespace \
--set devicePlugin.enabled=false
@@ -1007,7 +1007,7 @@ the default namespace.

```shell
helm upgrade -i nvdp nvdp/nvidia-device-plugin \
- --version=0.17.1 \
+ --version=0.17.2 \
--set allowDefaultNamespace=true \
--set nfd.enabled=false \
--set migStrategy=mixed \
@@ -1031,14 +1031,14 @@ Using the default values for the flags:
helm upgrade -i nvdp \
--namespace nvidia-device-plugin \
--create-namespace \
- https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.17.1.tgz
+ https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.17.2.tgz
```
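Whichever deployment variant you choose, one generic way to confirm that GPUs are now being advertised is to inspect a node's resources; this is an illustrative check, not part of the chart documentation, and the node name is a placeholder:

```shell
# Look for the nvidia.com/gpu resource on a GPU node (illustrative check).
kubectl describe node <node-name> | grep -i "nvidia.com/gpu"
```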

## Building and Running Locally

The next sections are focused on building the device plugin locally and running it.
This is intended purely for development and testing, and is not required by most users.
- It assumes you are pinning to the latest release tag (i.e. `v0.17.1`), but can
+ It assumes you are pinning to the latest release tag (i.e. `v0.17.2`), but can
easily be modified to work with any available tag or branch.

### With Docker
@@ -1048,8 +1048,8 @@ easily be modified to work with any available tag or branch.
Option 1, pull the prebuilt image from [Docker Hub](https://hub.docker.com/r/nvidia/k8s-device-plugin):

```shell
- docker pull nvcr.io/nvidia/k8s-device-plugin:v0.17.1
- docker tag nvcr.io/nvidia/k8s-device-plugin:v0.17.1 nvcr.io/nvidia/k8s-device-plugin:devel
+ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.17.2
+ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.17.2 nvcr.io/nvidia/k8s-device-plugin:devel
```

Option 2, build without cloning the repository:
@@ -1058,7 +1058,7 @@ Option 2, build without cloning the repository:
docker build \
-t nvcr.io/nvidia/k8s-device-plugin:devel \
-f deployments/container/Dockerfile.ubuntu \
- https://github.com/NVIDIA/k8s-device-plugin.git#v0.17.1
+ https://github.com/NVIDIA/k8s-device-plugin.git#v0.17.2
```

Option 3, if you want to modify the code:
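A plausible sketch of that flow, assuming the repository is cloned locally and built with the same Ubuntu Dockerfile used in Option 2:

```shell
# Clone the repository and build the image from the local checkout (sketch; paths assumed).
git clone https://github.com/NVIDIA/k8s-device-plugin.git && cd k8s-device-plugin
docker build \
    -t nvcr.io/nvidia/k8s-device-plugin:devel \
    -f deployments/container/Dockerfile.ubuntu \
    .
```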