@@ -125,7 +125,7 @@ Once you have configured the options above on all the GPU nodes in your
cluster, you can enable GPU support by deploying the following Daemonset:

```shell
- $ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.15.0-rc.1/nvidia-device-plugin.yml
+ $ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.15.0-rc.2/nvidia-device-plugin.yml
```
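If the daemonset comes up cleanly, every GPU node should begin advertising an `nvidia.com/gpu` resource. A quick sanity check, as a sketch (the pod label below is an assumption based on the static manifest and may differ in your setup):

```shell
# Check that the plugin pods are running on each GPU node
$ kubectl get pods -n kube-system -l name=nvidia-device-plugin-ds   # label is an assumption
# Check that the nodes now expose the GPU resource
$ kubectl describe node | grep -i "nvidia.com/gpu"
```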

**Note:** This is a simple static daemonset meant to demonstrate the basic
@@ -560,11 +560,11 @@ $ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
$ helm repo update
```

- Then verify that the latest release (`v0.15.0-rc.1`) of the plugin is available:
+ Then verify that the latest release (`v0.15.0-rc.2`) of the plugin is available:
```
$ helm search repo nvdp --devel
NAME                         CHART VERSION  APP VERSION  DESCRIPTION
- nvdp/nvidia-device-plugin  0.15.0-rc.1    0.15.0-rc.1  A Helm chart for ...
+ nvdp/nvidia-device-plugin  0.15.0-rc.2    0.15.0-rc.2  A Helm chart for ...
```
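To list every version published to the repo, not just the latest, `helm search repo` also accepts a `--versions` flag:

```shell
$ helm search repo nvdp --devel --versions
```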

Once this repo is updated, you can begin installing packages from it to deploy
@@ -575,7 +575,7 @@ The most basic installation command without any options is then:
helm upgrade -i nvdp nvdp/nvidia-device-plugin \
--namespace nvidia-device-plugin \
--create-namespace \
- --version 0.15.0-rc.1
+ --version 0.15.0-rc.2
```
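With the plugin deployed, a quick way to exercise it is a pod that requests the `nvidia.com/gpu` resource. A minimal sketch (the CUDA image tag is an assumption; any image with `nvidia-smi` on its path works):

```shell
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvcr.io/nvidia/cuda:12.3.2-base-ubuntu22.04  # image tag is an assumption
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1   # schedule onto a node with a free GPU
EOF
$ kubectl logs gpu-test   # once the pod completes, this should show nvidia-smi output
```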

**Note:** You only need to pass the `--devel` flag to `helm search repo`
@@ -584,7 +584,7 @@ version (e.g. `<version>-rc.1`). Full releases will be listed without this.

### Configuring the device plugin's `helm` chart

- The `helm` chart for the latest release of the plugin (`v0.15.0-rc.1`) includes
+ The `helm` chart for the latest release of the plugin (`v0.15.0-rc.2`) includes
a number of customizable values.

Prior to `v0.12.0` the most commonly used values were those that had direct
@@ -594,7 +594,7 @@ case of the original values is then to override an option from the `ConfigMap`
if desired. Both methods are discussed in more detail below.

The full set of values that can be set is documented
- [here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.15.0-rc.1/deployments/helm/nvidia-device-plugin/values.yaml).
+ [here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.15.0-rc.2/deployments/helm/nvidia-device-plugin/values.yaml).

#### Passing configuration to the plugin via a `ConfigMap`.

@@ -633,7 +633,7 @@
And deploy the device plugin via helm (pointing it at this config file and giving it a name):
```
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
- --version=0.15.0-rc.1 \
+ --version=0.15.0-rc.2 \
--namespace nvidia-device-plugin \
--create-namespace \
--set-file config.map.config=/tmp/dp-example-config0.yaml
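For reference, a minimal sketch of what a config file like `/tmp/dp-example-config0.yaml` might contain, following the plugin's config-file format (the specific option shown is a placeholder, not a recommendation):

```shell
$ cat /tmp/dp-example-config0.yaml
version: v1
flags:
  migStrategy: none   # placeholder option; set whatever your cluster needs
```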
@@ -655,7 +655,7 @@ $ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \
```
```
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
- --version=0.15.0-rc.1 \
+ --version=0.15.0-rc.2 \
--namespace nvidia-device-plugin \
--create-namespace \
--set config.name=nvidia-plugin-configs
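When multiple configs are present, individual nodes opt into a specific one via a node label. A hedged example (the label key follows the config-selection convention this README documents; `<node-name>` and `config0` are placeholders):

```shell
$ kubectl label nodes <node-name> nvidia.com/device-plugin.config=config0
```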
@@ -683,7 +683,7 @@
And redeploy the device plugin via helm (pointing it at both configs with a specified default).
```
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
- --version=0.15.0-rc.1 \
+ --version=0.15.0-rc.2 \
--namespace nvidia-device-plugin \
--create-namespace \
--set config.default=config0 \
@@ -702,7 +702,7 @@ $ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \
```
```
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
- --version=0.15.0-rc.1 \
+ --version=0.15.0-rc.2 \
--namespace nvidia-device-plugin \
--create-namespace \
--set config.default=config0 \
@@ -785,7 +785,7 @@ chart values that are commonly overridden are:
```

Please take a look in the
- [`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.15.0-rc.1/deployments/helm/nvidia-device-plugin/values.yaml)
+ [`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.15.0-rc.2/deployments/helm/nvidia-device-plugin/values.yaml)
file to see the full set of overridable parameters for the device plugin.
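The same values can also be inspected straight from the command line with standard `helm` tooling:

```shell
$ helm show values nvdp/nvidia-device-plugin --version 0.15.0-rc.2
```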

Examples of setting these options include:
@@ -794,7 +794,7 @@ Enabling compatibility with the `CPUManager` and running with a request for
100ms of CPU time and a limit of 512MB of memory.
```shell
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
- --version=0.15.0-rc.1 \
+ --version=0.15.0-rc.2 \
--namespace nvidia-device-plugin \
--create-namespace \
--set compatWithCPUManager=true \
@@ -805,7 +805,7 @@ $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
Enabling compatibility with the `CPUManager` and the `mixed` `migStrategy`
```shell
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
- --version=0.15.0-rc.1 \
+ --version=0.15.0-rc.2 \
--namespace nvidia-device-plugin \
--create-namespace \
--set compatWithCPUManager=true \
@@ -824,7 +824,7 @@ Discovery to perform this labeling.
To enable it, simply set `gfd.enabled=true` during helm install.
```
helm upgrade -i nvdp nvdp/nvidia-device-plugin \
- --version=0.15.0-rc.1 \
+ --version=0.15.0-rc.2 \
--namespace nvidia-device-plugin \
--create-namespace \
--set gfd.enabled=true
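Once the GFD pods are up, nodes should carry GPU attribute labels under the `nvidia.com/` prefix. One way to spot-check, as a sketch (`<node-name>` is a placeholder; the exact label names depend on your hardware):

```shell
$ kubectl get node <node-name> --show-labels | tr ',' '\n' | grep nvidia.com
```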
@@ -930,31 +930,31 @@ Using the default values for the flags:
$ helm upgrade -i nvdp \
--namespace nvidia-device-plugin \
--create-namespace \
- https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.15.0-rc.1.tgz
+ https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.15.0-rc.2.tgz
```
-->
## Building and Running Locally

The next sections are focused on building the device plugin locally and running it.
They are intended purely for development and testing, and not required by most users.
- They assume you are pinning to the latest release tag (i.e. `v0.15.0-rc.1`), but can
+ They assume you are pinning to the latest release tag (i.e. `v0.15.0-rc.2`), but can
easily be modified to work with any available tag or branch.

### With Docker

#### Build
Option 1, pull the prebuilt image from [Docker Hub](https://hub.docker.com/r/nvidia/k8s-device-plugin):
```shell
- $ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.15.0-rc.1
- $ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.15.0-rc.1 nvcr.io/nvidia/k8s-device-plugin:devel
+ $ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.15.0-rc.2
+ $ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.15.0-rc.2 nvcr.io/nvidia/k8s-device-plugin:devel
```
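To confirm the pull and retag worked before building against the `devel` tag:

```shell
$ docker images nvcr.io/nvidia/k8s-device-plugin
```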

Option 2, build without cloning the repository:
```shell
$ docker build \
-t nvcr.io/nvidia/k8s-device-plugin:devel \
-f deployments/container/Dockerfile.ubuntu \
- https://github.com/NVIDIA/k8s-device-plugin.git#v0.15.0-rc.1
+ https://github.com/NVIDIA/k8s-device-plugin.git#v0.15.0-rc.2
```

Option 3, if you want to modify the code: