Module: github.com/NVIDIA/gpu-feature-discovery (v0.8.2)
Repository: https://github.com/nvidia/gpu-feature-discovery.git
Documentation: pkg.go.dev


# README

NVIDIA GPU feature discovery



Overview

NVIDIA GPU Feature Discovery for Kubernetes is a software component that allows you to automatically generate labels for the set of GPUs available on a node. It leverages the Node Feature Discovery to perform this labeling.

Beta Version

This tool should be considered beta until it reaches v1.0.0. As such, we may break the API before v1.0.0, but we will set up a deprecation policy to ease the transition.

Prerequisites

The list of prerequisites for running the NVIDIA GPU Feature Discovery is described below:

  • nvidia-docker version > 2.0 (see how to install and its prerequisites)
  • docker configured with nvidia as the default runtime.
  • Kubernetes version >= 1.10
  • NVIDIA device plugin for Kubernetes (see how to setup)
  • NFD deployed on each node you want to label with the local source configured

Quick Start

The following assumes you have at least one node in your cluster with GPUs and the standard NVIDIA drivers have already been installed on it.

Node Feature Discovery (NFD)

The first step is to make sure that Node Feature Discovery is running on every node you want to label. NVIDIA GPU Feature Discovery uses the local source, so be sure to mount the required volumes. See https://github.com/kubernetes-sigs/node-feature-discovery for more details.

You also need to configure the Node Feature Discovery to only expose vendor IDs in the PCI source. To do so, please refer to the Node Feature Discovery documentation.

The following command will deploy NFD with the minimum required set of parameters to run gpu-feature-discovery.

kubectl apply -f https://raw.githubusercontent.com/NVIDIA/gpu-feature-discovery/v0.8.2/deployments/static/nfd.yaml

Note: This is a simple static daemonset meant to demonstrate the basic features required of node-feature-discovery in order to successfully run gpu-feature-discovery. Please see the instructions below for Deployment via helm when deploying in a production setting.

Preparing your GPU Nodes

Be sure that nvidia-docker2 is installed on your GPU nodes and Docker default runtime is set to nvidia. See https://github.com/NVIDIA/nvidia-docker/wiki/Advanced-topics#default-runtime.

Deploy NVIDIA GPU Feature Discovery (GFD)

The next step is to run NVIDIA GPU Feature Discovery on each node as a Daemonset or as a Job.

Daemonset

kubectl apply -f https://raw.githubusercontent.com/NVIDIA/gpu-feature-discovery/v0.8.2/deployments/static/gpu-feature-discovery-daemonset.yaml

Note: This is a simple static daemonset meant to demonstrate the basic features required of gpu-feature-discovery. Please see the instructions below for Deployment via helm when deploying in a production setting.

Job

You must change the NODE_NAME value in the template to match the name of the node you want to label:

$ export NODE_NAME=<your-node-name>
$ curl https://raw.githubusercontent.com/NVIDIA/gpu-feature-discovery/v0.8.2/deployments/static/gpu-feature-discovery-job.yaml.template \
    | sed "s/NODE_NAME/${NODE_NAME}/" > gpu-feature-discovery-job.yaml
$ kubectl apply -f gpu-feature-discovery-job.yaml

Note: This method should only be used for testing and not deployed in a production setting.

Verifying Everything Works

With both NFD and GFD deployed and running, you should now be able to see GPU related labels appearing on any nodes that have GPUs installed on them.

$ kubectl get nodes -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Node
  metadata:
    ...

    labels:
      nvidia.com/cuda.driver.major: "455"
      nvidia.com/cuda.driver.minor: "06"
      nvidia.com/cuda.driver.rev: ""
      nvidia.com/cuda.runtime.major: "11"
      nvidia.com/cuda.runtime.minor: "1"
      nvidia.com/gpu.compute.major: "8"
      nvidia.com/gpu.compute.minor: "0"
      nvidia.com/gfd.timestamp: "1594644571"
      nvidia.com/gpu.count: "1"
      nvidia.com/gpu.family: ampere
      nvidia.com/gpu.machine: NVIDIA DGX-2H
      nvidia.com/gpu.memory: "39538"
      nvidia.com/gpu.product: A100-SXM4-40GB
      ...
...
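When scripting against these labels, it can help to filter a node's full label map down to just the nvidia.com/ prefix. The sketch below does this in Go over a hard-coded map; the node name and label values are illustrative, taken from the example output above, and this is not part of the GFD codebase:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// gpuLabels returns only the GPU-related labels (those with the
// nvidia.com/ prefix) from a node's label map, sorted for stable output.
func gpuLabels(labels map[string]string) []string {
	var out []string
	for k, v := range labels {
		if strings.HasPrefix(k, "nvidia.com/") {
			out = append(out, k+"="+v)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	node := map[string]string{
		"kubernetes.io/hostname": "gpu-node-1", // hypothetical node name
		"nvidia.com/gpu.count":   "1",
		"nvidia.com/gpu.family":  "ampere",
	}
	for _, l := range gpuLabels(node) {
		fmt.Println(l)
	}
}
```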

The GFD Command Line Interface

Available options:

gpu-feature-discovery:
Usage:
  gpu-feature-discovery [--fail-on-init-error=<bool>] [--mig-strategy=<strategy>] [--oneshot | --sleep-interval=<seconds>] [--no-timestamp] [--output-file=<file> | -o <file>]
  gpu-feature-discovery -h | --help
  gpu-feature-discovery --version

Options:
  -h --help                       Show this help message and exit
  --version                       Display version and exit
  --oneshot                       Label once and exit
  --no-timestamp                  Do not add timestamp to the labels
  --fail-on-init-error=<bool>     Fail if there is an error during initialization of any label sources [Default: true]
  --sleep-interval=<seconds>      Time to sleep between labeling [Default: 60s]
  --mig-strategy=<strategy>       Strategy to use for MIG-related labels [Default: none]
  -o <file> --output-file=<file>  Path to output file
                                  [Default: /etc/kubernetes/node-feature-discovery/features.d/gfd]

Arguments:
  <strategy>: none | single | mixed

You can also use environment variables:

| Env Variable | Option | Example |
|--------------|--------|---------|
| GFD_FAIL_ON_INIT_ERROR | --fail-on-init-error | true |
| GFD_MIG_STRATEGY | --mig-strategy | none |
| GFD_ONESHOT | --oneshot | TRUE |
| GFD_NO_TIMESTAMP | --no-timestamp | TRUE |
| GFD_OUTPUT_FILE | --output-file | output |
| GFD_SLEEP_INTERVAL | --sleep-interval | 10s |

Environment variables override the command line options if they conflict.

Generated Labels

This is the list of the labels generated by NVIDIA GPU Feature Discovery and their meaning:

| Label Name | Value Type | Meaning | Example |
|------------|------------|---------|---------|
| nvidia.com/cuda.driver.major | Integer | Major of the version of NVIDIA driver | 418 |
| nvidia.com/cuda.driver.minor | Integer | Minor of the version of NVIDIA driver | 30 |
| nvidia.com/cuda.driver.rev | Integer | Revision of the version of NVIDIA driver | 40 |
| nvidia.com/cuda.runtime.major | Integer | Major of the version of CUDA | 10 |
| nvidia.com/cuda.runtime.minor | Integer | Minor of the version of CUDA | 1 |
| nvidia.com/gfd.timestamp | Integer | Timestamp of the generated labels (optional) | 1555019244 |
| nvidia.com/gpu.compute.major | Integer | Major of the compute capabilities | 3 |
| nvidia.com/gpu.compute.minor | Integer | Minor of the compute capabilities | 3 |
| nvidia.com/gpu.count | Integer | Number of GPUs | 2 |
| nvidia.com/gpu.family | String | Architecture family of the GPU | kepler |
| nvidia.com/gpu.machine | String | Machine type | DGX-1 |
| nvidia.com/gpu.memory | Integer | Memory of the GPU in Mb | 2048 |
| nvidia.com/gpu.product | String | Model of the GPU | GeForce-GT-710 |

Depending on the MIG strategy used, the following set of labels may also be available (or override the default values for some of the labels listed above):

MIG 'single' strategy

With this strategy, the single nvidia.com/gpu label is overloaded to provide information about MIG devices on the node, rather than full GPUs. This assumes all GPUs on the node have been divided into identical partitions of the same size. The example below shows info for a system with 8 full GPUs, each of which is partitioned into 7 equal sized MIG devices (56 total).

| Label Name | Value Type | Meaning | Example |
|------------|------------|---------|---------|
| nvidia.com/mig.strategy | String | MIG strategy in use | single |
| nvidia.com/gpu.product (overridden) | String | Model of the GPU (with MIG info added) | A100-SXM4-40GB-MIG-1g.5gb |
| nvidia.com/gpu.count (overridden) | Integer | Number of MIG devices | 56 |
| nvidia.com/gpu.memory (overridden) | Integer | Memory of each MIG device in Mb | 5120 |
| nvidia.com/gpu.multiprocessors | Integer | Number of multiprocessors for MIG device | 14 |
| nvidia.com/gpu.slices.gi | Integer | Number of GPU Instance slices | 1 |
| nvidia.com/gpu.slices.ci | Integer | Number of Compute Instance slices | 1 |
| nvidia.com/gpu.engines.copy | Integer | Number of DMA engines for MIG device | 1 |
| nvidia.com/gpu.engines.decoder | Integer | Number of decoders for MIG device | 1 |
| nvidia.com/gpu.engines.encoder | Integer | Number of encoders for MIG device | 1 |
| nvidia.com/gpu.engines.jpeg | Integer | Number of JPEG engines for MIG device | 0 |
| nvidia.com/gpu.engines.ofa | Integer | Number of OFA engines for MIG device | 0 |

MIG 'mixed' strategy

With this strategy, a separate set of labels is generated for each MIG device type present on the node. The name of each MIG device type is defined as follows:

MIG_TYPE=mig-<slice_count>g.<memory_size>gb
e.g.  MIG_TYPE=mig-3g.20gb
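The naming scheme above can be sketched as a small helper that builds the type name from the GPU Instance slice count and the memory size in GB. This is a hypothetical helper for illustration, not part of the GFD API:

```go
package main

import "fmt"

// migTypeName builds the MIG device type name used in the mixed-strategy
// label keys, e.g. mig-3g.20gb for 3 GI slices and 20GB of memory.
func migTypeName(sliceCount, memoryGB int) string {
	return fmt.Sprintf("mig-%dg.%dgb", sliceCount, memoryGB)
}

func main() {
	t := migTypeName(3, 20)
	fmt.Println(t)                            // mig-3g.20gb
	fmt.Println("nvidia.com/" + t + ".count") // nvidia.com/mig-3g.20gb.count
}
```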
| Label Name | Value Type | Meaning | Example |
|------------|------------|---------|---------|
| nvidia.com/mig.strategy | String | MIG strategy in use | mixed |
| nvidia.com/MIG_TYPE.count | Integer | Number of MIG devices of this type | 2 |
| nvidia.com/MIG_TYPE.memory | Integer | Memory of MIG device type in Mb | 10240 |
| nvidia.com/MIG_TYPE.multiprocessors | Integer | Number of multiprocessors for MIG device | 14 |
| nvidia.com/MIG_TYPE.slices.gi | Integer | Number of GPU Instance slices | 1 |
| nvidia.com/MIG_TYPE.slices.ci | Integer | Number of Compute Instance slices | 1 |
| nvidia.com/MIG_TYPE.engines.copy | Integer | Number of DMA engines for MIG device | 1 |
| nvidia.com/MIG_TYPE.engines.decoder | Integer | Number of decoders for MIG device | 1 |
| nvidia.com/MIG_TYPE.engines.encoder | Integer | Number of encoders for MIG device | 1 |
| nvidia.com/MIG_TYPE.engines.jpeg | Integer | Number of JPEG engines for MIG device | 0 |
| nvidia.com/MIG_TYPE.engines.ofa | Integer | Number of OFA engines for MIG device | 0 |

Deployment via helm

The preferred method to deploy gpu-feature-discovery is as a daemonset using helm. Instructions for installing helm can be found here.

The helm chart for the latest release of GFD (v0.8.2) includes a number of customizable values. The most commonly overridden ones are:

  failOnInitError:
      Fail if there is an error during initialization of any label sources (default: true)
  sleepInterval:
      time to sleep between labeling (default "60s")
  migStrategy:
      pass the desired strategy for labeling MIG devices on GPUs that support it
      [none | single | mixed] (default "none")
  nfd.deploy, nfd.enabled:
      When set to true, deploy NFD as a subchart with all of the proper
      parameters set for it (default "true")
  runtimeClassName:
      the runtimeClassName to use, for use with clusters that have multiple runtimes

Note: The following document provides more information on the available MIG strategies and how they should be used Supporting Multi-Instance GPUs (MIG) in Kubernetes.

Please take a look in the following values.yaml files to see the full set of overridable parameters for both the top-level gpu-feature-discovery chart and the node-feature-discovery subchart.

Installing via helm install

Starting with v0.6.0 there are two ways to deploy gpu-feature-discovery via helm:

  1. As a standalone chart without configuration file or GPU sharing support
  2. As a subchart of the NVIDIA device plugin with configuration file and GPU sharing support
Deploying Standalone

When deploying standalone, begin by setting up GFD's helm repository and updating it as follows:

$ helm repo add nvgfd https://nvidia.github.io/gpu-feature-discovery
$ helm repo update

Then verify that the latest release (v0.8.2) of the plugin is available:

$ helm search repo nvgfd --devel
NAME                          CHART VERSION  APP VERSION    DESCRIPTION
nvgfd/gpu-feature-discovery   0.8.2     0.8.2     A Helm chart for ...

Once this repo is updated, you can begin installing packages from it to deploy the gpu-feature-discovery helm chart.

The most basic installation command without any options is then:

$ helm upgrade -i nvgfd nvgfd/gpu-feature-discovery \
  --version 0.8.2 \
  --namespace gpu-feature-discovery \
  --create-namespace

Disabling auto-deployment of NFD and running with a MIG strategy of 'mixed' in the default namespace.

$ helm upgrade -i nvgfd nvgfd/gpu-feature-discovery \
    --version=0.8.2 \
    --set allowDefaultNamespace=true \
    --set nfd.enabled=false \
    --set migStrategy=mixed

Note: You only need to pass the --devel flag to helm search repo and the --version flag to helm upgrade -i if this is a pre-release version (e.g. <version>-rc.1). Full releases will be listed without these flags.

Deploying as a subchart of the NVIDIA device plugin

Starting with v0.12.0 of the NVIDIA device plugin, a configuration file can now be used to configure the plugin. This same configuration file can also be used to configure GFD. The GFD specific sections of the configuration file are specified in the gfd section as seen below:

version: v1
flags:
  migStrategy: "none"
  failOnInitError: true
  nvidiaDriverRoot: "/"
  plugin:
    passDeviceSpecs: false
    deviceListStrategy: "envvar"
    deviceIDStrategy: "uuid"
  gfd:
    oneshot: false
    noTimestamp: false
    outputFile: /etc/kubernetes/node-feature-discovery/features.d/gfd
    sleepInterval: 60s

The fields in this section map to the command line flags and environment variables available for configuring GFD.

For details on how to deploy GFD as part of the plugin, please refer to the NVIDIA device plugin documentation.

Deploying via helm install with a direct URL to the helm package

If you prefer not to install from the gpu-feature-discovery helm repo, you can run helm install directly against the tarball of the component's helm package. The examples below install the same daemonsets as the method above, except that they use direct URLs to the helm package instead of the helm repo.

Using the default values for the flags:

$ helm upgrade -i nvgfd \
    --set allowDefaultNamespace=true \
    https://nvidia.github.io/gpu-feature-discovery/stable/gpu-feature-discovery-0.8.2.tgz

Disabling auto-deployment of NFD and running with a MIG strategy of 'mixed' in the default namespace.

$ helm upgrade -i nvgfd \
    --set nfd.deploy=false \
    --set migStrategy=mixed \
    --set allowDefaultNamespace=true \
    https://nvidia.github.io/gpu-feature-discovery/stable/gpu-feature-discovery-0.8.2.tgz

Building and running locally with Docker

Download the source code:

git clone https://github.com/NVIDIA/gpu-feature-discovery

Build the docker image:

export GFD_VERSION=$(git describe --tags --dirty --always)
docker build . --build-arg GFD_VERSION=$GFD_VERSION -t nvcr.io/nvidia/gpu-feature-discovery:${GFD_VERSION}

Run it:

mkdir -p output-dir
docker run -v ${PWD}/output-dir:/etc/kubernetes/node-feature-discovery/features.d nvcr.io/nvidia/gpu-feature-discovery:${GFD_VERSION}

The default runtime of Docker on your host should be set to nvidia, or you can use the --runtime=nvidia option instead:

docker run --runtime=nvidia nvcr.io/nvidia/gpu-feature-discovery:${GFD_VERSION}

Building and running locally on your native machine

Download the source code:

git clone https://github.com/NVIDIA/gpu-feature-discovery

Get dependencies:

dep ensure

Build it:

export GFD_VERSION=$(git describe --tags --dirty --always)
go build -ldflags "-X main.Version=${GFD_VERSION}"

You can also use the Dockerfile.devel:

docker build . -f Dockerfile.devel -t gfd-devel
docker run -it gfd-devel
go build -ldflags "-X main.Version=devel"

Run it:

./gpu-feature-discovery --output-file=$(pwd)/gfd