
MicroK8s on NVIDIA DGX

On DGX systems, enable GPU support right after installing MicroK8s:

sudo snap install microk8s --classic
sudo microk8s enable gpu

The gpu addon installs the NVIDIA GPU Operator, which configures the cluster so workloads can take advantage of the available GPU hardware.
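Once the addon is enabled, you can check that the operator components have come up and that the node advertises GPU resources. The namespace name below is an assumption that may differ between MicroK8s versions:

```shell
# List the GPU operator pods; the namespace name may vary by MicroK8s version.
microk8s kubectl get pods -n gpu-operator-resources

# Confirm the node now advertises nvidia.com/gpu in its allocatable resources.
microk8s kubectl describe node | grep -i nvidia.com/gpu
```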

Verify the installation

To verify that the installation works as expected, run a CUDA vector addition by applying the following manifest (save this file and use kubectl apply):

apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-vector-add
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1

After the addition completes, check the logs of the cuda-vector-add pod to see if it succeeded.
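Assuming the manifest above was saved as cuda-vector-add.yaml (the filename is a placeholder), the verification flow looks like this:

```shell
# Create the pod from the manifest above.
microk8s kubectl apply -f cuda-vector-add.yaml

# Wait until the STATUS column shows Completed.
microk8s kubectl get pod cuda-vector-add

# On success, the log should contain a line such as "Test PASSED".
microk8s kubectl logs cuda-vector-add
```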

Multi-Instance GPU (MIG)

Multi-Instance GPU (MIG) allows a single GPU to be partitioned into multiple isolated instances, each of which can be used safely by a separate CUDA application. Starting with MicroK8s v1.23, MIG can be configured via ConfigMaps; please see the NVIDIA GPU Operator documentation on this topic.
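As a sketch of how this looks in practice: the GPU Operator watches the nvidia.com/mig.config node label and repartitions the GPU according to the profile named there. The node name below is a placeholder, and the all-1g.5gb profile assumes an A100-class GPU; consult the operator's MIG ConfigMap for the profiles valid on your hardware.

```shell
# Placeholder node name; list nodes with `microk8s kubectl get nodes`.
# The all-1g.5gb profile assumes an A100-class GPU; valid profiles
# depend on the GPU model and the operator's MIG ConfigMap.
microk8s kubectl label node <node-name> nvidia.com/mig.config=all-1g.5gb --overwrite
```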
