
Commit c81472c

Merge pull request #2066 from tkatila/npu-plugin
NPU plugin support
2 parents d4d9b65 + a8dfbe7 commit c81472c

32 files changed (+1812 −23 lines)

.github/workflows/lib-build.yaml

Lines changed: 2 additions & 0 deletions

```diff
@@ -26,6 +26,7 @@ jobs:
           - intel-dsa-plugin
           - intel-iaa-plugin
           - intel-idxd-config-initcontainer
+          - intel-npu-plugin
 
           # # Demo images
           - crypto-perf
@@ -35,6 +36,7 @@ jobs:
           - sgx-sdk-demo
           - sgx-aesmd-demo
           - dsa-dpdk-dmadevtest
+          - intel-npu-demo
         builder: [buildah, docker]
     steps:
       - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
```

README.md

Lines changed: 9 additions & 1 deletion

```diff
@@ -22,6 +22,7 @@ Table of Contents
 * [DSA device plugin](#dsa-device-plugin)
 * [DLB device plugin](#dlb-device-plugin)
 * [IAA device plugin](#iaa-device-plugin)
+* [NPU device plugin](#npu-device-plugin)
 * [Device Plugins Operator](#device-plugins-operator)
 * [XeLink XPU Manager sidecar](#xelink-xpu-manager-sidecar)
 * [Intel GPU Level-Zero sidecar](#intel-gpu-levelzero)
@@ -182,12 +183,17 @@ Balancer accelerator(DLB).
 The [IAA device plugin](cmd/iaa_plugin/README.md) supports acceleration using
 the Intel Analytics accelerator(IAA).
 
+### NPU Device Plugin
+
+The [NPU device plugin](cmd/npu_plugin/README.md) supports acceleration using
+the Intel Neural Processing Unit (NPU).
+
 ## Device Plugins Operator
 
 To simplify the deployment of the device plugins, a unified device plugins
 operator is implemented.
 
-Currently the operator has support for the DSA, DLB, FPGA, GPU, IAA, QAT, and
+Currently the operator has support for the DSA, DLB, FPGA, GPU, IAA, QAT, NPU, and
 Intel SGX device plugins. Each device plugin has its own custom resource
 definition (CRD) and the corresponding controller that watches CRUD operations
 to those custom resources.
@@ -236,6 +242,8 @@ The summary of resources available via plugins in this repository is given in th
   * [intelgpu-job.yaml](demo/intelgpu-job.yaml)
 * `iaa.intel.com` : `wq-user-[shared or dedicated]`
   * [iaa-accel-config-demo-pod.yaml](demo/iaa-accel-config-demo-pod.yaml)
+* `npu.intel.com` : `accel`
+  * [intel-npu-workload.yaml](demo/intel-npu-workload.yaml)
 * `qat.intel.com` : `generic` or `cy`/`dc`/`asym-dc`/`sym-dc`
   * [compress-perf-dpdk-pod-requesting-qat-dc.yaml](deployments/qat_dpdk_app/compress-perf/compress-perf-dpdk-pod-requesting-qat-dc.yaml)
   * [crypto-perf-dpdk-pod-requesting-qat-cy.yaml](deployments/qat_dpdk_app/crypto-perf/crypto-perf-dpdk-pod-requesting-qat-cy.yaml)
```

build/docker/intel-npu-plugin.Dockerfile

Lines changed: 69 additions & 0 deletions (new file)

```dockerfile
## This is a generated file, do not edit directly. Edit build/docker/templates/intel-npu-plugin.Dockerfile.in instead.
##
## Copyright 2022 Intel Corporation. All Rights Reserved.
##
## Licensed under the Apache License, Version 2.0 (the "License");
## you may not use this file except in compliance with the License.
## You may obtain a copy of the License at
##
##     http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
###
ARG CMD=npu_plugin
## FINAL_BASE can be used to configure the base image of the final image.
##
## This is used in two ways:
## 1) make <image-name> BUILDER=<docker|buildah>
## 2) docker build ... -f <image-name>.Dockerfile
##
## The project default is 1) which sets FINAL_BASE=gcr.io/distroless/static
## (see build-image.sh).
## 2) and the default FINAL_BASE is primarily used to build Redhat Certified Openshift Operator container images that must be UBI based.
## The RedHat build tool does not allow additional image build parameters.
ARG FINAL_BASE=registry.access.redhat.com/ubi9-micro:latest
###
##
## GOLANG_BASE can be used to make the build reproducible by choosing an
## image by its hash:
## GOLANG_BASE=golang@sha256:9d64369fd3c633df71d7465d67d43f63bb31192193e671742fa1c26ebc3a6210
##
## This is used on release branches before tagging a stable version.
## The main branch defaults to using the latest Golang base image.
ARG GOLANG_BASE=golang:1.24-bookworm
###
FROM ${GOLANG_BASE} AS builder
ARG DIR=/intel-device-plugins-for-kubernetes
ARG GO111MODULE=on
ARG LDFLAGS="all=-w -s"
ARG GOFLAGS="-trimpath"
ARG GCFLAGS="all=-spectre=all -N -l"
ARG ASMFLAGS="all=-spectre=all"
ARG GOLICENSES_VERSION
ARG EP=/usr/local/bin/intel_npu_device_plugin
ARG CMD
WORKDIR ${DIR}
COPY . .
RUN (cd cmd/${CMD}; GO111MODULE=${GO111MODULE} GOFLAGS=${GOFLAGS} CGO_ENABLED=0 go install -gcflags="${GCFLAGS}" -asmflags="${ASMFLAGS}" -ldflags="${LDFLAGS}") && install -D /go/bin/${CMD} /install_root${EP}
RUN install -D ${DIR}/LICENSE /install_root/licenses/intel-device-plugins-for-kubernetes/LICENSE \
    && if [ ! -d "licenses/$CMD" ] ; then \
    GO111MODULE=on GOROOT=$(go env GOROOT) go run github.com/google/go-licenses@${GOLICENSES_VERSION} save "./cmd/$CMD" \
    --save_path /install_root/licenses/$CMD/go-licenses ; \
    else mkdir -p /install_root/licenses/$CMD/go-licenses/ && cd licenses/$CMD && cp -r * /install_root/licenses/$CMD/go-licenses/ ; fi && \
    echo "Verifying installed licenses" && test -e /install_root/licenses/$CMD/go-licenses
###
FROM ${FINAL_BASE}
COPY --from=builder /install_root /
ENTRYPOINT ["/usr/local/bin/intel_npu_device_plugin"]
LABEL vendor='Intel®'
LABEL org.opencontainers.image.source='https://github.com/intel/intel-device-plugins-for-kubernetes'
LABEL maintainer="Intel®"
LABEL version='devel'
LABEL release='1'
LABEL name='intel-npu-plugin'
LABEL summary='Intel® NPU device plugin for Kubernetes'
LABEL description='The NPU device plugin provides access to Intel CPU neural processing unit (NPU) device files'
```

build/docker/templates/intel-npu-plugin.Dockerfile.in

Lines changed: 8 additions & 0 deletions (new file)

```dockerfile
#define _ENTRYPOINT_ /usr/local/bin/intel_npu_device_plugin
ARG CMD=npu_plugin

#include "default_plugin.docker"

LABEL name='intel-npu-plugin'
LABEL summary='Intel® NPU device plugin for Kubernetes'
LABEL description='The NPU device plugin provides access to Intel CPU neural processing unit (NPU) device files'
```

cmd/npu_plugin/README.md

Lines changed: 145 additions & 0 deletions

# Intel NPU device plugin for Kubernetes

Table of Contents

* [Introduction](#introduction)
* [Modes and Configuration Options](#modes-and-configuration-options)
* [UMD, KMD, and Firmware](#umd-kmd-and-firmware)
* [Pre-built Images](#pre-built-images)
* [Installation](#installation)
    * [Install with NFD](#install-with-nfd)
    * [Install with Operator](#install-with-operator)
    * [Verify Plugin Registration](#verify-plugin-registration)
* [Testing and Demos](#testing-and-demos)

## Introduction

The Intel NPU plugin facilitates Kubernetes workload offloading by providing access to the neural processing units (NPUs) integrated into Intel CPUs, as supported by the host kernel.

The following CPU families are currently detected by the plugin:
* Core Ultra Series 1 (Meteor Lake)
* Core Ultra Series 2 (Arrow Lake)
* Core Ultra 200V Series (Lunar Lake)

The Intel NPU plugin registers one resource with the Kubernetes cluster:

| Resource | Description |
|:---- |:-------- |
| npu.intel.com/accel | NPU |

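For illustration, a minimal sketch of a pod that consumes this resource is shown below; the pod name, image, and command are placeholders rather than anything shipped in this repository, and the image is assumed to bundle the NPU user-mode driver.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: npu-example                        # hypothetical name, for illustration only
spec:
  restartPolicy: Never
  containers:
    - name: workload
      image: example.com/my-npu-app:latest # placeholder; the image must bundle the NPU UMD
      command: ["sh", "-c", "ls -l /dev/accel"]  # NPU device nodes typically appear under /dev/accel
      resources:
        limits:
          npu.intel.com/accel: 1           # the resource registered by the plugin
```
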
## Modes and Configuration Options

| Flag | Argument | Default | Meaning |
|:---- |:-------- |:------- |:------- |
| -shared-dev-num | int | 1 | Number of containers that can share the same NPU device |

The plugin also accepts a number of other arguments (common to all plugins) related to logging.
Please use the `-h` option to see the complete list of logging-related options.

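As an example of where that flag goes, the fragment below sketches the plugin container in its DaemonSet with two containers allowed per NPU; the container name is an assumption, and the full manifest lives under deployments/npu_plugin rather than here.

```yaml
# Sketch of the relevant DaemonSet fragment only, not the complete manifest.
containers:
  - name: intel-npu-plugin               # assumed container name
    image: intel/intel-npu-plugin:devel
    args:
      - "-shared-dev-num=2"              # advertise each NPU to two containers
```
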
## UMD, KMD, and Firmware

To run workloads on the NPU device, three components are required:

- **UMD (User Mode Driver):** Must be included in the container image. Download it from the [Intel NPU driver](https://github.com/intel/linux-npu-driver/) project.
- **KMD (Kernel Mode Driver):** Provided by recent Linux distributions (e.g., Ubuntu 24.04) as part of the operating system.
- **Firmware:** Also included in modern Linux distributions, or available from [linux-firmware](https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/tree/intel/vpu) and [intel-npu-driver](https://github.com/intel/linux-npu-driver/tree/main/firmware/bin).

For a detailed overview, see the Intel NPU driver [documentation](https://github.com/intel/linux-npu-driver/blob/main/docs/overview.md).

An example [demo workload](#testing-and-demos) is provided in this repository.

For reference:
- The NPU KMD source is in the [Linux kernel](https://github.com/torvalds/linux/tree/master/drivers/accel/ivpu).
- Firmware sources are in [linux-firmware](https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/tree/intel/vpu) and [intel-npu-driver](https://github.com/intel/linux-npu-driver/tree/main/firmware/bin).

## Pre-built Images

[Pre-built images](https://hub.docker.com/r/intel/intel-npu-plugin)
are available on the Docker Hub. These images are automatically built and uploaded
to the hub from the latest main branch of this repository.

Release-tagged images of the components are also available on the Docker Hub, tagged with their
release version numbers in the format `x.y.z`, corresponding to the branches and releases in this
repository.

See [the development guide](../../DEVEL.md) for details if you want to deploy a customized version of the plugin.

## Installation

There are multiple ways to install the Intel NPU plugin to a cluster. The most common methods are described below.

> **Note**: Replace `<RELEASE_VERSION>` with the desired [release tag](https://github.com/intel/intel-device-plugins-for-kubernetes/tags) or `main` to get `devel` images.

> **Note**: Add `--dry-run=client -o yaml` to the `kubectl` commands below to visualize the YAML content being applied.

### Install with NFD

Deploy the NPU plugin with the help of NFD ([Node Feature Discovery](https://github.com/kubernetes-sigs/node-feature-discovery)). It detects the presence of Intel NPUs and labels them accordingly. The NPU plugin's node selector is used to deploy the plugin to nodes which have such an NPU label.

```bash
# Start NFD - if your cluster doesn't have NFD installed yet
$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd?ref=<RELEASE_VERSION>'

# Create NodeFeatureRules for detecting NPUs on nodes
$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd/overlays/node-feature-rules?ref=<RELEASE_VERSION>'

# Create the NPU plugin daemonset
$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/npu_plugin/overlays/nfd_labeled_nodes?ref=<RELEASE_VERSION>'
```

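To make the NFD step concrete, the sketch below shows roughly what a NodeFeatureRule matching Intel NPU PCI devices could look like. The rule name, label key, and PCI class here are assumptions for illustration; the rule actually applied by the overlay above may differ, so treat this only as a reading aid.

```yaml
apiVersion: nfd.k8s.io/v1alpha1
kind: NodeFeatureRule
metadata:
  name: intel-npu-example-rule                           # hypothetical name
spec:
  rules:
    - name: "intel.npu"
      labels:
        "intel.feature.node.kubernetes.io/npu": "true"   # assumed label key
      matchFeatures:
        - feature: pci.device
          matchExpressions:
            vendor: {op: In, value: ["8086"]}
            class: {op: In, value: ["1200"]}             # PCI processing accelerator class
```
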
### Install with Operator

The NPU plugin can be installed with the Intel Device Plugin Operator. It allows configuring the NPU plugin's parameters without kustomizing the deployment files. The general installation is described in the [install documentation](../operator/README.md#installation).

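Assuming the operator release in use ships the NPU CRD, the custom resource might look like the sketch below. The kind and field names are modeled on the other plugins' CRDs (for example GpuDevicePlugin) and should be checked against the operator's deviceplugin.intel.com API before use.

```yaml
apiVersion: deviceplugin.intel.com/v1
kind: NpuDevicePlugin                    # assumed kind, mirroring the other plugins' CRDs
metadata:
  name: npudeviceplugin-sample
spec:
  image: intel/intel-npu-plugin:devel
  sharedDevNum: 1                        # maps to the -shared-dev-num flag
  nodeSelector:
    intel.feature.node.kubernetes.io/npu: "true"   # assumed NFD label
```
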
### Verify Plugin Registration

You can verify that the plugin has been installed on the expected nodes by searching for the relevant
resource allocation status on the nodes:

```bash
$ kubectl get nodes -o=jsonpath="{range .items[*]}{.metadata.name}{'\n'}{' accel: '}{.status.allocatable.npu\.intel\.com/accel}{'\n'}"
master
 accel: 1
```

## Testing and Demos

The NPU plugin functionality can be verified by deploying an [npu-plugin-demo](../../demo/intel-npu-demo/) image which runs tests with the Intel NPU.

1. Make the image available to the cluster:

    Build the image:

    ```bash
    $ make intel-npu-demo
    ```

    Tag and push the `intel-npu-demo` image to a repository available in the cluster. Then modify the image location in [intel-npu-workload.yaml](../../demo/intel-npu-workload.yaml) accordingly:

    ```bash
    $ docker tag intel/intel-npu-demo:devel <repository>/intel/intel-npu-demo:latest
    $ docker push <repository>/intel/intel-npu-demo:latest
    $ $EDITOR ${INTEL_DEVICE_PLUGINS_SRC}/demo/intel-npu-workload.yaml
    ```

    If you are running the demo on a single-node cluster and do not have your own registry, you can add the image to the node's image cache instead. For example, to import a Docker image into the containerd cache:

    ```bash
    $ docker save intel/intel-npu-demo:devel | ctr -n k8s.io images import -
    ```

    Running `ctr` may require the use of `sudo`.

1. Create a job:

    ```bash
    $ kubectl apply -f ${INTEL_DEVICE_PLUGINS_SRC}/demo/intel-npu-workload.yaml
    job.batch/npu-workload created
    ```

1. Review the job's logs:

    ```bash
    $ kubectl get pods | fgrep npu-workload
    # substitute the 'xxxxx' below for the pod name listed above
    $ kubectl logs npu-workload-xxxxx
    <log output>
    ```

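The referenced intel-npu-workload.yaml is not included in this excerpt of the diff. As a rough idea of the shape such a manifest takes, a Job along the following lines would request one NPU for the demo container; the container name is a placeholder and the demo image's entrypoint is assumed to run the tests.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: npu-workload                     # matches the job name shown in the output above
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: npu-demo                 # placeholder container name
          image: intel/intel-npu-demo:devel   # or the retagged <repository> image
          resources:
            limits:
              npu.intel.com/accel: 1     # one NPU from the device plugin
```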
