@@ -8,14 +8,14 @@ author: >
The DRA team
---

- [Kubernetes 1.34](XXXXX) is here, and it brings a huge wave of enhancements for Dynamic Resource Allocation (DRA)! This
- release marks a major milestone with the Structured Parameters feature graduating to General Availability (GA),
+ Kubernetes 1.34 is here, and it has brought a huge wave of enhancements for Dynamic Resource Allocation (DRA)! This
+ release marks a major milestone with many APIs in the `resource.k8s.io` group graduating to General Availability (GA),
unlocking the full potential of how you manage devices on Kubernetes. On top of that, several key features have
moved to beta, and a fresh batch of new alpha features promise even more expressiveness and flexibility.

Let's dive into what's new for DRA in Kubernetes 1.34!

- ## Structured Parameters is now GA
+ ## The core of DRA is now GA

The headline feature of the v1.34 release is that the core of DRA has graduated to General Availability.

@@ -28,7 +28,7 @@ With the graduation to GA, DRA is stable and will be part of Kubernetes for the
expect a steady stream of new features being added to DRA over the next several Kubernetes releases, but they will
not make any breaking changes to DRA. So users and developers of DRA drivers can start adopting DRA with confidence.

- Starting with Kubernetes 1.34, DRA is enabled by default; DRA features that have reached beta are also enabled by default.
+ Starting with Kubernetes 1.34, DRA is enabled by default; the DRA features that have reached beta are **also** enabled by default.
That's because the default API version for DRA is now the stable `v1` version, and not the earlier versions
(e.g. `v1beta1` or `v1beta2`) that needed explicit opt in.

@@ -39,12 +39,13 @@ management with DRA.

[Admin access labelling](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#admin-access) has been updated.
In v1.34, you can restrict device support to people (or software) authorized to use it. This is meant
- as a way to avoid privilege escalation through use of hardware devices that can bypass other security controls.
+ as a way to avoid privilege escalation if a DRA driver grants additional privileges when admin access is requested,
+ and to avoid accessing devices that are in use by normal applications, potentially in another namespace.
The restriction works by ensuring that only users with access to a namespace with the
`resource.k8s.io/admin-access: "true"` label are authorized to create
ResourceClaim or ResourceClaimTemplate objects with the `adminAccess` field set to true. This ensures that non-admin users cannot misuse the feature.

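As a sketch of how this fits together, the restriction pairs a labelled namespace with a claim that sets `adminAccess`. All names below are made up, and the field layout follows the `resource.k8s.io/v1` API; treat this as illustrative rather than authoritative:

```yaml
# Namespace that a cluster admin has labelled for DRA admin access.
apiVersion: v1
kind: Namespace
metadata:
  name: dra-admins                         # hypothetical namespace
  labels:
    resource.k8s.io/admin-access: "true"
---
# A ResourceClaim requesting privileged access to a device. Creating it
# is only permitted inside a namespace carrying the label above.
apiVersion: resource.k8s.io/v1
kind: ResourceClaim
metadata:
  name: gpu-debug                          # hypothetical claim
  namespace: dra-admins
spec:
  devices:
    requests:
    - name: gpu
      exactly:
        deviceClassName: gpu.example.com   # hypothetical device class
        adminAccess: true
```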
- [Prioritized List](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#prioritized-list) lets users specify
+ [Prioritized list](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#prioritized-list) lets users specify
a list of acceptable devices for their workloads, rather than just a single type of device. So while the workload
might run best on a single high-performance GPU, it might also be able to run on 2 mid-level GPUs. The scheduler will
attempt to satisfy the alternatives in the list in order, so the workload will be allocated the best set of devices
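The "one large GPU or two mid-level ones" example can be written as a prioritized list in a ResourceClaim. This is a hedged sketch: the device class names are invented, and the shape of `firstAvailable` follows the `resource.k8s.io/v1` API as I understand it:

```yaml
apiVersion: resource.k8s.io/v1
kind: ResourceClaim
metadata:
  name: training-gpus                        # hypothetical claim
spec:
  devices:
    requests:
    - name: gpus
      # Alternatives are tried in order; the first one that can be
      # satisfied is allocated.
      firstAvailable:
      - name: one-large
        deviceClassName: large-gpu.example.com   # hypothetical class
        allocationMode: ExactCount
        count: 1
      - name: two-medium
        deviceClassName: mid-gpu.example.com     # hypothetical class
        allocationMode: ExactCount
        count: 2
```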
@@ -64,27 +65,27 @@ the familiar, simpler request syntax while still benefiting from dynamic allocat
workloads to start using DRA without modifications, simplifying the transition to DRA for both application developers and
cluster administrators.

- [Consumable Capacity](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#consumable-capacity) introduces a flexible
+ [Consumable capacity](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#consumable-capacity) introduces a flexible
device sharing model where multiple, independent resource claims from unrelated
pods can each be allocated a share of the same underlying physical device. This new capability is managed through optional,
administrator-defined sharing policies that govern how a device's total capacity is divided and enforced by the platform for
each request. This allows for sharing of devices in scenarios where pre-defined partitions are not viable.

- [Binding Conditions](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#binding-conditions) improves scheduling
+ [Binding conditions](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#binding-conditions) improve scheduling
reliability for certain classes of devices by allowing the Kubernetes scheduler to delay binding a pod to a node until its
required external resources, such as attachable devices or FPGAs, are confirmed to be fully prepared. This prevents premature
pod assignments that could lead to failures and ensures more robust, predictable scheduling by explicitly modeling resource
readiness before the pod is committed to a node.

- Resource Health Status for DRA improves observability by exposing the health status of devices allocated to a Pod via Pod Status.
+ _Resource health status_ for DRA improves observability by exposing the health status of devices allocated to a Pod in the Pod's status.
This works whether the device is allocated through DRA or a device plugin. This makes it easier to understand the cause of an
unhealthy device and respond properly.

## What’s next?

While DRA got promoted to GA this cycle, the hard work on DRA doesn't stop. There are several features in alpha and beta that
we plan to bring to GA in the next couple of releases, and we are looking to continue to improve the performance, scalability,
- and reliability of DRA. So expect an equally ambitious set of features in DRA for 1.35.
+ and reliability of DRA. So expect an equally ambitious set of features in DRA for the 1.35 release.

## Getting involved
