Commit 0fcc949

Merge pull request #52744 from Jimmykhangnguyen/merged-main-dev-1.35
Merge main branch into dev-1.35
2 parents 0c94ca5 + f0c408d commit 0fcc949

File tree: 70 files changed (+5805, -2096 lines)


content/en/blog/_posts/2025-05-22-wg-policy-spotlight.md renamed to content/en/blog/_posts/2025-10-18-wg-policy-spotlight.md

Lines changed: 4 additions & 5 deletions
@@ -2,9 +2,8 @@
 layout: blog
 title: "Spotlight on Policy Working Group"
 slug: wg-policy-spotlight-2025
-draft: true
-date: 2025-05-22
-author: "Arujjwal Negi"
+date: 2025-10-18
+author: Arujjwal Negi
 ---
 
 *(Note: The Policy Working Group has completed its mission and is no longer active. This article reflects its work, accomplishments, and insights into how a working group operates.)*
@@ -18,7 +17,7 @@ Through collaborative methods, this working group strove to bring clarity and co
 This blog post dives deeper into the work of the Policy Working Group, guided by insights from its former co-chairs:
 
 - [Jim Bugwadia](https://twitter.com/JimBugwadia)
-- [Poonam Lamba](https://twitter.com/poonam-lamba)
+- [Poonam Lamba](https://twitter.com/poonam_lamba)
 - [Andy Suderman](https://twitter.com/sudermanjr)
 
 _Interviewed by [Arujjwal Negi](https://twitter.com/arujjval)._
@@ -71,7 +70,7 @@ We worked on several Kubernetes policy-related projects. Our initiatives include
 
 The charter of the Policy WG was to help standardize policy management for Kubernetes and educate the community on best practices.
 
-To accomplish this we updated the Kubernetes documentation ([Policies | Kubernetes](https://kubernetes.io/docs/concepts/policy)), produced several whitepapers ([Kubernetes Policy Management](https://github.com/kubernetes/sig-security/blob/main/sig-security-docs/papers/policy/CNCF_Kubernetes_Policy_Management_WhitePaper_v1.pdf), [Kubernetes GRC](https://github.com/kubernetes/sig-security/blob/main/sig-security-docs/papers/policy_grc/Kubernetes_Policy_WG_Paper_v1_101123.pdf)), and created the Policy Reports API ([API reference](https://htmlpreview.github.io/?https://github.com/kubernetes-sigs/wg-policy-prototypes/blob/master/policy-report/docs/index.html)) which standardizes reporting across various tools. Several popular tools such as Falco, Trivy, Kyverno, kube-bench, and others support the Policy Report API. A major milestone for the Policy WG was promoting the Policy Reports API to a SIG-level API or finding it a stable home.
+To accomplish this we updated the Kubernetes documentation ([Policies | Kubernetes](https://kubernetes.io/docs/concepts/policy)), produced several whitepapers ([Kubernetes Policy Management](https://github.com/kubernetes/sig-security/blob/main/sig-security-docs/papers/policy/CNCF_Kubernetes_Policy_Management_WhitePaper_v1.pdf), [Kubernetes GRC](https://github.com/kubernetes/sig-security/blob/main/sig-security-docs/papers/policy_grc/Kubernetes_Policy_WG_Paper_v1_101123.pdf)), and created the Policy Reports API ([API reference](https://github.com/kubernetes-retired/wg-policy-prototypes/blob/master/policy-report/docs/api-docs.md)) which standardizes reporting across various tools. Several popular tools such as Falco, Trivy, Kyverno, kube-bench, and others support the Policy Report API. A major milestone for the Policy WG was promoting the Policy Reports API to a SIG-level API or finding it a stable home.
 
 Beyond that, as [ValidatingAdmissionPolicy](https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/) and [MutatingAdmissionPolicy](https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/) approached GA in Kubernetes, a key goal of the WG was to guide and educate the community on the tradeoffs and appropriate usage patterns for these built-in API objects and other CNCF policy management solutions like OPA/Gatekeeper and Kyverno.
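For readers unfamiliar with the Policy Reports API referenced in the hunk above, a minimal `PolicyReport` object looks roughly like the following. This is a sketch against the `wgpolicyk8s.io/v1alpha2` API; the policy name and the reported resource are illustrative, not taken from this commit:

```yaml
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  name: example-policy-report    # hypothetical name
  namespace: default
summary:
  pass: 1                        # aggregate counts across results
  fail: 0
results:
- policy: require-labels         # illustrative policy name
  result: pass
  resources:                     # the object(s) the result refers to
  - apiVersion: v1
    kind: Pod
    name: nginx
    namespace: default
```

Tools such as Kyverno, Falco, and Trivy emit reports in this shape, which is what lets a single dashboard consume results from all of them.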

content/en/docs/concepts/configuration/overview.md

Lines changed: 1 addition & 1 deletion
@@ -116,7 +116,7 @@ for a comprehensive list.
 MyApp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the
 appropriate Pods for other resources; for example, a Service that selects all `tier: frontend`
 Pods, or all `phase: test` components of `app.kubernetes.io/name: MyApp`.
-See the [guestbook](https://github.com/kubernetes/examples/tree/master/guestbook/) app
+See the [guestbook](https://github.com/kubernetes/examples/tree/master/web/guestbook/) app
 for examples of this approach.
 
 A Service can be made to span multiple Deployments by omitting release-specific labels from its
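The selection guidance in this hunk can be sketched as a Service whose selector keeps only the stable labels and omits release-specific ones such as `deployment: v3` (the names here are illustrative, not from the commit):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend                  # illustrative name
spec:
  selector:
    app.kubernetes.io/name: MyApp
    tier: frontend                # no release-specific label, so the
                                  # Service spans Pods from multiple
                                  # Deployments (v3, v4, ...)
  ports:
  - port: 80
    targetPort: 8080
```

Because the selector says nothing about `deployment`, a rolling cutover from one Deployment to the next needs no Service change.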

content/en/docs/concepts/workloads/controllers/job.md

Lines changed: 0 additions & 4 deletions
@@ -1185,10 +1185,6 @@ Another pattern is for a single Job to create a Pod which then creates other Pod
 of custom controller for those Pods. This allows the most flexibility, but may be somewhat
 complicated to get started with and offers less integration with Kubernetes.
 
-One example of this pattern would be a Job which starts a Pod which runs a script that in turn
-starts a Spark master controller (see [spark example](https://github.com/kubernetes/examples/tree/master/staging/spark/README.md)),
-runs a spark driver, and then cleans up.
-
 An advantage of this approach is that the overall process gets the completion guarantee of a Job
 object, but maintains complete control over what Pods are created and how work is assigned to them.

content/en/docs/reference/command-line-tools-reference/feature-gates/ComponentSLIs.md

Lines changed: 2 additions & 6 deletions
@@ -13,15 +13,11 @@ stages:
 - stage: beta
   defaultValue: true
   fromVersion: "1.27"
-  toVersion: "1.28"
+  toVersion: "1.31"
 - stage: stable
   defaultValue: true
   locked: true
-  fromVersion: "1.29"
-  toVersion: "1.31"
-
-removed: true
-
+  fromVersion: "1.32"
 ---
 Enable the `/metrics/slis` endpoint on Kubernetes components like
 kubelet, kube-scheduler, kube-proxy, kube-controller-manager, cloud-controller-manager

content/en/docs/reference/command-line-tools-reference/feature-gates/SizeMemoryBackedVolumes.md

Lines changed: 1 addition & 4 deletions
@@ -16,11 +16,8 @@ stages:
   toVersion: "1.31"
 - stage: stable
   defaultValue: true
+  locked: true
   fromVersion: "1.32"
-  toVersion: "1.33"
-
-removed: true
-
 ---
 Enable kubelets to determine the size limit for
 memory-backed volumes (mainly `emptyDir` volumes).

content/en/docs/reference/kubectl/kuberc.md

Lines changed: 1 addition & 1 deletion
@@ -55,7 +55,7 @@ In this example, the following settings were used:
 With this alias, running `kubectl getn pods` will default JSON output. However,
 if you execute `kubectl getn pods -oyaml`, the output will be in YAML format.
 
-Full `kuberc` schema is available [here](/docs/reference/config-api/kubelet-config.v1beta1/).
+Full `kuberc` schema is available [here](/docs/reference/config-api/kuberc.v1beta1/).
 
 ### prependArgs

content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md

Lines changed: 28 additions & 10 deletions
@@ -16,14 +16,6 @@ All of these options are possible via the kubeadm configuration API.
 For more details on each field in the configuration you can navigate to our
 [API reference pages](/docs/reference/config-api/kubeadm-config.v1beta4/).
 
-{{< note >}}
-Customizing the CoreDNS deployment of kubeadm is currently not supported. You must manually
-patch the `kube-system/coredns` {{< glossary_tooltip text="ConfigMap" term_id="configmap" >}}
-and recreate the CoreDNS {{< glossary_tooltip text="Pods" term_id="pod" >}} after that. Alternatively,
-you can skip the default CoreDNS deployment and deploy your own variant.
-For more details on that see [Using init phases with kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-phases).
-{{< /note >}}
-
 {{< note >}}
 To reconfigure a cluster that has already been created see
 [Reconfiguring a kubeadm cluster](/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure).
@@ -173,8 +165,8 @@ patches:
 The directory must contain files named `target[suffix][+patchtype].extension`.
 For example, `kube-apiserver0+merge.yaml` or just `etcd.json`.
 
-- `target` can be one of `kube-apiserver`, `kube-controller-manager`, `kube-scheduler`, `etcd`
-  and `kubeletconfiguration`.
+- `target` can be one of `kube-apiserver`, `kube-controller-manager`, `kube-scheduler`, `etcd`,
+  `kubeletconfiguration` and `corednsdeployment`.
 - `suffix` is an optional string that can be used to determine which patches are applied first
   alpha-numerically.
 - `patchtype` can be one of `strategic`, `merge` or `json` and these must match the patching formats
@@ -217,3 +209,29 @@ For more details you can navigate to our [API reference pages](/docs/reference/c
 kubeadm deploys kube-proxy as a {{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}}, which means
 that the `KubeProxyConfiguration` would apply to all instances of kube-proxy in the cluster.
 {{< /note >}}
+
+## Customizing CoreDNS
+
+kubeadm allows you to customize the CoreDNS Deployment with patches against the
+[`corednsdeployment` patch target](#patches).
+
+Patches for other CoreDNS related API objects like the `kube-system/coredns`
+{{< glossary_tooltip text="ConfigMap" term_id="configmap" >}} are currently not supported.
+You must manually patch any of these objects using kubectl and recreate the CoreDNS
+{{< glossary_tooltip text="Pods" term_id="pod" >}} after that.
+
+Alternatively, you can disable the kubeadm CoreDNS deployment by including the following
+option in your `ClusterConfiguration`:
+
+```yaml
+dns:
+  disabled: true
+```
+
+Also, by executing the following command:
+
+```shell
+kubeadm init phase addon coredns --print-manifest --config my-config.yaml
+```
+
+you can obtain the manifest file kubeadm would create for CoreDNS on your setup.
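Combining the new `corednsdeployment` target with the `target[suffix][+patchtype].extension` naming rule described earlier in this file, a patch file could look like the following. This is only a sketch: the filename follows the documented scheme, but the replica count is an arbitrary example, not something this commit prescribes:

```yaml
# corednsdeployment+merge.yaml
# A hypothetical merge patch, placed in the kubeadm patches directory,
# that overrides fields of the CoreDNS Deployment generated by kubeadm.
spec:
  replicas: 3    # example only: run three CoreDNS replicas
```

Dropping the file into the directory referenced by `patches.directory` in the kubeadm configuration is enough; the `+merge` suffix selects the merge patch format.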

content/en/docs/tasks/configure-pod-container/security-context.md

Lines changed: 2 additions & 2 deletions
@@ -191,7 +191,7 @@ kubectl exec -it security-context-demo -- sh
 Check the process identity:
 
 ```shell
-$ id
+id
 ```
 
 The output is similar to this:
@@ -207,7 +207,7 @@ inside the container image.
 Check the `/etc/group` in the container image:
 
 ```shell
-$ cat /etc/group
+cat /etc/group
 ```
 
 You can see that uid `1000` belongs to group `50000`.
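For context, the `id` and `cat /etc/group` checks above run inside a Pod whose `securityContext` fixes the user and group IDs. A minimal sketch of such a Pod follows; the values and image are illustrative stand-ins, and the task page this diff edits defines its own:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000             # reported as uid by `id` in the container
    runAsGroup: 3000            # reported as gid by `id`
    supplementalGroups: [4000]  # extra groups attached to the process
  containers:
  - name: sec-ctx-demo
    image: busybox:1.28         # illustrative image
    command: ["sh", "-c", "sleep 1h"]
```

With Pod-level `runAsUser`/`runAsGroup` set, every process started by `kubectl exec` inherits those IDs, which is what makes the `id` output predictable.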

content/es/docs/concepts/storage/volumes.md

Lines changed: 0 additions & 8 deletions
@@ -293,8 +293,6 @@ Debes configurar FC SAN zoning para asignar y enmascarar esos (volúmenes) LUNs
 para que los hosts Kubernetes pueda acceder a ellos.
 {{< /note >}}
 
-Revisa el [ejemplo de canal de fibra](https://github.com/kubernetes/examples/tree/master/staging/volumes/fibre_channel) para más detalles.
-
 ### flocker (deprecado) {#flocker}
 
 [Flocker](https://github.com/ClusterHQ/flocker) es un administrador open-source de volúmenes de contenedor agrupado por clúster.
@@ -561,8 +559,6 @@ Esto significa que puedes pre-poblar un volumen con tu conjunto de datos y servi
 Desafortunadamente, los volúmenes ISCSI solo se pueden montar por un único consumidor en modo lectura-escritura.
 Escritores simultáneos no está permitido.
 
-Mira el [ejemplo iSCSI](https://github.com/kubernetes/examples/tree/master/volumes/iscsi) para más detalles.
-
 ### local
 
 Un volumen `local` representa un dispositivo de almacenamiento local como un disco, una partición o un directorio.
@@ -635,8 +631,6 @@ NFS puede ser montado por múltiples escritores simultáneamente.
 Debes tener tu propio servidor NFS en ejecución con el recurso compartido exportado antes de poder usarlo.
 {{< /note >}}
 
-Mira el [ ejemplo NFS ](https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs) para más información.
-
 ### persistentVolumeClaim {#persistentvolumeclaim}
 
 Un volumen `persistenceVolumeClain` se utiliza para montar un [PersistentVolume](/docs/concepts/storage/persistent-volumes/) en tu Pod. PersistentVolumeClaims son una forma en que el usuario "reclama" almacenamiento duradero (como un PersistentDisk GCE o un volumen ISCSI) sin conocer los detalles del entorno de la nube en particular.
@@ -675,8 +669,6 @@ spec:
 Asegúrate de tener un PortworxVolume con el nombre `pxvol` antes de usarlo en el Pod.
 {{< /note >}}
 
-Para más detalles, mira los ejemplos de [volumen Portworx](https://github.com/kubernetes/examples/tree/master/staging/volumes/portworx/README.md).
-
 ### projected
 
 Un volumen `projected` mapea distintas fuentes de volúmenes existentes en un mismo directorio.

content/es/docs/concepts/workloads/controllers/statefulset.md

Lines changed: 1 addition & 1 deletion
@@ -36,7 +36,7 @@ proporcione un conjunto de réplicas sin estado, como un
 
 ## Limitaciones
 
-* El almacenamiento de un determinado Pod debe provisionarse por un [Provisionador de PersistentVolume](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/README.md) basado en la `storage class` requerida, o pre-provisionarse por un administrador.
+* El almacenamiento de un determinado Pod debe provisionarse por un [Provisionador de PersistentVolume](/docs/concepts/storage/persistent-volumes/) basado en la `storage class` requerida, o pre-provisionarse por un administrador.
 * Eliminar y/o reducir un StatefulSet *no* eliminará los volúmenes asociados con el StatefulSet. Este comportamiento es intencional y sirve para garantizar la seguridad de los datos, que da más valor que la purga automática de los recursos relacionados del StatefulSet.
 * Los StatefulSets actualmente necesitan un [Servicio Headless](/docs/concepts/services-networking/service/#headless-services) como responsable de la identidad de red de los Pods. Es tu responsabilidad crear este Service.
 * Los StatefulSets no proporcionan ninguna garantía de la terminación de los pods cuando se elimina un StatefulSet. Para conseguir un término de los pods ordenado y controlado en el StatefulSet, es posible reducir el StatefulSet a 0 réplicas justo antes de eliminarlo.
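The first limitation in this hunk (each Pod's storage must come from a PersistentVolume provisioner for the requested storage class, or be pre-provisioned) is normally expressed through `volumeClaimTemplates`. A sketch follows; the `standard` storage class and all names are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web              # the headless Service the StatefulSet requires
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.k8s.io/nginx-slim:0.8   # illustrative image
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:         # one PVC per Pod, named data-web-0, data-web-1, ...
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: standard   # must map to a provisioner, or be pre-provisioned
      resources:
        requests:
          storage: 1Gi
```

Per the second limitation, deleting or scaling down this StatefulSet leaves the generated PVCs (and their data) in place.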
