Change names for certificate keys in secret #8462
Conversation
Welcome @aneurinprice!

Hi @aneurinprice. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: aneurinprice

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
```go
clientCaFile:  flag.String("client-ca-file", "/etc/tls-certs/ca.crt", "Path to CA PEM file."),
tlsCertFile:   flag.String("tls-cert-file", "/etc/tls-certs/tls.crt", "Path to server certificate PEM file."),
tlsPrivateKey: flag.String("tls-private-key", "/etc/tls-certs/tls.key", "Path to server certificate key PEM file."),
```
Changing these defaults is a backwards incompatible change, so we can't do it.
I understand the concern; how do you think we should approach this? I feel very strongly that we should aim to use the `kubernetes.io/tls` secret type's key names.
I agree with you that that would be ideal, but what problem are you trying to fix here, concretely?
The problem I'm trying to fix is that I cannot use cert-manager to manage the webhook secret without doing some rather nasty kustomize. I'm trying to avoid having unmanaged certificates and deploying them via shell script.
My current workaround is as follows:
```yaml
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: vpa-admission-controller
  namespace: kube-system
spec:
  isCA: false
  commonName: vpa-webhook
  secretName: vpa-tls-certs
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-clusterissuer
    kind: ClusterIssuer
    group: cert-manager.io
  dnsNames:
    - vpa-webhook.kube-system.svc
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: vertical-pod-autoscaler
  namespace: flux-system
spec:
  interval: 10m
  targetNamespace: kube-system
  sourceRef:
    kind: GitRepository
    name: kubernetes-autoscaler
  path: "./vertical-pod-autoscaler/deploy"
  prune: true
  timeout: 1m
  patches:
    - patch: |
        - op: replace
          path: /spec/template/spec/volumes
          value:
            - name: tls-certs
              secret:
                secretName: vpa-tls-certs
                items:
                  - key: tls.crt
                    path: serverCert.pem
                  - key: tls.key
                    path: serverKey.pem
                  - key: ca.crt
                    path: caCert.pem
      target:
        kind: Deployment
        name: vpa-admission-controller
        namespace: kube-system
```
My initial thought was to try and bypass this at the cert-manager level but they seemingly have no interest in allowing custom key names in the secret.
It was then I realised that VPA was not using a TLS secret or the associated key names. I decided that the fix should probably be here instead.
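For context, a `kubernetes.io/tls` Secret as cert-manager writes it is fixed to these key names, which is exactly the mismatch with the current defaults (values elided; this is the standard shape, not anything from this PR):

```yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: vpa-tls-certs
  namespace: kube-system
data:
  ca.crt: <base64-encoded CA certificate>
  tls.crt: <base64-encoded server certificate>
  tls.key: <base64-encoded private key>
```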
I'm not too familiar with how flux works, but is this using the manifests from https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/deploy/ ?
If so, I'd highly recommend not using them. We don't do a good job at communicating this, but I don't consider those production ready.
> but is this using the manifests from ...

You are correct, that is where I am deploying from.

> We don't do a good job at communicating this, but I don't consider those production ready.

Ah. I see.

Not to be ungrateful, but the deployment mechanism via bash scripts is... painfully outdated. Is there not a better way to get these manifests? Perhaps naively of me, but I would have assumed that anything in `main` would have been good to go.
> Ah. I see.
>
> Not to be ungrateful, but the deployment mechanism via bash scripts is... painfully outdated. Is there not a better way to get these manifests? Perhaps naively of me, but I would have assumed that anything in `main` would have been good to go.

You're 100% correct here. My ideal would be to have an official helm chart that we build and maintain, but at the moment there aren't many contributors to the project, so that hasn't happened yet.
Okay, cool. I'd quite like the opportunity to contribute, so perhaps that's something I can do.

In the meantime, do you think I'm good to keep my existing kustomize but point it at the `vertical-pod-autoscaler-1.4.1` tag?
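Concretely, I'd pin the flux source like this (a sketch against my existing `GitRepository` object; the `url` is assumed to be the upstream repo, so adjust for a fork):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: kubernetes-autoscaler
  namespace: flux-system
spec:
  interval: 1h
  url: https://github.com/kubernetes/autoscaler
  ref:
    tag: vertical-pod-autoscaler-1.4.1
```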
I don't necessarily see a problem with that. We're not likely to change the deploy files in a backwards-incompatible way. However, it means that if you want to change anything from what's set in those manifests, you'll need to do patches, which isn't great.
> You're 100% correct here. My ideal would be to have an official helm chart that we build and maintain, but at the moment there aren't many contributors to the project, so that hasn't happened yet.

We talked about this before and decided not to do it. Maybe we should talk about it again?
What type of PR is this?
/kind bug
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #8460
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: