
Conversation

@mellena1

Great chart/tool, thanks for making this!

I use a GitOps approach for my cluster deployments, and didn't want to commit the claimToken in plain text to a git repo. So I decided to break it out into a secret!

This PR allows the end user to either specify the name of an existing Secret, or still pass the plain-text token, in which case the Secret will be created for them.
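
For illustration, the two modes would look roughly like this in values.yaml (a sketch; the `name`, `secretKey`, and `value` fields mirror the claimToken structure discussed later in this review, and the token itself is a placeholder):

```yaml
# Option A: reference a Secret you already manage (e.g. via sealed-secrets),
# so no token ever lands in the git repo.
claimToken:
  name: "plex-claim"   # name of the existing Secret
  secretKey: "token"   # key within that Secret

# Option B: pass the token in plain text and let the chart create the Secret.
claimToken:
  secretKey: "token"
  value: "claim-xxxxxxxxxx"   # from plex.tv/claim
```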

Feel free to make any suggestions and I can make changes or you can make any necessary changes yourself.

@billimek
Contributor

@mellena1 this is a good PR, and it's good to have more options.

As an aside, if you are using GitOps (I am too), take a look at the vault-secrets-operator, which syncs secrets from a Vault instance into Kubernetes Secrets.

I use this for the Plex claim token in my setup.
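
For a rough idea of what that looks like, a VaultSecret resource along these lines (a sketch only; the apiVersion/kind follow ricoberger's vault-secrets-operator, while the resource name and Vault path here are purely illustrative):

```yaml
apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: plex-claim-token
spec:
  # Illustrative Vault path; the operator renders a regular Kubernetes
  # Secret from the key/value pairs stored at this path.
  path: secret/plex
  type: Opaque
```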

@mellena1
Author

Hmm interesting. I'll have to check it out. I've been using the sealed-secrets project for a while now. I'll have to see if it's worth switching over.

@mellena1 changed the title from "Turn the ClaimToken in a K8s Secret" to "Turn the ClaimToken into a K8s Secret" on Oct 21, 2019
@denebeim commented Nov 6, 2019

The claim token is only good for 4 minutes, so why save it at all? Just curious, but it seems like it should be passed on the command line or in the API call instead.

@simplyzee self-requested a review Apr 16, 2020 08:55
@simplyzee self-assigned this Apr 16, 2020

```go
// generatePod builds the Pod spec for a transcode job, copying the
// PMS environment into the transcoder container.
func generatePod(cwd string, env []string, args []string) *corev1.Pod {
	envVars := toCoreV1EnvVar(env)
	// New in this PR: also pull in env vars sourced from the claim-token Secret.
	envVars = addFromSecretsToCoreV1EnvVar(envVars)
```
Owner

I don't think these values need to be set on the transcoder pods as the transcoder doesn't have to authenticate with PMS, as far as I am aware? 🤔 perhaps I am wrong though! It's been a while...


```diff
 # Override this with the plex claim token from plex.tv/claim
-claimToken: ""
+claimToken:
```
Owner

In order to not break existing users, how about we leave this field as is, and add a new field:

```yaml
# Configure the Plex 'claim token' (plex.tv/claim) using a manually created
# Secret resource stored in the cluster.
# Only one of 'claimToken' or 'claimTokenSecret' may be specified.
# 'claimToken' is deprecated in favour of 'claimTokenSecret' to avoid persisting
# sensitive information into Helm releases.
claimTokenSecret:
  ## Name of a Secret resource containing a Plex 'claim' token.
  name: ""
  ## Key in the Secret resource of the claim token
  key: "token"
```

```yaml
  heritage: {{ .Release.Service }}
type: Opaque
data:
  {{ .Values.claimToken.secretKey }}: {{ .Values.claimToken.value | b64enc | quote }}
```
Owner

Once the config for this is split into claimToken and claimTokenSecret, can we change this Secret resource so that it doesn't use the secretKey or name fields from the claimToken structure, and instead hardcodes them, i.e. the name to the 'fullname' template and the key to some static value (specifically in the case where a user specifies claimToken and not claimTokenSecret)?
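
Something along those lines, perhaps (a sketch assuming claimToken reverts to a plain string once claimTokenSecret exists):

```yaml
{{- if .Values.claimToken }}
apiVersion: v1
kind: Secret
metadata:
  # Name is hardcoded to the chart's fullname template, not user-configurable.
  name: {{ template "fullname" . }}
  labels:
    heritage: {{ .Release.Service }}
type: Opaque
data:
  # Static key; consumers always read "token".
  token: {{ .Values.claimToken | b64enc | quote }}
{{- end }}
```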

```diff
 | `persistence.transcode.storageClass` | Type of persistent volume claim | `-` |
 | `persistence.data.size` | Size of persistent volume claim | `40Gi` |
-| `persistence.data.existingClaim`| Use an existing PVC to persist data | `nil` |
+| `persistence.data.claimName`| Use an existing PVC to persist data | `nil` |
```
Owner

👍 didn't realise these were incorrect 👀
