include kubectl describe output #137
Conversation
Query pg_settings to capture current Postgres settings directly from the DB.
cluster-info returns a lot of sensitive information and is out of scope for the support export.
Remove the `kubectl describe crds` call and replace it with fetching the CRDs via the correct APIs. This brings CRDs into the support export output on the same level as pods, sts, etc.
Treats PGAdmin resources separately since they are independent of a PostgresCluster. Gathers all PGAdmin resources in a namespace. This is usually one, but it could be several.
Looks good to me. One leftover comment that could potentially be removed.
internal/cmd/export.go
Outdated
        cmd *cobra.Command,
    ) error {
        writeInfo(cmd, "Collecting events...")
        // list, err := clientset.CoreV1().Events(namespace).List(ctx, metav1.ListOptions{})
🤔 Can this comment be removed?
Changed from 'events' to 'CRDs'.
internal/cmd/export.go
Outdated
    writeInfo(cmd, "Running kubectl describe clusterrole...")
    err = runKubectlCommand(tw, cmd, clusterName+"/describe/clusterrole", "describe", "clusterrole", "postgres-operator")
    if err != nil {
        writeInfo(cmd, "Could not find clusterrole 'postgres-operator'. Looking for 'postgresoperator'...")
        writeInfo(cmd, "Running kubectl describe clusterrole...")
        err = runKubectlCommand(tw, cmd, clusterName+"/describe/clusterrole", "describe", "clusterrole", "postgresoperator")
    }
🔧 I think we need some comment here explaining why `postgres-operator` and `postgresoperator` are being checked.
I kinda wish we could get the clusterrole by a label/annotation, but that's gonna be dependent on the installation method, so that's also not foolproof. OK, so
a) no sure way to identify our clusterrole
b) no easy way to ask the user to put in the name
c) so we do a best effort by searching for both of the common names without assuming which it will be / what sort of env we're in -- seems reasonable
Yep, this is just a best effort to cover the majority of cases. I agree a comment would clarify why we have this extra step. I added this comment to the clusterrole and clusterrolebinding checks:

    // Resource name is generally 'postgres-operator' but in some environments
    // like OpenShift it could be 'postgresoperator'
internal/cmd/export.go
Outdated
    if err != nil {
        writeInfo(cmd, fmt.Sprintf("Error running kubectl describe clusterrole: %s", err))
    }
🤔 Should this be tucked into the logic block between 611-615? Or wait -- should we print out the err we got on 610 before overwriting it with the err on 614?
Yeah, I think there's a way to improve the flow and really show that the check for 'postgresoperator' is a special case, without making the overall structure too hard to follow at a glance.
internal/cmd/export.go
Outdated
    writeInfo(cmd, "Collecting CRDs...")
    // list, err := clientset.CoreV1().Events(namespace).List(ctx, metav1.ListOptions{})

    labelSelector := "app.kubernetes.io/name=pgo"
Is this label the same across our installers (helm/kustomize)?
Hmm... I didn't know it could be different. I will have to check.
So this was a bit of a challenge to work around. That label is not present in a few installers, so I refactored how the CRDs are filtered.
Refactor is pushed. This makes it clearer that 'postgresoperator' is a special case for certain installers.
Some installers don't label the CRDs with `app.kubernetes.io/name=pgo`, so the refactor instead filters all the CRDs for a name containing "postgres-operator.crunchydata.com".