enhancement(kubernetes_logs source): Allow disabling listwatching of namespaces #23601
base: master
Conversation
The namespaces cache will be an empty store in this case. Down the line, all of the code _should_ just work; for instance, it will emit an error log if you try to access namespace labels.
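To illustrate the intended behaviour, here is a minimal Rust sketch (the NamespaceCache type and log message are hypothetical stand-ins, not Vector's actual code): when the cache is never populated, lookups simply miss and the caller logs an error instead of crashing.

```rust
use std::collections::HashMap;

/// Hypothetical stand-in for the namespace metadata cache; not Vector's real type.
#[derive(Default)]
struct NamespaceCache {
    labels: HashMap<String, HashMap<String, String>>,
}

impl NamespaceCache {
    /// Look up labels for a namespace. With list-watching disabled the cache
    /// stays empty, so this misses and we log an error rather than failing hard.
    fn labels_for(&self, namespace: &str) -> Option<&HashMap<String, String>> {
        let found = self.labels.get(namespace);
        if found.is_none() {
            eprintln!("failed to enrich event with namespace labels: no metadata cached for {namespace}");
        }
        found
    }
}

fn main() {
    // With namespace list-watching disabled, the cache is constructed empty and never filled.
    let cache = NamespaceCache::default();
    assert!(cache.labels_for("kube-system").is_none());
}
```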
This seems to work now!
Some suggestions to improve readability. Let me know if you have any questions about my feedback!
Review suggestions on website/cue/reference/components/sources/generated/kubernetes_logs.cue (resolved).
Co-authored-by: Rosa Trieu <[email protected]>
Great feedback, thank you!
Summary
On clusters with high numbers of namespaces, Vector's memory usage can grow rather large. Additionally, the load that Vector puts on kube-apiserver can be rather heavy when rolling the Vector daemonset or when many Vector pods are otherwise restarting. This change allows you to opt out of having namespace labels available for enrichment. I believe that if you try to use namespace labels with namespace listwatching disabled, Vector will gracefully log an error rather than crash, which feels like a good way to handle this scenario.
This is my first contribution to Vector. I think I've read through the guidelines, but I'm happy to make any changes you think would be best; this is just my guess at a good way to solve this issue for us.
Vector configuration
There's quite a significant amount of config in my test setup in our staging clusters. I can try to trim it down and get a test setup going if we'd be more comfortable with this change being tested that way first.
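For reference, a minimal sketch of what enabling this might look like; the option name below is a hypothetical placeholder for illustration, not necessarily the name this PR ships:

```yaml
sources:
  k8s_logs:
    type: kubernetes_logs
    # Hypothetical option name: skip the namespace list/watch entirely, trading
    # namespace-label enrichment for lower memory and kube-apiserver load.
    namespace_listwatch_enabled: false
```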
How did you test this PR?
I backported these changes (technically just 6389f2f0c16a157579e2cfe4662d280e8688608a; the changes I made after that have not been deployed anywhere) to 0.37.1 and built/deployed images based on this into our staging environment. This cut the memory usage of Vector pods in half in a cluster with many tens of thousands of namespaces. I have not tested the version of this based on master. I'm happy to get a test setup going to try this out if we really want to, but the clusters I have available to me now would need to be updated too significantly for me to want to bother with it at the moment.

Change Type
Is this a breaking change?
Does this PR include user facing changes?
If no, a maintainer will apply the no-changelog label to this PR.

References
Implements: #22990
Notes
- Use @vectordotdev/vector to reach out to us regarding this PR.
- For the git pre-push hook, please see this template.
- To run checks locally:
  - cargo fmt --all
  - cargo clippy --workspace --all-targets -- -D warnings
  - cargo nextest run --workspace (alternatively, you can run cargo test --all)
- If your PR is behind master, update it with git merge origin master and git push.
- If this PR changes dependencies (Cargo.lock), please run cargo vdev build licenses to regenerate the license inventory and commit the changes (if any). More details here.