feat: optimize memory usage in image load with Docker client integration #21103
base: master
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: yankay. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing
Can one of the admins verify this patch?
Hi @medyagh
@yankay So with this we don't need to save the image before loading?
This will not help the case when we need to load into multiple clusters:
Where is the image saved? How is it cleaned up? Regardless, loading the entire image into memory does not make sense in any case, so this looks like a good change.
Thank you @yankay, do you mind putting a before/after comparison for this PR in the description, to prove the effectiveness?
/ok-to-test
Hi @nirs The image is saved at
Thanks @medyagh. Memory usage has indeed been reduced, but the execution time has increased significantly. I'll investigate the cause. This resulted in multiple repeated So the PR was changed to another solution: using the Docker client directly.
Signed-off-by: Kay Yan <[email protected]>
kvm2 driver with docker runtime
Times for minikube start: 54.2s 53.0s 52.8s 52.7s 53.2s
Times for minikube ingress: 15.6s 16.1s 14.6s 15.2s 16.1s

docker driver with docker runtime
Times for minikube (PR 21103) start: 24.6s 23.6s 23.0s 24.5s 26.9s
Times for minikube ingress: 13.3s 13.8s 12.3s 12.3s 13.8s

docker driver with containerd runtime
Times for minikube start: 20.9s 22.6s 21.3s 24.4s 22.6s
Times for minikube ingress: 22.8s 22.8s 22.8s 22.9s 38.8s
@yankay Results look very good! Can you compare with the split process?
We still have the issue of cleanup: keeping the image in the cache directory is not good, since the user does not know that the image was added and nobody will clean it up. But I think this is not an issue introduced by your change; it was already there.
Thanks, the performance information has been added to the PR description :-)
Fix: #17785
This issue occurs because bufferedOpener attempts to load the entire layer into memory; see https://github.com/google/go-containerregistry/blob/main/pkg/v1/daemon/image.go#L71
This PR addresses the high memory consumption during minikube image load operations by using the Docker client directly.
Comparative Analysis: Minikube Image Load Performance
Performance Metrics Summary
The comparison covers two approaches:
- `minikube image load` (`./out/minikube-linux-amd64`)
- `docker save` + `minikube image load`