Updated the isSignatureAuthorAccepted to use timeout to prevent indefinite hanging
#421
Conversation
Updated the isSignatureAuthorAccepted to use timeout to prevent indefinite hanging

Since the root cause of indefinite hanging during GPG verification operations (https://issues.redhat.com/browse/OCPBUGS-57893) is currently unknown, add a timeout workaround to mitigate the impact. This prevents CRI-O image pulls with signature verification from hanging indefinitely.

Signed-off-by: Qi Wang <[email protected]>
✅ A new PR has been created in buildah to vendor these changes: containers/buildah#6456

@mtrmac PTAL
mtrmac left a comment:
Thanks!
I don’t see how this substantially helps — instead of pulls blocking indefinitely, the operations will keep running, and allocating file descriptors, and, eventually, CRI-O is going to run out of file descriptors and fail completely. Is that better?
On the implementation aspect of this, if we had to have this, I’d prefer it to be limited to mechanism_gpgme.go.
```go
go func() {
	mech, keys, err := newEphemeralGPGSigningMechanism(blobs)
	done <- result{mech, keys, err}
}()
```
the operations will keep running, and allocating file descriptors, and, eventually, CRI-O is going to run out of file descriptors and fail completely
Not just the fds: this will leak the goroutine and associated memory as well. If I had to guess, we'd run out of goroutines/memory before we hit the file limit, at least with a high ulimit.
If there is the possibility of a hang, then of course fixing the root cause would be best. Otherwise the operation itself would need to be safely cancelable via context, though I guess that doesn't really work via the C interface.
I think for short-lived processes such as podman pull such a leak might be fine, but with long-running ones such as CRI-O or podman system service I don't think this really improves anything.
Otherwise the operation itself would need to be safely cancelable via context though I guess that doesn't really work via the c interface.
Purely hypothetically, importKeysFromBytes could switch from NewDataBytes to NewDataReader where the Reader returns an error if it notices a cancellation. But
- There’s no guarantee that would be enough to abort the hanging operation; it could very well be hanging while not blocked on the reader
- We have no way to test that (if we had a reliable reproducer for the original hang, we’d probably be able to debug it)
- It would exercise a rarely-used code path, adding risk
- In some of the CRI-O logs, we see gpgme referring to completely unrelated file descriptors; it seems something is losing track of file descriptor ownership and either inventing FD numbers, or closing something it shouldn’t. All of those risks to CRI-O’s operation, with ~unpredictable results, would continue to exist.
I continue to think the best next step is to have a gpgme expert investigate the existing logs, or recommend a debugging approach. (I wouldn’t rule out a bug in some entirely unrelated code within the CRI-O process, but the reporting of hangs directly related to signature verification is suggestive.)
We have no way to test that (if we had a reliable reproducer for the original hang, we’d probably be able to debug it)
I have a somewhat reliable reproducer, however it requires performing a cluster upgrade currently. I have not been able to reproduce it by cordoning/draining/rebooting/uncordoning single nodes (yet). If I can be of assistance let me know.
(I wouldn’t rule out a bug in some entirely unrelated code within the CRI-O process, but the reporting of hangs directly related to signature verification is suggestive.)
podman pull executes the same verification code path, right? Do we have any reports of a podman pull hanging? I at least am unaware of such a bug, but that doesn't mean much either.
One quick thing to note, since we have been bitten by this before: is gpgme thread-safe? And by that I mean, can the context be created on one thread and the other functions such as import be called on another thread? I see no LockOSThread() calls anywhere, which means Go can move the goroutine to another thread between C function calls. In my experience that kind of bug is very rare and very hard to reproduce.
https://www.gnupg.org/documentation/manuals/gpgme/Multi-Threading.html seems to suggest there are quite a few things that need to be taken care of, and I am not sure how much the Go bindings account for them.
@berendiwema Thanks! Could you share the steps you used to reproduce it during the upgrade? I have the resources to run a cluster upgrade, I think I should be able to reproduce it.
https://www.gnupg.org/documentation/manuals/gpgme/Multi-Threading.html seems to suggest there are quite a few things that need to be taken care of
Thanks for the pointer, I’m not sure if, or when last, I looked at that.
AFAICT we are following all of that, except for the strerror_r requirement. proglottis/gpgme#44 should fix that.
One quick thing to note, since we have been bitten by this before: is gpgme thread-safe? And by that I mean, can the context be created on one thread and the other functions such as import be called on another thread? I see no LockOSThread() calls anywhere, which means Go can move the goroutine to another thread between C function calls.
(I didn’t think that could be an issue, since all our use of gpgme is essentially single-threaded, but) there might well be something there: proglottis/gpgme#45. I’d appreciate an independent look at the errno part of that report.
@QiWang19 Really nothing special. There are several clusters using signature validation on all images, and just upgrading a cluster causes hangs in some cases. It might be the amount of workload on a cluster, or just some bad luck within the infrastructure, causing gpgme to hang.
There are viable avenues to pursue investigating the root cause (e.g. proglottis/gpgme#48 (comment) as a minimal public reference), so I don’t think we want to commit to this workaround, at least for now.