
Conversation

@QiWang19
Member

Since the root cause of indefinite hanging during GPG verification operations (https://issues.redhat.com/browse/OCPBUGS-57893) is currently unknown, add a timeout workaround to mitigate the impact. This prevents CRI-O image pulls with signature verification from hanging indefinitely.

…efinite hanging


Signed-off-by: Qi Wang <[email protected]>
@github-actions github-actions bot added the image (Related to "image" package) label on Oct 28, 2025
podmanbot pushed a commit to podmanbot/buildah that referenced this pull request Oct 28, 2025
@podmanbot

✅ A new PR has been created in buildah to vendor these changes: containers/buildah#6456

@QiWang19
Member Author

@mtrmac PTAL

Contributor

@mtrmac mtrmac left a comment


Thanks!

I don’t see how this substantially helps — instead of pulls blocking indefinitely, the operations will keep running, and allocating file descriptors, and, eventually, CRI-O is going to run out of file descriptors and fail completely. Is that better?


On the implementation aspect of this, if we had to have this, I’d prefer it to be limited to mechanism_gpgme.go.

Comment on lines +130 to +133
go func() {
mech, keys, err := newEphemeralGPGSigningMechanism(blobs)
done <- result{mech, keys, err}
}()
Member


the operations will keep running, and allocating file descriptors, and, eventually, CRI-O is going to run out of file descriptors and fail completely

Not just the fds; this will leak the goroutine and associated memory as well. If I had to guess, we would run out of goroutines/memory before we hit the file limit, at least with a high ulimit.

If there is the possibility of a hang, then of course fixing the root cause would be best. Otherwise the operation itself would need to be safely cancelable via context, though I guess that doesn't really work via the C interface.

I think for short-lived processes such as podman pull such a leak might be fine, but with long-running ones such as cri-o or podman system service I don't think this really improves anything.
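For illustration, a minimal, self-contained sketch of the kind of timeout wrapper under discussion (names such as slowGPGMECall and the durations are made up; this is not the PR's actual code). Once the select gives up, the spawned goroutine keeps running, together with whatever the C library has allocated underneath it:

package main

import (
	"errors"
	"fmt"
	"time"
)

type result struct {
	value string
	err   error
}

// slowGPGMECall stands in for a GPGME operation that never returns.
func slowGPGMECall() (string, error) {
	time.Sleep(time.Hour)
	return "mechanism", nil
}

func withTimeout(timeout time.Duration) (string, error) {
	done := make(chan result, 1) // buffered, so the goroutine can always send
	go func() {
		v, err := slowGPGMECall()
		done <- result{v, err} // still executed long after the caller gave up
	}()
	select {
	case r := <-done:
		return r.value, r.err
	case <-time.After(timeout):
		// The goroutine is abandoned, not stopped: it, its memory, and any
		// file descriptors opened under it stay around until the process exits.
		return "", errors.New("timed out waiting for GPG verification")
	}
}

func main() {
	_, err := withTimeout(100 * time.Millisecond)
	fmt.Println(err)
}

In a long-running daemon, each timed-out verification adds one more abandoned goroutine, which is the accumulation being described above.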

Contributor


Otherwise the operation itself would need to be safely cancelable via context though I guess that doesn't really work via the c interface.

Purely hypothetically, importKeysFromBytes could switch from NewDataBytes to NewDataReader where the Reader returns an error if it notices a cancellation. But

  • There’s no guarantee that would be enough to abort the hanging operation; it could very well be hanging while not blocking on the reader
  • We have no way to test that (if we had a reliable reproducer for the original hang, we’d probably be able to debug it)
  • It would exercise a rarely-used code path, adding risk
  • In some of the CRI-O logs, we see gpgme referring to completely unrelated file descriptors; it seems something is losing track of file descriptor ownership and either inventing FD numbers, or closing something it shouldn’t. All of those risks to CRI-O’s operation, with ~unpredictable results, would continue to exist.
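For reference, the hypothetical reader swap might look roughly like the sketch below; ctxReader and newCtxReader are made-up names, not containers/image code, and all the caveats above apply. The idea is that such a reader could be handed to gpgme's NewDataReader instead of using NewDataBytes, so a cancelled context surfaces as a read error:

package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
)

// ctxReader serves the key blob but fails as soon as its context is cancelled.
type ctxReader struct {
	ctx context.Context
	r   io.Reader
}

func (c *ctxReader) Read(p []byte) (int, error) {
	if err := c.ctx.Err(); err != nil {
		return 0, err // report the cancellation instead of more key data
	}
	return c.r.Read(p)
}

func newCtxReader(ctx context.Context, blob []byte) io.Reader {
	return &ctxReader{ctx: ctx, r: bytes.NewReader(blob)}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // already cancelled: the very first Read fails
	_, err := io.ReadAll(newCtxReader(ctx, []byte("key material")))
	fmt.Println(err) // context canceled
}

As the first bullet notes, this only helps if GPGME is actually blocked reading from the data object; a hang elsewhere in the library would not be interrupted.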

I continue to think the best next step is to have a gpgme expert investigate the existing logs, or recommend a debugging approach. (I wouldn’t rule out a bug in some entirely unrelated code within the CRI-O process, but the reporting of hangs directly related to signature verification is suggestive.)


We have no way to test that (if we had a reliable reproducer for the original hang, we’d probably be able to debug it)

I have a somewhat reliable reproducer; however, it currently requires performing a cluster upgrade. I have not been able to reproduce it by cordoning/draining/rebooting/uncordoning single nodes (yet). If I can be of assistance, let me know.

Member


(I wouldn’t rule out a bug in some entirely unrelated code within the CRI-O process, but the reporting of hangs directly related to signature verification is suggestive.)

Podman pull executes the same verification code path, right? Do we have any reports of a podman pull hanging? I at least am unaware of such a bug, but that doesn't mean much either.

One quick thing to note, since we have been bitten by this before: is gpgme thread-safe? By that I mean, can the context be created on one thread and another function such as import be called on a different thread? I see no LockOSThread() calls anywhere there, which means Go can move the goroutine to another OS thread between C function calls. In my experience that kind of problem is very rare and very hard to reproduce.
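For concreteness, OS-thread pinning in Go looks roughly like the sketch below. This is illustrative only; verifyPinned, doCreateContext, and doImport are made-up names, and it is not something the gpgme bindings are known to do (that is the question being raised here):

package main

import "runtime"

// verifyPinned keeps the whole verification on a single OS thread, so the
// GPGME context is created and used from the same thread regardless of how
// the Go scheduler would otherwise migrate the goroutine between cgo calls.
func verifyPinned(doCreateContext, doImport func()) {
	runtime.LockOSThread()         // pin this goroutine to its current OS thread
	defer runtime.UnlockOSThread() // release the pin once verification is done

	doCreateContext() // e.g. the cgo call that creates the GPGME context
	doImport()        // e.g. the cgo call that imports keys into that context
}

func main() {
	verifyPinned(func() {}, func() {})
}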

Member


https://www.gnupg.org/documentation/manuals/gpgme/Multi-Threading.html seems to suggest there are quite a few things that need to be taken care of, and I am not sure how much the Go bindings account for these things.

Member Author


@berendiwema Thanks! Could you share the steps you used to reproduce it during the upgrade? I have the resources to run a cluster upgrade, so I think I should be able to reproduce it.

Contributor


https://www.gnupg.org/documentation/manuals/gpgme/Multi-Threading.html seems to suggest there are quite a few things that need to be taken care of

Thanks for the pointer; I'm not sure if, or when last, I looked at that.

AFAICT we are following all of that, except for the strerror_r requirement. proglottis/gpgme#44 should fix that.

Contributor

@mtrmac mtrmac Nov 4, 2025


One quick thing to note since we have been bitten by this before is gpgme thread safe? And by that I mean can the context be created on one thread and the the other function such as import be called on another thread? Because I see no LockOSThread() calls there anywhere which means go can move to another thread between each C function call.

(I didn’t think that could be an issue — all our use of gpgme is essentially single-threaded — but) there might well be something there: proglottis/gpgme#45. I’d appreciate an independent look at the errno part of that report.


@QiWang19 Really nothing special. There are several clusters using signature validation on all images, and just upgrading a cluster causes hangs in some cases. It might be the amount of workload on a cluster or just some bad luck within the infrastructure causing gpgme to hang.

@mtrmac
Contributor

mtrmac commented Nov 6, 2025

There are viable avenues to pursue investigating the root cause (e.g. proglottis/gpgme#48 (comment) as a minimal public reference), so I don’t think we want to commit to this workaround, at least for now.

@mtrmac mtrmac closed this Nov 6, 2025

Labels

image Related to "image" package


5 participants