image/signature/policy_eval_signedby.go (27 changes: 26 additions & 1 deletion)
@@ -6,13 +6,16 @@ import (
 	"context"
 	"errors"
 	"fmt"
+	"time"
 
 	digest "github.com/opencontainers/go-digest"
 	"go.podman.io/image/v5/internal/multierr"
 	"go.podman.io/image/v5/internal/private"
 	"go.podman.io/image/v5/manifest"
 )
 
+const importKeyTimeOut = 60 * time.Second
+
 func (pr *prSignedBy) isSignatureAuthorAccepted(ctx context.Context, image private.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
 	switch pr.KeyType {
 	case SBKeyTypeGPGKeys:
@@ -40,7 +43,8 @@ func (pr *prSignedBy) isSignatureAuthorAccepted(ctx context.Context, image priva
 	}
 
 	// FIXME: move this to per-context initialization
-	mech, trustedIdentities, err := newEphemeralGPGSigningMechanism(data)
+	// Import the keys with a 60s timeout to avoid hanging indefinitely. see issues.redhat.com/browse/OCPBUGS-57893
+	mech, trustedIdentities, err := newEphemeralGPGSigningMechanismWithTimeout(data, importKeyTimeOut)
 	if err != nil {
 		return sarRejected, nil, err
 	}
@@ -114,3 +118,24 @@ func (pr *prSignedBy) isRunningImageAllowed(ctx context.Context, image private.U
 	}
 	return false, summary
 }

+
+func newEphemeralGPGSigningMechanismWithTimeout(blobs [][]byte, timeout time.Duration) (signingMechanismWithPassphrase, []string, error) {
+	type result struct {
+		mech signingMechanismWithPassphrase
+		keys []string
+		err  error
+	}
+	done := make(chan result, 1)
+
+	go func() {
+		mech, keys, err := newEphemeralGPGSigningMechanism(blobs)
+		done <- result{mech, keys, err}
+	}()
Comment on lines +130 to +133

Member:
> the operations will keep running, and allocating file descriptors, and, eventually, CRI-O is going to run out of file descriptors and fail completely

Not just fds: this will leak the goroutine and its associated memory as well. If I had to guess, we’d run out of goroutines/memory before we hit the file limit, at least if we have a high ulimit.

If there is the possibility of a hang, then of course fixing the root cause would be best. Otherwise the operation itself would need to be safely cancelable via context, though I guess that doesn’t really work via the C interface.

I think for short-lived processes such as podman pull such a leak might be fine, but with long-running ones such as CRI-O or podman system service I don’t think this really improves anything.
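
To make the failure mode concrete, here is a minimal, self-contained toy (not the actual c/image code; hangForever is an invented stand-in for a hung gpgme import) showing how each timed-out call leaves one goroutine pinned forever:

package main

import (
	"fmt"
	"runtime"
	"time"
)

// hangForever stands in for a gpgme key import that never returns.
func hangForever() { select {} }

// withTimeout mirrors the wrapper pattern from this PR: the spawned
// goroutine is never cancelled, so a hung call leaks it (and whatever
// memory and file descriptors it holds) for the life of the process.
func withTimeout(timeout time.Duration) error {
	done := make(chan struct{}, 1)
	go func() {
		hangForever() // never returns, so nothing is ever sent
		done <- struct{}{}
	}()
	select {
	case <-time.After(timeout):
		return fmt.Errorf("timed out after %s", timeout)
	case <-done:
		return nil
	}
}

func main() {
	for i := 0; i < 100; i++ {
		_ = withTimeout(time.Millisecond)
	}
	time.Sleep(100 * time.Millisecond)
	fmt.Println("leaked goroutines:", runtime.NumGoroutine()) // ~101
}

In a long-running process such as CRI-O, that count only ever grows.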

Contributor:
> Otherwise the operation itself would need to be safely cancelable via context, though I guess that doesn’t really work via the C interface.

Purely hypothetically, importKeysFromBytes could switch from NewDataBytes to NewDataReader, where the Reader returns an error if it notices a cancellation (see the sketch after this list). But:

  • There’s no guarantee that would be enough to abort the hanging operation; it could very well be hanging while not blocking on the reader
  • We have no way to test that (if we had a reliable reproducer for the original hang, we’d probably be able to debug it)
  • It would exercise a rarely-used code path, adding risk
  • In some of the CRI-O logs, we see gpgme referring to completely unrelated file descriptors; it seems something is losing track of file descriptor ownership and either inventing FD numbers, or closing something it shouldn’t. All of those risks to CRI-O’s operation, with ~unpredictable results, would continue to exist.
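
Sketching that hypothetical (ctxReader is an invented name; gpgme’s NewDataReader is the only real API alluded to), a context-aware reader is easy to write, but per the first bullet it only helps while the C side is actually blocked in Read:

package signature // placement is illustrative

import (
	"context"
	"io"
)

// ctxReader turns context cancellation into a read error. If
// importKeysFromBytes fed such a reader to gpgme.NewDataReader instead
// of using NewDataBytes, gpgme could observe the cancellation, but only
// when the hang happens to be a blocked Read.
type ctxReader struct {
	ctx context.Context
	r   io.Reader
}

func (c *ctxReader) Read(p []byte) (int, error) {
	select {
	case <-c.ctx.Done():
		return 0, c.ctx.Err() // abort the import from the Go side
	default:
		return c.r.Read(p)
	}
}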

I continue to think the best next step is to have a gpgme expert investigate the existing logs, or recommend a debugging approach. (I wouldn’t rule out a bug in some entirely unrelated code within the CRI-O process, but the reporting of hangs directly related to signature verification is suggestive.)

Comment:
> We have no way to test that (if we had a reliable reproducer for the original hang, we’d probably be able to debug it)

I have a somewhat reliable reproducer; however, it currently requires performing a cluster upgrade. I have not been able to reproduce it by cordoning/draining/rebooting/uncordoning single nodes (yet). If I can be of assistance, let me know.

Member:
> (I wouldn’t rule out a bug in some entirely unrelated code within the CRI-O process, but the reporting of hangs directly related to signature verification is suggestive.)

podman pull executes the same code path for verification, right? Do we have any reports of a podman pull hanging? I at least am unaware of such a bug, but that doesn’t mean much either.

One quick thing to note, since we have been bitten by this before: is gpgme thread-safe? By that I mean, can the context be created on one thread and other functions such as import then be called on another thread? I see no LockOSThread() calls anywhere, which means Go can move the goroutine to another OS thread between C function calls. In my experience that kind of issue is very rare and very hard to reproduce.
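
For reference, if per-thread state did turn out to matter, the usual Go-side mitigation is to pin the goroutine around the cgo sequence. A minimal sketch, assuming a hypothetical helper (withLockedOSThread is invented, not part of the bindings):

package signature // placement is illustrative

import "runtime"

// withLockedOSThread runs f with the calling goroutine pinned to one OS
// thread, so per-thread C state (errno, thread-local storage) seen by one
// cgo call is still there for the next. Whether gpgme actually requires
// this is exactly the open question in this thread.
func withLockedOSThread(f func() error) error {
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()
	return f()
}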

Member:
https://www.gnupg.org/documentation/manuals/gpgme/Multi-Threading.html seems to suggest there are quite a few things that need to be taken care of, and I am not sure how much the Go bindings account for them.

Member Author:
@berendiwema Thanks! Could you share the steps you used to reproduce it during the upgrade? I have the resources to run a cluster upgrade, so I think I should be able to reproduce it.

Contributor:
> https://www.gnupg.org/documentation/manuals/gpgme/Multi-Threading.html seems to suggest there are quite a few things that need to be taken care of

Thanks for the pointer; I’m not sure if, or when last, I looked at that.

AFAICT we are following all of that, except for the strerror_r requirement. proglottis/gpgme#44 should fix that.

Contributor (@mtrmac, Nov 4, 2025):
> One quick thing to note, since we have been bitten by this before: is gpgme thread-safe? By that I mean, can the context be created on one thread and other functions such as import then be called on another thread? I see no LockOSThread() calls anywhere, which means Go can move the goroutine to another OS thread between C function calls.

(I didn’t think that could be an issue, since all our use of gpgme is essentially single-threaded, but) there might well be something there: proglottis/gpgme#45. I’d appreciate an independent look at the errno part of that report.

Comment:
@QiWang19 Really nothing special. There are several clusters using signature validation on all images, and just upgrading a cluster causes hangs in some cases. It might be the amount of workload on a cluster, or just some bad luck within the infrastructure, causing gpgme to hang.


+
+	select {
+	case <-time.After(timeout):
+		return nil, nil, fmt.Errorf("GPG/OpenPGP key import timed out after %s", timeout)
+	case r := <-done:
+		return r.mech, r.keys, r.err
+	}
+}