
Allow configurable signing algorithms #74

Open
ret2libc opened this issue Jan 9, 2024 · 22 comments · May be fixed by #424
Labels
enhancement New feature or request

Comments

@ret2libc
Contributor

ret2libc commented Jan 9, 2024

Description

@tetsuo-cpp filed similar issues under Cosign and Rekor. We realise there's a lot of overlap in maintainers, but wanted to make sure that we discuss each project that we plan to touch. Apologies if this feels a bit spammy.

Hi there! At Trail of Bits, we're looking at potentially implementing part of the Configurable Crypto Algorithms proposal (specifically Phase 1). We wanted to float this idea to each of the relevant Sigstore sub-projects so we can hash out the details in a more concrete way.

Across the Sigstore stack, we default to using ECDSA for signatures and SHA256 for hashing. There's more detail in the linked proposal, but there are a number of motivations for wanting to customise the signatures that are generated, including paving the way for post-quantum signatures. The proposed design includes having a "supported algorithm" registry (perhaps this can go in the Protobuf specs) that enumerates the approved signature/hash algorithm combinations. We specifically don't want to allow arbitrary mixing and matching of signature and hash algorithms, to avoid some of the security pitfalls listed in the proposal.

For sigstore-go, we want to support this set of approved signing algorithms. This can be as simple as a --signing-algorithm flag. As a first pass, we'd like to support ECDSA with SHA256 and SHA384 as well as EdDSA, with ECDSA-SHA256 remaining as the default.

@ret2libc ret2libc added the enhancement New feature or request label Jan 9, 2024
@steiza
Member

steiza commented Jan 9, 2024

Hello! I think this should be relatively straightforward as sigstore-go today just supports verification, not signing. We can add additional signing algorithm support as needed to things like pkg/verify/signature.go and pkg/root/trusted_root.go.

@woodruffw
Member

xref: sigstore/protobuf-specs#189 will ultimately standardize the full registry of signing algorithms, but doesn't block initial work here (since we know a couple of ECDSA variants that'll need verification support already).

@tetsuo-cpp

tetsuo-cpp commented Jan 14, 2024

One requirement of this work is that verifying clients will need to take information about what signing algorithm was used to generate the signature since we can no longer assume SHA-256.

I think that this information can be part of the Sigstore bundle's VerificationMaterials, but each client including sigstore-go will require a --signing-algorithm flag for the non-bundle verification path. In the bundle spec, it might make sense to make this field optional and have clients fall back to ECDSA-SHA256 to avoid breaking changes.

Does this seem reasonable to you: @steiza @haydentherapper?

@ret2libc
Contributor Author

{
  "mediaType": "application/vnd.dev.sigstore.bundle+json;version=0.2",
  "verificationMaterial": {
     ...
  },
  "messageSignature": {
    "messageDigest": {
      "algorithm": "SHA2_256",
      "digest": "Xd/81x5QNKXw3pdWCvhoA2H4aS7Yd9F8hxlfyUtUVwg="
    },
    "signature": "MEQCIEeH9Ktf5pkHgdksH7mJCm4Jl66LemacsZHQ0VLKpgPOAiBsSM+neKdZE3l3AhxII1RuQOSKEk+TZRCspMjhvPxB0g=="
  }
}

@tetsuo-cpp are you talking about the signature algorithm or the hash one? I think the hash algorithm is already included in messageDigest in the bundle file. For the signature algorithm, instead, you could probably already deduce it from the public key included in the certificate (which is in the bundle); though even adding a "SignatureAlgorithm" field to the bundle might not be ideal?

I believe we want the signature algorithm to come from an external policy, because you can't trust the file itself. If I sign a file with ECDSA-SHA256 today and ten years from now ECDSA-SHA256 is broken, you want to make sure that the signatures are from e.g. ED25519 and not ECDSA-SHA256, regardless of what the file itself says.

@tetsuo-cpp

tetsuo-cpp commented Jan 15, 2024

@tetsuo-cpp are you talking about the signature algorithm or the hash one? I think the hash algo is already included in messageDigest in the bundle file.

I don't think we can use that because you can also get a DSSE envelope in the bundle. From the spec:

        oneof content {
                dev.sigstore.common.v1.MessageSignature message_signature = 3 [(google.api.field_behavior) = REQUIRED];
                // A DSSE envelope can contain arbitrary payloads.
                // Verifiers must verify that the payload type is a
                // supported and expected type. This is part of the DSSE
                // protocol which is defined here:
                // <https://github.com/secure-systems-lab/dsse/blob/master/protocol.md>
                io.intoto.Envelope dsse_envelope = 4 [(google.api.field_behavior) = REQUIRED];
        }

Since we need this information in both cases, I figured it made sense to add it into the bundle (and we should wire any code that uses the message digest algorithm to refer to the signing algorithm instead). That has the side-effect of ensuring that the bundles don't refer to a signing/hash combination that isn't valid in the Sigstore registry.

I believe we want the signature algorithm to come from an external policy, because you can't trust the file itself. If I sign a file with ECDSA-SHA256 today and in ten years from now ECDSA-SHA256 is broken, you want to make sure that the signature are from e.g. ED25519 and not ECDSA-SHA256, even though the file says so.

To me, this is two separate things: knowing what the signature algorithm is and restricting what signature algorithms are allowed. I'm proposing that the bundle format will document what signing algorithm was used to produce the signature so that a client knows how to verify it. That leaves the door open for clients to toggle what signing algorithms are allowed so that you can restrict this similar to how we allow them to be restricted in Fulcio and Rekor via --client-signing-algorithms. What do you think?

@ret2libc
Contributor Author

cc @woodruffw I think, similarly to Rekor, we want to limit the flexibility to the regular "hashedrekord" types, which I'm not sure include dsse_envelope. If that's the case, for the dsse_envelope case we can just assume the default is whatever is used right now, while the flexibility is allowed only in the case of message_signature.

@haydentherapper
Contributor

Can you use the envelope hash from the Rekor entry?

@ret2libc
Contributor Author

Working on this again, here's the plan for adding support for different signing algorithms:

  • extend VerifierConfig
    type VerifierConfig struct { // nolint: revive
    to include LoadOptions, so that verify.NewSignedEntityVerifier can be called with custom signer/verifier load options to enable specific configurations (e.g. WithEd25519ph or WithRSAPSSOpts).
  • modify VerificationMaterial https://github.com/sigstore/protobuf-specs/blob/8c3f6a98603a0bed9ba2aad5200a77e780bcc247/protos/sigstore_bundle.proto#L59 to include the signing algorithm and options used to sign the artifact. It's probably enough to have a PublicKeyDetails field (https://github.com/sigstore/protobuf-specs/blob/main/protos/sigstore_common.proto#L62) and from that we can determine all the options, I think.
  • modify verify.VerificationContent
    type VerificationContent interface {
        CompareKey(any, root.TrustedMaterial) bool
        ValidAtTime(time.Time, root.TrustedMaterial) bool
        Certificate() *x509.Certificate
        PublicKey() PublicKeyProvider
    }
    to include a GetSigningAlgorithm function to return either a default value (ECDSA SHA-256) or the PublicKeyDetails value in the bundle.
  • modify getSignatureVerifier
    func getSignatureVerifier(verificationContent VerificationContent, tm root.TrustedMaterial) (signature.Verifier, error) {
        if leafCert := verificationContent.Certificate(); leafCert != nil {
            // TODO: Inspect certificate's SignatureAlgorithm to determine hash function
            return signature.LoadVerifier(leafCert.PublicKey, crypto.SHA256)
        } else if pk := verificationContent.PublicKey(); pk != nil {
            return tm.PublicKeyVerifier(pk.Hint())
        }
        return nil, fmt.Errorf("no public key or certificate found")
    }
    to use GetSigningAlgorithm and load the right verifier.

Please let me know if this sounds good or I should think about other steps.
cc @woodruffw @haydentherapper @bobcallaway

@haydentherapper
Contributor

Tagging other maintainers @codysoyland @steiza @phillmv @kommendorkapten for visibility.

@kommendorkapten
Member

Thanks for putting this together!

I have one concern about modifying the VerificationMaterial in the protobuf-specs, as that would lead to in-band communication of the signature algorithm, which we avoided on purpose in the protobuf specs for the bundle (but not for trusted_root.json, which is how public keys are distributed for the trusted infrastructure).
For keys used to sign artifacts, the algorithm is required to be known to the verifier (typically via a known key and algorithm per policy).

How about the verifier configures the known signature algorithm when preparing the verifier? See

func trustedPublicKeyMaterial(pk crypto.PublicKey) *root.TrustedPublicKeyMaterial {
for an example. The example code uses LoadECDSAVerifier, but the verifier should know the algorithm used for the key, and so can load the correct one.
Here I can see a helper function in sigstore-go that takes a public key and the algorithm to use, and constructs the appropriate verifier; this would simplify the plumbing and construction of the trusted material the verifier has to perform.

Then in getSignatureVerifier, which you linked above, when calling tm.PublicKeyVerifier(pk.Hint()), the already initialized verifier can be returned.

For the other scenario (using a certificate), the TODO mentions that we can just inspect the certificate to get the algorithm to use.

@kommendorkapten
Member

To follow up, if a helper function is to be created, there is a lot of code already in the sigstore/signature package for this: https://github.com/sigstore/sigstore/blob/main/pkg/signature

@ret2libc
Contributor Author

Then in getSignatureVerifer that you linked above, when calling tm.PublicKeyVerifier(pk.Hint()), the already initialized verifer can be returned.

Sounds good to me!

For the other scenario (using certificate), the TODO mentions that we can just inspect the certificate to get the algorithm to use.

Actually, I think SignatureAlgorithm as suggested in the TODO is wrong. What we want is a verifier that has the correct options for the subject public key; however, Signature Algorithm specifies the algorithm/hash that was used to sign the certificate itself (by Fulcio). So, for example, even if we use an ed25519 public key to sign a blob (with ed25519-ph), the Signature Algorithm is still ecdsa-with-SHA256.

Neither the public key itself nor the cert has enough information to determine how to use the subject public key. We could have a default algorithm for each key that can be automatically determined just from the key type (e.g. ed25519 -> PKIX_ED25519_PH, RSA 2048 -> PKIX_RSA_PSS_2048_SHA256, ECDSA P256 -> PKIX_ECDSA_P256_SHA_256, etc.), but that means we can't generate the correct verifier in case we want special options (e.g. RSA PSS or other special algorithms). At the moment, though, apart from RSA (PKCS1V15 vs PSS) and ED25519 (Pure vs PreHashed) we do not have any other ambiguity.

Do we want to allow only one signing algorithm for each public key type?

Also, what is the reason to avoid in-band communication of the signing algorithm? Please see also the answer in #74 (comment) for some additional comments that were made some time ago regarding this.

@kommendorkapten
Member

Actually, I think SignatureAlgorithm as suggested in the TODO is wrong.

Yes, you are right. The comment should mention using the public key algorithm; the signature algorithm is, as you mention, the issuer's signature algorithm.

At the moment though, apart from RSA (PKCS1V15 vs PSS) and ED25519 (Pure vs PreHashed) we do not have any other ambiguity.

That sounds right.

Also, what is the reason to avoid the in-bound communication of the signing algorithm?

In-band communication can be tricked into using the wrong algorithm with the correct key (algorithm confusion attacks), and I think it makes the overall protocol more complex.
The verifier has access to the public keys, and so ought to know what algorithm to use for each key. IMHO this should also include any padding or pre-hash/pure knowledge. As mentioned in the other thread, this is part of the policy.

For key confusion attacks, the typical example that comes to mind for me is RSA vs HMAC in JWTs, where there are known libraries that happily used to take the RSA key as the HMAC secret. So not letting the attacker pick the algorithm simply removes any possibility of such attacks.

@ret2libc
Contributor Author

Actually, I think SignatureAlgorithm as suggested in the TODO is wrong.

Yes, you are right. The comment should mention using the public key algorithm; the signature algorithm is, as you mention, the issuer's signature algorithm.

That means that, in the case of verification through a certificate, the SignerVerifier should be created from the leaf certificate's public key and not through the TrustedMaterial. Or can we still create it from the trusted material by manually computing pk.Hint for the leaf certificate's public key and passing it to tm.PublicKeyVerifier?

Maybe we could first check whether tm.PublicKeyVerifier(hint) (where hint is either pk.Hint() or the hint for leafCert.PublicKey) is available and, if so, load it from there; otherwise rely on the default verifier for the given key type. This way a user of sigstore-go can set up the TrustedMaterial to give back the key or, if not configured, just rely on the default verifiers. WDYT?

cc @haydentherapper

@kommendorkapten
Member

That means that in the case of verification through a certificate, the SignerVerifier should be created from the leaf certificate publicKey and not through the TrustedMaterial.

I'm not really following here. The leaf certificate is present in the bundle, which is then represented as a SignedEntity, so using the VerificationContent().Certificate() would work I think?

If the bundle is signed with a certificate there would be no public key hint available; that is used only when "raw" keys are used, as there is no other trusted metadata available for a key (compared to a certificate, where a lot of data exists and is signed by the issuer).

@ret2libc
Contributor Author

If the bundle is signed with a certificate there would be no public key hint available; that is used only when "raw" keys are used, as there is no other trusted metadata available for a key (compared to a certificate, where a lot of data exists and is signed by the issuer).

My bad on this.

I'm not really following here. The leaf certificate is present in the bundle, which is then represented as a SignedEntity, so using the VerificationContent().Certificate() would work I think?

The problem is that the certificate is only useful as a way to get the public key. It does not carry information about which options/hash function were used to sign the artifact. Even assuming there is only one valid verifier per public key type, not having any other info about the signing algorithm that was used is problematic, because we then break compatibility with previously signed artifacts.

For example, if you have an ecdsa-p384 public key, current cosign gives you a signature with ecdsa-p384 + sha256. However, if you then try to validate it with this new logic (which determines the verifier from the public key), you will get an ecdsa-p384+sha384 verifier, which is not going to validate the old signature. Now, this might be what we want... However, if we have a way to keep track of which algorithm was used to sign a given artifact, then we can more easily verify signatures. A lack of signature information means the artifact was signed with an older version of cosign, which has specific defaults (e.g. sha256 for everything).

Policies can be used to allow this compatibility for some time, while rejecting older signatures later on. I think that without a way to specify the signing algorithm used, we don't have enough information to make the right choice. Unfortunately the certificate is not really useful here, unless we store some extra information in it through some x509 extension.

I might be missing some simpler solutions though.

@ret2libc
Contributor Author

I'm experimenting with allowing a list of valid signing algorithms to be passed from a Policy. So each key type can be validated with multiple signing algorithms, if necessary. The correct verifier can be created from SignedEntityVerifier.Verify and passed down to the VerifySignature* functions.

Essentially, the client can either specify the set of allowed signing algorithms for each key type or it can rely on compatibility-aware defaults:

defaultCompatSigningAlgorithms := map[signature.PublicKeyId][]v1.PublicKeyDetails{
	signature.RSAPublicKeyId(2048):              {v1.PublicKeyDetails_PKIX_RSA_PKCS1V15_2048_SHA256},
	signature.RSAPublicKeyId(3072):              {v1.PublicKeyDetails_PKIX_RSA_PKCS1V15_3072_SHA256},
	signature.RSAPublicKeyId(4096):              {v1.PublicKeyDetails_PKIX_RSA_PKCS1V15_4096_SHA256},
	signature.ECDSAPublicKeyId(elliptic.P256()): {v1.PublicKeyDetails_PKIX_ECDSA_P256_SHA_256},
	signature.ECDSAPublicKeyId(elliptic.P384()): {v1.PublicKeyDetails_PKIX_ECDSA_P384_SHA_384, v1.PublicKeyDetails_PKIX_ECDSA_P384_SHA_256},
	signature.ECDSAPublicKeyId(elliptic.P521()): {v1.PublicKeyDetails_PKIX_ECDSA_P521_SHA_512, v1.PublicKeyDetails_PKIX_ECDSA_P521_SHA_256},
	signature.ED25519PublicKeyId():              {v1.PublicKeyDetails_PKIX_ED25519_PH, v1.PublicKeyDetails_PKIX_ED25519},
}

Then, based on the key type, we can create a verifier that tries the proper verifiers in order. For example, if the certificate public key is ECDSA-P384, we first try to validate with PKIX_ECDSA_P384_SHA_384 and, if that fails, fall back to PKIX_ECDSA_P384_SHA_256 for compatibility with older signatures. At some point we could probably remove this compatibility verification and rely on a single algorithm based on the public key.

Opinions?

@kommendorkapten
Member

The problem is that the certificate is only useful as a way to get the public key...

Ah, yes. Thanks, I get what you mean now 👍

This is not a trivial problem, and as you said, we have a system out there that's not very well specified in how it should work, which makes it harder to tighten up with more controls. Your example mentions P-384/SHA-256, which is not common but can be real with today's implementation. RSA PKCS#1 v1.5 vs PSS is another example you called out that will cause headaches.

My following comment is probably a bit more high-level than this PR, sorry for that, but I hope it's still valid 😄
(It's probably more relevant for sigstore/sig-clients#16.)

When thinking about cryptographic agility I tend to think of two different methods:

  1. The protocol is versioned, and each version has a small set of algorithms, which are uniquely identified by the public key. For RSA there is an OID for PSS; I'm not entirely sure if there is one for PKCS#1 v1.5, maybe OID 1.2.840.113549.1.15? The versioning here would be the Sigstore bundle version. My thinking is that we can actually take a stance here on PSS vs PKCS#1 v1.5 for the public good instance.
  2. Let the signer and verifier agree on key types and algorithms out of band (OOB).

I personally prefer option 1, as it enhances compatibility between clients. This can be implemented as a default policy (per bundle version), and you can configure which bundle versions you support. Private instances can go with option 2 if they want to. The real verification interface does not need to care about which option is chosen, as long as there are enough helper functions to make the implementation trivial for option 1 (assuming this would be the defined way forward).

I'm experimenting with allowing passing a list of valid signing algorithms from a Policy.

This looks very interesting, and aligns with what I just wrote above, I think. I will need to read through what you wrote in detail (your comment was posted while I was typing this message).

@haydentherapper
Contributor

A lot of good discussion! @kommendorkapten I agree with you on having a versioned set of algorithms we support, which is effectively what we already have with the protobuf-specs. For RSA and ECDSA, I think we should stay as close as possible to the current set without supporting additional combinations of signature alg and digest - add new algorithms (e.g. ml-dsa), not new algorithm variants.

I would ideally like to support RSA-PSS and PKCS1v1.5, but due to limitations in how Go (or more accurately sigstore/sigstore) initializes verifiers, we've always had to pick one or the other. I don't love making this decision to drop support for one or the other only because of library limitations, but if it would lead to a more well-defined set of supported algorithms across all clients, I'd be happy to drop one.

I think we might have answered the need for fallback in a separate discussion on Fulcio.

@kommendorkapten
Member

Slightly off topic and a hot take, but I would love to see a new release of protobuf-specs where RSA is deprecated (and of course no mixing of sha256 and p384, etc.), basically simplifying these a lot.

@haydentherapper
Contributor

I'd prefer we still support RSA as we have seen some ecosystems (NuGet) explicitly request support for RSA. Could we instead drop either PKCS#1v1.5 or PSS rather than try to support both? I've seen PSS preferred due to non-determinism but PSS is also far less prevalent throughout ecosystems.

@kommendorkapten
Member

Do you remember the discussion with NuGet? There was one in Slack where I think they mentioned an issue with a CA chain where the root used ECDSA and the end-entity RSA. If that's the case, using the PGI would still be hard.

The second question is whether you know if NuGet and the other ecosystems are all on PKCS#1v1.5 (which sadly still appears to be the most common padding scheme), or if there is a mix? Or all using PSS?

What I think is important is whether those ecosystems are looking at the PGI or at private instances.
