Upgrade tests are broken due to state store buckets #17923

@rifelpet

Description

/kind bug

We removed the static state store buckets in kubernetes/test-infra#36266, intending to rely on kubetest2-kops' dynamic creation of buckets per job.

Unfortunately this broke the upgrade jobs:

testgrid.k8s.io/kops-upgrades#Summary

testgrid.k8s.io/kops-upgrades#kops-aws-upgrade-k133-ko133-to-k133-ko133

Error: error reading cluster configuration "e2e-c290ad7a05-61fef.tests-kops-aws.k8s.io": error reading s3://k8s-infra-kops-state-5b3f-20260122004748/e2e-c290ad7a05-61fef.tests-kops-aws.k8s.io/config: Could not retrieve location for AWS bucket k8s-infra-kops-state-5b3f-20260122004748

The nature of the upgrade tests requires the script to call kubetest2-kops multiple times: once for initial cluster creation, again after the upgrade to run the normal k/k e2e tests, and again to tear down the cluster in a trap shell function. When configured to create buckets, kubetest2-kops uses non-deterministic names:

// Each invocation derives a fresh bucket name from the current timestamp,
// producing names like k8s-infra-kops-state-5b3f-20260122004748 in the error above.
timestamp := time.Now().Format("20060102150405")
bucket := fmt.Sprintf("k8s-infra-kops-%s-%s-%s", bucketType, identifier, timestamp)

This means we can't rely on separate kubetest2-kops invocations reusing the same bucket, nor do we have a way to capture the first kubetest2-kops invocation's bucket name and pass it to the subsequent invocations.
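For illustration, the scenario flow has roughly the following shape. This is a simplified sketch rather than the actual script, with deployer flags, version markers, and tester arguments elided; the point is that each of the three kubetest2-kops calls, when asked to create a state store bucket, computes its own timestamp-based name.

#!/usr/bin/env bash
# Rough sketch of the current upgrade scenario shape (not the real script).

# Teardown happens in a trap, i.e. a third kubetest2-kops invocation.
cleanup() {
  kubetest2 kops --down
}
trap cleanup EXIT

# First invocation: create the cluster at the old kops/kubernetes versions.
kubetest2 kops --up

# ... perform the kops/kubernetes upgrade here ...

# Second invocation: run the normal k/k e2e tests against the upgraded cluster.
kubetest2 kops --test=ginkgo   # or whichever tester the job uses

# Each invocation that is configured to create a state store bucket derives a
# new timestamp-based name, so the three calls never agree on one bucket.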

I think we could simplify the upgrade scenario script down to a single kubetest2-kops invocation with --test=exec, passing it another shell script that performs the upgrades. The inner script may need to call another kubetest2-kops invocation to run the k/k e2e tests, but I think that should work.
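A hedged sketch of that shape; the inner script name, its contents, and the elided flags are all illustrative rather than an actual implementation:

#!/usr/bin/env bash
# scenarios/upgrade-inner.sh (illustrative name): the script handed to a single
# outer invocation, e.g.
#   kubetest2 kops --up --down --test=exec -- ./scenarios/upgrade-inner.sh
# so one kubetest2-kops run owns cluster creation, teardown, and therefore one
# state store bucket for the whole job.
set -euo pipefail

# Upgrade the cluster that the outer invocation brought up, assuming the usual
# KOPS_STATE_STORE / cluster name environment is available to the exec tester.
kops upgrade cluster --yes
kops update cluster --yes
kops rolling-update cluster --yes

# Run the normal k/k e2e tests, possibly via another kubetest2-kops invocation
# that only runs a tester (no --up/--down), so it should not need to create or
# locate a new state store bucket.
kubetest2 kops --test=ginkgo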
