
Add a "skipVerify" attribute to version #12

Open
wants to merge 1 commit into main
Conversation

@ANeumann82 ANeumann82 commented Aug 7, 2020

This will allow us to add old versions to the index that would otherwise fail the kudo package verify.
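For illustration only, a minimal sketch of how such a flag might sit on a version entry in the index definition - everything except the skipVerify field itself is an assumption about kitt's types, not the actual code:

```go
package index

// Version is a stand-in for kitt's index entry type; the surrounding fields
// (Name, OperatorVersion, AppVersion, URL) are assumptions for illustration.
type Version struct {
	Name            string `yaml:"name"`
	OperatorVersion string `yaml:"operatorVersion"`
	AppVersion      string `yaml:"appVersion,omitempty"`
	URL             string `yaml:"url"`

	// SkipVerify marks a legacy package that should still be added to the
	// index even though it no longer passes `kubectl kudo package verify`
	// with the current KUDO version.
	SkipVerify bool `yaml:"skipVerify,omitempty"`
}
```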

We should discuss this though - maybe there's a better option?

Related to #11

@ANeumann82 ANeumann82 requested a review from nfnt as a code owner August 7, 2020 09:16
…ld fail the kudo package verify can be added to the index

Signed-off-by: Andreas Neumann <[email protected]>
nfnt commented Aug 17, 2020

The verification should be mandatory; packages in operators are actually required to pass kubectl kudo package verify (https://github.com/kudobuilder/operators/blob/master/CONTRIBUTING.md#requirements-for-new-operators), but that was never enforced.

ANeumann82 commented Aug 17, 2020

> The verification should be mandatory; packages in operators are actually required to pass kubectl kudo package verify (https://github.com/kudobuilder/operators/blob/master/CONTRIBUTING.md#requirements-for-new-operators), but that was never enforced.

Well, yes - I agree that new operators should pass the package verify with the then-current KUDO version.

The problem I see is this:

Operator V1 release => uses KUDO 0.10, passed validation with KUDO 0.10
Operator V2 release => uses KUDO 0.13, passed validation with KUDO 0.13
Operator V3 release => uses KUDO 0.16, passed validation with KUDO 0.16

Now the repository needs to be rebuilt, for whatever reason.
KITT uses the latest KUDO version for package verify, but checks were added between KUDO 0.13 and KUDO 0.16 which were only addressed in Operator V3.

Operator V1 => fails package verify with KUDO 0.16 but passes with KUDO 0.10

The repository cannot be rebuilt.

So, we could either:

  1. Use a specific KUDO version to run package verify
  2. Skip the verify step for rebuilding packages
  3. Skip old operator versions when rebuilding the repository

sooo:

  1. would be the optimal solution, but quite a lot of work
  2. would work, I guess - see the sketch after this list
  3. not really an option because we should be able to rebuild the full index at any time
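To make option 2 concrete, here is a rough sketch of how the flag could gate the verify step when rebuilding the index. It reuses the Version sketch from above; loadPackage and verifyPackage are hypothetical stand-ins for the real kitt/KUDO calls:

```go
package index

import "fmt"

// Package is a placeholder for a resolved operator package.
type Package struct{}

// loadPackage and verifyPackage are hypothetical; the real implementations
// would fetch/extract the package and run the `package verify` checks.
func loadPackage(url string) (*Package, error) { return &Package{}, nil }
func verifyPackage(p *Package) error           { return nil }

// addVersion appends an entry to the index, verifying it first unless the
// entry opts out via skipVerify.
func addVersion(index []Version, entry Version) ([]Version, error) {
	pkg, err := loadPackage(entry.URL)
	if err != nil {
		return index, err
	}
	// Old operator versions that passed verification with the KUDO release
	// they originally targeted can set skipVerify: true and still be re-indexed.
	if !entry.SkipVerify {
		if err := verifyPackage(pkg); err != nil {
			return index, fmt.Errorf("package %s failed verification: %w", entry.Name, err)
		}
	}
	return append(index, entry), nil
}
```

Nothing would change for new operators; only entries that explicitly opt out skip the check.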

This issue came up because of kudobuilder/operators-index#11 and kudobuilder/operators-index#10

But from what I can see, we only run the package verify in the Continuous Integration step, not in the Continuous Deployment? So in these cases we could ignore the failing tests and merge them, and the index would be rebuilt correctly?

nfnt commented Aug 17, 2020

Oh, goooood point!
Yeah, that is a problem kitt should solve if it is to be used to support repositories that provide operator packages for multiple versions of KUDO, considering that KUDO introduces incompatible changes to either its validation or its packaging. While KUDO itself would have a backwards-compatibility policy (e.g. backwards compatibility with n-2 versions), ideally kitt should allow managing repositories with packages that might only run on outdated versions of KUDO.
Hmm, I feel that a skipVerify attribute isn't going to cut it here. Using the kudoVersion field in the operator.yaml to determine which checks to run seems like a better approach. Much more complicated too :)
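A hedged sketch of what the kudoVersion-based approach could look like - assuming operator.yaml's kudoVersion parses as a plain semver version and that each check records the KUDO release that introduced it; the verifier structure and the Masterminds/semver usage are illustrative, not kitt's actual code:

```go
package index

import "github.com/Masterminds/semver/v3"

// verifier pairs a check with the KUDO release that introduced it.
type verifier struct {
	name  string
	since *semver.Version
	run   func(pkg *Package) error
}

// verifyForKudoVersion runs only the checks that already existed for the
// KUDO version declared in the operator's operator.yaml.
func verifyForKudoVersion(pkg *Package, kudoVersion string, verifiers []verifier) error {
	declared, err := semver.NewVersion(kudoVersion)
	if err != nil {
		return err
	}
	for _, v := range verifiers {
		// Skip checks that were introduced after the KUDO version this
		// operator was built and validated against.
		if v.since.GreaterThan(declared) {
			continue
		}
		if err := v.run(pkg); err != nil {
			return err
		}
	}
	return nil
}
```

This is where the "much more complicated" part comes in: someone would have to maintain the mapping from checks to the KUDO versions that introduced them.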

nfnt commented Oct 14, 2020

Let's close this. With kudobuilder/operators-index#12 it is possible to distinguish between validating packages and deciding whether they can be added to a repository. This allows us to override the validation step for old versions. Bundling multiple or even all versions of KUDO to be able to verify against them would cause more trouble; let's use this simple workaround.
