Add workflow to check for broken URLs #293

Merged: 2 commits, Apr 30, 2025
Makefile: 8 changes (7 additions, 1 deletion)
@@ -74,7 +74,13 @@ watch: environment ## Watch for changes, rebuild, and preview with hot reload.
"nodemon --watch content --watch docs --ext adoc,yml --exec 'make dev'" \
"make preview"

+ ##@ Verification
+
+ .PHONY: verify-broken-links
+ verify-broken-links: ## Verify broken GitHub links in .adoc files.
+     go run tools/verifybrokenlinks/main.go -docs-dir=docs -max-parallel=10
+
##@ CI

.PHONY: ci
- ci: environment gh-pages dev ## Run the build for continuous integration.
+ ci: environment gh-pages dev verify-broken-links ## Run the build and verification for continuous integration.
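
The checker itself, tools/verifybrokenlinks/main.go, is not shown in this diff. For orientation, here is a minimal sketch of how such a tool might work, assuming only what the Makefile invocation above reveals (the -docs-dir and -max-parallel flags); the link regex, the 404-only failure rule, and all names in the body are illustrative guesses, not the merged implementation:

[source,go]
----
// tools/verifybrokenlinks/main.go -- hypothetical sketch, not the code merged here.
package main

import (
	"flag"
	"fmt"
	"net/http"
	"os"
	"path/filepath"
	"regexp"
	"strings"
	"sync"
)

// githubLink matches GitHub URLs up to the AsciiDoc link-text bracket,
// e.g. the URL part of https://github.com/org/repo[link text].
var githubLink = regexp.MustCompile(`https://github\.com/[^\s\[\]]+`)

func main() {
	docsDir := flag.String("docs-dir", "docs", "directory to scan for .adoc files")
	maxParallel := flag.Int("max-parallel", 10, "maximum concurrent HTTP checks")
	flag.Parse()

	// Gather the set of unique GitHub URLs referenced by the docs.
	urls := map[string]struct{}{}
	err := filepath.Walk(*docsDir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() || !strings.HasSuffix(path, ".adoc") {
			return err
		}
		data, readErr := os.ReadFile(path)
		if readErr != nil {
			return readErr
		}
		for _, u := range githubLink.FindAllString(string(data), -1) {
			urls[u] = struct{}{}
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Check every URL, capping concurrency with a semaphore channel.
	sem := make(chan struct{}, *maxParallel)
	var wg sync.WaitGroup
	var mu sync.Mutex
	broken := 0
	for u := range urls {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			sem <- struct{}{}
			defer func() { <-sem }()
			resp, err := http.Get(u)
			if resp != nil {
				resp.Body.Close()
			}
			if err != nil || resp.StatusCode == http.StatusNotFound {
				mu.Lock()
				broken++
				mu.Unlock()
				fmt.Fprintf(os.Stderr, "broken link: %s\n", u)
			}
		}(u)
	}
	wg.Wait()
	if broken > 0 {
		os.Exit(1) // non-zero exit lets the ci target fail the build
	}
}
----

The buffered channel bounds in-flight requests at -max-parallel, and the non-zero exit status is what allows the new verify-broken-links prerequisite to fail the amended ci target.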
docs/next/modules/en/pages/security/slsa.adoc: 2 changes (1 addition, 1 deletion)
@@ -46,7 +46,7 @@ https://slsa.dev/spec/v1.0/about[SLSA] is a set of incrementally adoptable guide
* The release process and the provenance generation are run in isolation on an ephemeral environment provided by GitHub-hosted runners.
* The provenance of the {product_name} container images can be verified using the official https://github.com/slsa-framework/slsa-verifier[SLSA verifier tool].
* The provenance generation workflows run on ephemeral and isolated virtual machines, which are fully managed by GitHub.
- * The provenance signing secrets are ephemeral and are generated through Sigstore's https://github.com/sigstore/cosign/blob/main/KEYLESS.md[keyless] signing procedure.
+ * The provenance signing secrets are ephemeral and are generated through Sigstore's https://docs.sigstore.dev/cosign/signing/overview/[keyless] signing procedure.
* The https://github.com/slsa-framework/slsa-github-generator[SLSA GitHub Generator] runs on separate virtual machines than the build and release process, so that the {product_name} build scripts don't have access to the signing secrets.

== Isolation
docs/next/modules/en/pages/user/clusters.adoc: 4 changes (2 additions, 2 deletions)
@@ -204,7 +204,7 @@ AWS EC2 RKE2::
+
Before creating an AWS+RKE2 workload cluster, it is required to build an AMI for the RKE2 version that is going to be installed on the cluster. You can follow the steps in the https://github.com/rancher/cluster-api-provider-rke2/tree/main/image-builder#aws[RKE2 image-builder README] to build the AMI.
+
- We recommend you refer to the CAPRKE2 repository where you can find a https://github.com/rancher/cluster-api-provider-rke2/tree/main/samples/aws[samples folder] with different CAPA+CAPRKE2 cluster configurations that can be used to provision downstream clusters. The https://github.com/rancher/cluster-api-provider-rke2/tree/main/samples/aws/internal[internal folder] contains cluster templates to deploy an RKE2 cluster on AWS using the internal cloud provider, and the https://github.com/rancher/cluster-api-provider-rke2/tree/main/samples/aws/external[external folder] contains the cluster templates to deploy a cluster with the external cloud provider.
+ We recommend you refer to the CAPRKE2 repository where you can find a https://github.com/rancher/cluster-api-provider-rke2/tree/main/examples/templates/aws[samples folder] with different CAPA+CAPRKE2 cluster configurations that can be used to provision downstream clusters.
+
We will use the `internal` one for this guide, however the same steps apply for `external`.
+
@@ -287,7 +287,7 @@ Docker Kubeadm::

vSphere RKE2::
+
- Before creating a vSphere+RKE2 workload cluster, it is required to have a VM template with the necessary RKE2 binaries and dependencies. The template should already include RKE2 binaries if operating in an air-gapped environment, following the https://docs.rke2.io/install/airgap#tarball-method[tarball method]. You can find additional configuration details in the https://github.com/rancher/cluster-api-provider-rke2/tree/main/samples/vmware[CAPRKE2 repository].
+ Before creating a vSphere+RKE2 workload cluster, it is required to have a VM template with the necessary RKE2 binaries and dependencies. The template should already include RKE2 binaries if operating in an air-gapped environment, following the https://docs.rke2.io/install/airgap#tarball-method[tarball method]. You can find additional configuration details in the https://github.com/rancher/cluster-api-provider-rke2/tree/main/examples/templates/vmware[CAPRKE2 repository].
+
To generate the YAML for the cluster, do the following:
+
@@ -43,7 +43,7 @@ AWS RKE2::
+
Before creating an AWS+RKE2 workload cluster, it is required to build an AMI for the RKE2 version that is going to be installed on the cluster. You can follow the steps in the https://github.com/rancher/cluster-api-provider-rke2/tree/main/image-builder#aws[RKE2 image-builder README] to build the AMI.
+
- We recommend you refer to the CAPRKE2 repository where you can find a https://github.com/rancher/cluster-api-provider-rke2/tree/main/samples/aws[samples folder] with different CAPA+CAPRKE2 cluster configurations that can be used to provision downstream clusters. The https://github.com/rancher/cluster-api-provider-rke2/tree/main/samples/aws/internal[internal folder] contains cluster templates to deploy an RKE2 cluster on AWS using the internal cloud provider, and the https://github.com/rancher/cluster-api-provider-rke2/tree/main/samples/aws/external[external folder] contains the cluster templates to deploy a cluster with the external cloud provider.
+ We recommend you refer to the CAPRKE2 repository where you can find a https://github.com/rancher/cluster-api-provider-rke2/tree/main/examples/templates/aws[samples folder] with different CAPA+CAPRKE2 cluster configurations that can be used to provision downstream clusters.
+
We will use the `internal` one for this guide, however the same steps apply for `external`.
+
@@ -63,7 +63,7 @@ export AWS_REGION="aws-region"
export AWS_AMI_ID="ami-id"

clusterctl generate cluster cluster1 \
- --from https://github.com/rancher/cluster-api-provider-rke2/blob/main/samples/aws/internal/cluster-template.yaml \
+ --from https://github.com/rancher/cluster-api-provider-rke2/blob/release-0.5/samples/aws/internal/cluster-template.yaml \
> cluster1.yaml
----
+
@@ -23,7 +23,7 @@ rancherTurtles:
rancher-webhook: # an existing rancher installation keeps rancher webhooks after disabling embedded-capi
cleanup: true # indicates that the remaining rancher webhooks be removed (default: true)
kubectlImage: registry.k8s.io/kubernetes/kubectl:v1.28.0 # indicates the image to use for pre-install cleanup (default: Kubernetes container image registry)
- rancher-kubeconfigs: # with capi 1.5.0 and greater, secrets for kubeconfigs must contain a specific label. See https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/book/src/developer/providers/migrations/v1.4-to-v1.5.md#other
+ rancher-kubeconfigs: # with capi 1.5.0 and greater, secrets for kubeconfigs must contain a specific label. See https://github.com/kubernetes-sigs/cluster-api/blob/release-1.5/docs/book/src/developer/providers/migrations/v1.4-to-v1.5.md#other
label: true # indicates that the label will be added (default: true)
managementv3-cluster: # rancher will use `clusters.management.cattle.io` to represent an imported capi cluster
enabled: false # if false, indicates that `clusters.provisioning.cattle.io` resources will be used (default: false)
docs/v0.10/modules/en/pages/security/slsa.adoc: 2 changes (1 addition, 1 deletion)
@@ -46,7 +46,7 @@ https://slsa.dev/spec/v1.0/about[SLSA] is a set of incrementally adoptable guide
* The release process and the provenance generation are run in isolation on an ephemeral environment provided by GitHub-hosted runners.
* The provenance of the {product_name} container images can be verified using the official https://github.com/slsa-framework/slsa-verifier[SLSA verifier tool].
* The provenance generation workflows run on ephemeral and isolated virtual machines, which are fully managed by GitHub.
- * The provenance signing secrets are ephemeral and are generated through Sigstore's https://github.com/sigstore/cosign/blob/main/KEYLESS.md[keyless] signing procedure.
+ * The provenance signing secrets are ephemeral and are generated through Sigstore's https://docs.sigstore.dev/cosign/signing/overview/[keyless] signing procedure.
* The https://github.com/slsa-framework/slsa-github-generator[SLSA GitHub Generator] runs on separate virtual machines than the build and release process, so that the {product_name} build scripts don't have access to the signing secrets.

== Isolation
@@ -43,7 +43,7 @@ AWS RKE2::
+
Before creating an AWS+RKE2 workload cluster, it is required to build an AMI for the RKE2 version that is going to be installed on the cluster. You can follow the steps in the https://github.com/rancher/cluster-api-provider-rke2/tree/main/image-builder#aws[RKE2 image-builder README] to build the AMI.
+
- We recommend you refer to the CAPRKE2 repository where you can find a https://github.com/rancher/cluster-api-provider-rke2/tree/main/samples/aws[samples folder] with different CAPA+CAPRKE2 cluster configurations that can be used to provision downstream clusters. The https://github.com/rancher/cluster-api-provider-rke2/tree/main/samples/aws/internal[internal folder] contains cluster templates to deploy an RKE2 cluster on AWS using the internal cloud provider, and the https://github.com/rancher/cluster-api-provider-rke2/tree/main/samples/aws/external[external folder] contains the cluster templates to deploy a cluster with the external cloud provider.
+ We recommend you refer to the CAPRKE2 repository where you can find a https://github.com/rancher/cluster-api-provider-rke2/tree/main/examples/templates/aws[samples folder] with different CAPA+CAPRKE2 cluster configurations that can be used to provision downstream clusters.
+
We will use the `internal` one for this guide, however the same steps apply for `external`.
+
@@ -61,7 +61,7 @@ export AWS_SSH_KEY_NAME="aws-ssh-key"
export AWS_REGION="aws-region"
export AWS_AMI_ID="ami-id"

clusterctl generate cluster cluster1 \
- --from https://github.com/rancher/cluster-api-provider-rke2/blob/main/samples/aws/internal/cluster-template.yaml \
+ --from https://github.com/rancher/cluster-api-provider-rke2/blob/release-0.5/samples/aws/internal/cluster-template.yaml \
> cluster1.yaml
----
+
@@ -23,7 +23,7 @@ rancherTurtles:
rancher-webhook: # an existing rancher installation keeps rancher webhooks after disabling embedded-capi
cleanup: true # indicates that the remaining rancher webhooks be removed (default: true)
kubectlImage: registry.k8s.io/kubernetes/kubectl:v1.30.0 # indicates the image to use for pre-install cleanup (default: Kubernetes container image registry)
- rancher-kubeconfigs: # with capi 1.5.0 and greater, secrets for kubeconfigs must contain a specific label. See https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/book/src/developer/providers/migrations/v1.4-to-v1.5.md#other
+ rancher-kubeconfigs: # with capi 1.5.0 and greater, secrets for kubeconfigs must contain a specific label. See https://github.com/kubernetes-sigs/cluster-api/blob/release-1.5/docs/book/src/developer/providers/migrations/v1.4-to-v1.5.md#other
label: true # indicates that the label will be added (default: true)
managementv3-cluster: # rancher will use `clusters.management.cattle.io` to represent an imported capi cluster
enabled: false # if false, indicates that `clusters.provisioning.cattle.io` resources will be used (default: false)
docs/v0.11/modules/en/pages/security/slsa.adoc: 2 changes (1 addition, 1 deletion)
@@ -46,7 +46,7 @@ https://slsa.dev/spec/v1.0/about[SLSA] is a set of incrementally adoptable guide
* The release process and the provenance generation are run in isolation on an ephemeral environment provided by GitHub-hosted runners.
* The provenance of the {product_name} container images can be verified using the official https://github.com/slsa-framework/slsa-verifier[SLSA verifier tool].
* The provenance generation workflows run on ephemeral and isolated virtual machines, which are fully managed by GitHub.
- * The provenance signing secrets are ephemeral and are generated through Sigstore's https://github.com/sigstore/cosign/blob/main/KEYLESS.md[keyless] signing procedure.
+ * The provenance signing secrets are ephemeral and are generated through Sigstore's https://docs.sigstore.dev/cosign/signing/overview/[keyless] signing procedure.
* The https://github.com/slsa-framework/slsa-github-generator[SLSA GitHub Generator] runs on separate virtual machines than the build and release process, so that the {product_name} build scripts don't have access to the signing secrets.

== Isolation
@@ -43,7 +43,7 @@ AWS RKE2::
+
Before creating an AWS+RKE2 workload cluster, it is required to build an AMI for the RKE2 version that is going to be installed on the cluster. You can follow the steps in the https://github.com/rancher/cluster-api-provider-rke2/tree/main/image-builder#aws[RKE2 image-builder README] to build the AMI.
+
- We recommend you refer to the CAPRKE2 repository where you can find a https://github.com/rancher/cluster-api-provider-rke2/tree/main/samples/aws[samples folder] with different CAPA+CAPRKE2 cluster configurations that can be used to provision downstream clusters. The https://github.com/rancher/cluster-api-provider-rke2/tree/main/samples/aws/internal[internal folder] contains cluster templates to deploy an RKE2 cluster on AWS using the internal cloud provider, and the https://github.com/rancher/cluster-api-provider-rke2/tree/main/samples/aws/external[external folder] contains the cluster templates to deploy a cluster with the external cloud provider.
+ We recommend you refer to the CAPRKE2 repository where you can find a https://github.com/rancher/cluster-api-provider-rke2/tree/main/examples/templates/aws[samples folder] with different CAPA+CAPRKE2 cluster configurations that can be used to provision downstream clusters.
+
We will use the `internal` one for this guide, however the same steps apply for `external`.
+
@@ -61,7 +61,7 @@ export AWS_SSH_KEY_NAME="aws-ssh-key"
export AWS_REGION="aws-region"
export AWS_AMI_ID="ami-id"

clusterctl generate cluster cluster1 \
- --from https://github.com/rancher/cluster-api-provider-rke2/blob/main/samples/aws/internal/cluster-template.yaml \
+ --from https://github.com/rancher/cluster-api-provider-rke2/blob/release-0.5/samples/aws/internal/cluster-template.yaml \
> cluster1.yaml
----
+
@@ -23,7 +23,7 @@ rancherTurtles:
rancher-webhook: # an existing rancher installation keeps rancher webhooks after disabling embedded-capi
cleanup: true # indicates that the remaining rancher webhooks be removed (default: true)
kubectlImage: registry.k8s.io/kubernetes/kubectl:v1.30.0 # indicates the image to use for pre-install cleanup (default: Kubernetes container image registry)
- rancher-kubeconfigs: # with capi 1.5.0 and greater, secrets for kubeconfigs must contain a specific label. See https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/book/src/developer/providers/migrations/v1.4-to-v1.5.md#other
+ rancher-kubeconfigs: # with capi 1.5.0 and greater, secrets for kubeconfigs must contain a specific label. See https://github.com/kubernetes-sigs/cluster-api/blob/release-1.5/docs/book/src/developer/providers/migrations/v1.4-to-v1.5.md#other
label: true # indicates that the label will be added (default: true)
managementv3-cluster: # rancher will use `clusters.management.cattle.io` to represent an imported capi cluster
enabled: false # if false, indicates that `clusters.provisioning.cattle.io` resources will be used (default: false)
docs/v0.12/modules/en/pages/security/slsa.adoc: 2 changes (1 addition, 1 deletion)
@@ -46,7 +46,7 @@ https://slsa.dev/spec/v1.0/about[SLSA] is a set of incrementally adoptable guide
* The release process and the provenance generation are run in isolation on an ephemeral environment provided by GitHub-hosted runners.
* The provenance of the {product_name} container images can be verified using the official https://github.com/slsa-framework/slsa-verifier[SLSA verifier tool].
* The provenance generation workflows run on ephemeral and isolated virtual machines, which are fully managed by GitHub.
- * The provenance signing secrets are ephemeral and are generated through Sigstore's https://github.com/sigstore/cosign/blob/main/KEYLESS.md[keyless] signing procedure.
+ * The provenance signing secrets are ephemeral and are generated through Sigstore's https://docs.sigstore.dev/cosign/signing/overview/[keyless] signing procedure.
* The https://github.com/slsa-framework/slsa-github-generator[SLSA GitHub Generator] runs on separate virtual machines than the build and release process, so that the {product_name} build scripts don't have access to the signing secrets.

== Isolation
@@ -43,7 +43,7 @@ AWS RKE2::
+
Before creating an AWS+RKE2 workload cluster, it is required to build an AMI for the RKE2 version that is going to be installed on the cluster. You can follow the steps in the https://github.com/rancher/cluster-api-provider-rke2/tree/main/image-builder#aws[RKE2 image-builder README] to build the AMI.
+
- We recommend you refer to the CAPRKE2 repository where you can find a https://github.com/rancher/cluster-api-provider-rke2/tree/main/samples/aws[samples folder] with different CAPA+CAPRKE2 cluster configurations that can be used to provision downstream clusters. The https://github.com/rancher/cluster-api-provider-rke2/tree/main/samples/aws/internal[internal folder] contains cluster templates to deploy an RKE2 cluster on AWS using the internal cloud provider, and the https://github.com/rancher/cluster-api-provider-rke2/tree/main/samples/aws/external[external folder] contains the cluster templates to deploy a cluster with the external cloud provider.
+ We recommend you refer to the CAPRKE2 repository where you can find a https://github.com/rancher/cluster-api-provider-rke2/tree/main/examples/templates/aws[samples folder] with different CAPA+CAPRKE2 cluster configurations that can be used to provision downstream clusters.
+
We will use the `internal` one for this guide, however the same steps apply for `external`.
+
@@ -23,7 +23,7 @@ rancherTurtles:
rancher-webhook: # an existing rancher installation keeps rancher webhooks after disabling embedded-capi
cleanup: true # indicates that the remaining rancher webhooks be removed (default: true)
kubectlImage: registry.k8s.io/kubernetes/kubectl:v1.30.0 # indicates the image to use for pre-install cleanup (default: Kubernetes container image registry)
- rancher-kubeconfigs: # with capi 1.5.0 and greater, secrets for kubeconfigs must contain a specific label. See https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/book/src/developer/providers/migrations/v1.4-to-v1.5.md#other
+ rancher-kubeconfigs: # with capi 1.5.0 and greater, secrets for kubeconfigs must contain a specific label. See https://github.com/kubernetes-sigs/cluster-api/blob/release-1.5/docs/book/src/developer/providers/migrations/v1.4-to-v1.5.md#other
label: true # indicates that the label will be added (default: true)
managementv3-cluster: # rancher will use `clusters.management.cattle.io` to represent an imported capi cluster
enabled: false # if false, indicates that `clusters.provisioning.cattle.io` resources will be used (default: false)
docs/v0.13/modules/en/pages/security/slsa.adoc: 2 changes (1 addition, 1 deletion)
@@ -46,7 +46,7 @@ https://slsa.dev/spec/v1.0/about[SLSA] is a set of incrementally adoptable guide
* The release process and the provenance generation are run in isolation on an ephemeral environment provided by GitHub-hosted runners.
* The provenance of the {product_name} container images can be verified using the official https://github.com/slsa-framework/slsa-verifier[SLSA verifier tool].
* The provenance generation workflows run on ephemeral and isolated virtual machines, which are fully managed by GitHub.
- * The provenance signing secrets are ephemeral and are generated through Sigstore's https://github.com/sigstore/cosign/blob/main/KEYLESS.md[keyless] signing procedure.
+ * The provenance signing secrets are ephemeral and are generated through Sigstore's https://docs.sigstore.dev/cosign/signing/overview/[keyless] signing procedure.
* The https://github.com/slsa-framework/slsa-github-generator[SLSA GitHub Generator] runs on separate virtual machines than the build and release process, so that the {product_name} build scripts don't have access to the signing secrets.

== Isolation