diff --git a/docs/2.0/docs/pipelines/guides/setup-delegated-repo.mdx b/docs/2.0/docs/accountfactory/guides/setup-delegated-repo.mdx
similarity index 100%
rename from docs/2.0/docs/pipelines/guides/setup-delegated-repo.mdx
rename to docs/2.0/docs/accountfactory/guides/setup-delegated-repo.mdx
diff --git a/docs/2.0/docs/accountfactory/installation/addingnewrepo.md b/docs/2.0/docs/accountfactory/installation/addingnewrepo.md
index bb248b70cf..7ea6610b80 100644
--- a/docs/2.0/docs/accountfactory/installation/addingnewrepo.md
+++ b/docs/2.0/docs/accountfactory/installation/addingnewrepo.md
@@ -48,4 +48,5 @@ Each of your repositories will contain a Bootstrap Pull Request. Follow the inst
:::info
The bootstrapping pull requests include pre-configured files, such as a `.mise.toml` file that specifies versions of OpenTofu and Terragrunt. Ensure you review and update these configurations to align with your organization's requirements.
+
:::
diff --git a/docs/2.0/docs/pipelines/guides/handling-broken-iac.md b/docs/2.0/docs/pipelines/guides/handling-broken-iac.md
index 53b426f5b0..506cd8fedf 100644
--- a/docs/2.0/docs/pipelines/guides/handling-broken-iac.md
+++ b/docs/2.0/docs/pipelines/guides/handling-broken-iac.md
@@ -1,6 +1,6 @@
# Handling Broken Infrastructure as Code
-When working with Infrastructure as Code (IaC) at scale, you may occasionally encounter broken or invalid configuration files that prevent Terragrunt from successfully running operations. These issues can block entire CI/CD pipeline, preventing even valid infrastructure changes from being deployed.
+When working with Infrastructure as Code (IaC) at scale, you may occasionally encounter broken or invalid configuration files that prevent Terragrunt from successfully running operations. These issues can block the entire CI/CD pipeline, preventing even valid infrastructure changes from being deployed.
This guide presents several strategies for handling broken IaC while keeping your pipelines operational.
@@ -16,13 +16,13 @@ Common causes of broken IaC include:
- Temporary or experimental code
- Resources or modules that are works in progress
-Depending on the type of run pipeline is executing, broken IaC can fail a pipeline and prevent other, legitimate changes from being deployed. Especially in circumstances where pipelines will trigger a `terragrunt run-all` it is important that all IaC is valid or properly excluded.
+Depending on the type of run the pipeline is executing, broken IaC can fail a pipeline and prevent other, legitimate changes from being deployed. Especially in circumstances where pipelines trigger a `terragrunt run --all`, it is important that all IaC is valid or properly excluded.
## Resolution Strategies
Here are several approaches to manage broken IaC, presented in order of preference:
-### 1. Fix the Invalid Code (Preferred Solution)
+### Fix the Invalid Code (Preferred Solution)
The ideal solution is to fix the underlying issues:
@@ -41,7 +41,7 @@ git push
Then create a merge/pull request to apply the fix to your main branch.
-### 2. Remove the Invalid IaC
+### Remove the Invalid IaC
If you can't fix the issue immediately but the infrastructure is no longer needed, you can remove the problematic code:
@@ -55,22 +55,22 @@ git commit -m "Remove deprecated infrastructure module"
git push
```
-### 3. Use a `.terragrunt-excludes` File
+### Use a `.terragrunt-excludes` File
If you wish to keep the broken code as is and simply have it ignored by pipelines and Terragrunt, you can use a `.terragrunt-excludes` file to skip problematic units:
-1. Create a `.terragrunt-excludes` file in the root of your repository:
+Create a `.terragrunt-excludes` file in the root of your repository:
-```
+```text
# .terragrunt-excludes
# One directory per line (no globs)
account/region/broken-module1
account/region/broken-module2
```
-2. Commit this file to your repository, and Terragrunt will automatically exclude these directories when using `run-all`. Note, if you make a change to the code in those units and pipelines triggers a `run` in that directory itself, then the exclude will not be applied.
+Commit this file to your repository, and Terragrunt will automatically exclude these directories when using `run --all`. Note that if you change the code in those units and Pipelines triggers a `run` in that directory itself, the exclusion will not be applied.
-### 4. Configure Exclusions with Pipelines Environment Variables
+### Configure Exclusions with Pipelines Environment Variables
If you don't wish to use `.terragrunt-excludes` in the root of the repository, you can create another file in a different location and set the `TG_QUEUE_EXCLUDES_FILE` environment variable to that path. You then use the Pipelines [`env` block](/2.0/reference/pipelines/configurations-as-code/api#env-block) in your `.gruntwork/pipelines.hcl` configuration to set environment variables that control Terragrunt's behavior:
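You can exercise the same mechanism locally from a shell before committing the configuration; the `.config/` location below is a hypothetical example:

```shell
# Hypothetical layout: keep the excludes file under .config/ instead of the repo root
mkdir -p .config
printf '%s\n' 'account/region/broken-module1' > .config/terragrunt-excludes

# Point Terragrunt at the custom location; Pipelines sets this via the `env` block
export TG_QUEUE_EXCLUDES_FILE=.config/terragrunt-excludes
echo "$TG_QUEUE_EXCLUDES_FILE"
```

With the variable exported, `terragrunt run --all` reads exclusions from the custom path instead of `.terragrunt-excludes` at the repository root.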
@@ -94,14 +94,14 @@ repository {
When excluding modules, be aware of dependencies:
1. If module B depends on module A, and module A is excluded, you may need to exclude module B as well.
-2. Use `terragrunt graph-dependencies` to visualize your dependency tree.
+2. Use `terragrunt dag graph` to visualize your dependency tree.
## Best Practices
1. **Document exclusions**: Add comments to your `.terragrunt-excludes` file explaining why each directory is excluded.
2. **Track in issue system**: Create tickets for excluded modules that need to be fixed, including any relevant dates/timelines for when they should be revisited.
3. **Regular cleanup**: Periodically review and update your excluded directories.
-4. **Validate locally**: Run `terragrunt hcl-validate` or `terragrunt validate` locally before committing changes.
+4. **Validate locally**: Run `terragrunt hcl validate` or `terragrunt validate` locally before committing changes.
## Troubleshooting
@@ -112,4 +112,4 @@ If you're still experiencing issues after excluding directories:
- Review pipeline logs to confirm exclusions are being applied
- Verify you don't have conflicting environment variable settings
-By implementing these strategies, you can keep your infrastructure pipelines running smoothly while addressing underlying issues in your codebase.
\ No newline at end of file
+By implementing these strategies, you can keep your infrastructure pipelines running smoothly while addressing underlying issues in your codebase.
diff --git a/docs/2.0/docs/pipelines/guides/managing-secrets.md b/docs/2.0/docs/pipelines/guides/managing-secrets.mdx
similarity index 58%
rename from docs/2.0/docs/pipelines/guides/managing-secrets.md
rename to docs/2.0/docs/pipelines/guides/managing-secrets.mdx
index f639f1fc8c..96e8b42bfd 100644
--- a/docs/2.0/docs/pipelines/guides/managing-secrets.md
+++ b/docs/2.0/docs/pipelines/guides/managing-secrets.mdx
@@ -20,12 +20,19 @@ To interact with the GitLab API, Pipelines requires a Machine User with a [Perso
-## Authenticating with AWS
+## Authenticating with Cloud Providers
-Pipelines requires authentication with AWS but avoids long-lived credentials by utilizing [OIDC](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services). OIDC establishes an authenticated relationship between a specific Git reference in a repository and a corresponding AWS role, enabling Pipelines to assume the role based on where the pipeline is executed.
+Pipelines requires authentication with your cloud provider but avoids long-lived credentials by utilizing OIDC (OpenID Connect). OIDC establishes an authenticated relationship between a specific Git reference in a repository and a corresponding cloud provider identity, enabling Pipelines to assume the identity based on where the pipeline is executed.
-The role assumption process operates as follows:
+
+
+
+{/* We use an h3 here instead of a markdown heading to avoid breaking the ToC */}
+<h3>Authenticating with AWS</h3>
+
+Pipelines uses [OIDC to authenticate with AWS](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services), allowing it to assume an AWS IAM role without long-lived credentials.
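On the AWS side, this trust is typically an IAM role whose trust policy accepts tokens from GitHub's OIDC provider; the account ID, role name, and repository in this sketch are hypothetical:

```hcl
# Hypothetical sketch: an IAM role assumable only from a specific repository via OIDC.
resource "aws_iam_role" "pipelines_plan" {
  name = "pipelines-plan"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRoleWithWebIdentity"
      Principal = { Federated = "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com" }
      Condition = {
        # Only tokens minted for this repository (any ref) may assume the role
        StringEquals = { "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com" }
        StringLike   = { "token.actions.githubusercontent.com:sub" = "repo:acme/infrastructure-live:*" }
      }
    }]
  })
}
```

Narrowing the `sub` condition (e.g., to a single branch) is what scopes read-only plan roles apart from write-capable apply roles.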
+The role assumption process operates as follows:
@@ -41,7 +48,7 @@ sequenceDiagram
AWS STS->>GitHub Actions: Temporary AWS Credentials
```
-For more details, see [GitHub's OIDC documentation](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services).
+For more details, see [GitHub's OIDC documentation for AWS](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services).
@@ -57,25 +64,77 @@ sequenceDiagram
AWS STS->>GitLab CI/CD: Temporary AWS Credentials
```
-For more details, see [GitLab's OIDC documentation](https://docs.gitlab.com/ee/ci/cloud_services/aws/).
+For more details, see [GitLab's OIDC documentation for AWS](https://docs.gitlab.com/ee/ci/cloud_services/aws/).
As a result, Pipelines avoids storing long-lived AWS credentials and instead relies on ephemeral credentials generated by AWS STS. These credentials grant least-privilege access to the resources needed for the specific operation being performed (e.g., read access during a pull/merge request open event or write access during a merge).
+
+
+
+{/* We use an h3 here instead of a markdown heading to avoid breaking the ToC */}
+<h3>Authenticating with Azure</h3>
+
+Pipelines uses [OIDC to authenticate with Azure](https://learn.microsoft.com/en-us/entra/architecture/auth-oidc), allowing it to obtain access tokens from Entra ID without long-lived credentials.
+
+The authentication process operates as follows:
+
+
+
+
+```mermaid
+sequenceDiagram
+ participant GitHub Actions
+ participant token.actions.githubusercontent.com
+ participant Entra ID
+ GitHub Actions->>token.actions.githubusercontent.com: OpenID Connect Request
+ token.actions.githubusercontent.com->>GitHub Actions: GitHub JWT
+ GitHub Actions->>Entra ID: Request Access Token (Authorization: GitHub JWT)
+ Entra ID->>GitHub Actions: Azure Access Token
+```
+
+For more details, see [GitHub's OIDC documentation for Azure](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure).
+
+
+
+
+```mermaid
+sequenceDiagram
+ participant GitLab CI/CD
+ participant gitlab.com
+ participant Entra ID
+ GitLab CI/CD->>gitlab.com: OIDC ID Token Request with preconfigured audience
+ gitlab.com->>GitLab CI/CD: GitLab JWT
+ GitLab CI/CD->>Entra ID: Request Access Token (Authorization: GitLab JWT)
+ Entra ID->>GitLab CI/CD: Azure Access Token
+```
+
+For more details, see [GitLab's documentation on Azure integration](https://docs.gitlab.com/ee/ci/cloud_services/).
+
+
+
+
+As a result, Pipelines avoids storing long-lived Azure credentials and instead relies on ephemeral access tokens generated by Entra ID. These tokens grant least-privilege access to the resources needed for the specific operation being performed.
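On the Azure side, this trust is commonly modeled as a federated identity credential on a user-assigned managed identity; the names, resource group, and subject in this sketch are hypothetical:

```hcl
# Hypothetical sketch: federate a user-assigned identity with GitHub's OIDC issuer.
resource "azurerm_user_assigned_identity" "pipelines" {
  name                = "pipelines"
  resource_group_name = "rg-pipelines" # hypothetical resource group
  location            = "eastus"
}

resource "azurerm_federated_identity_credential" "pipelines" {
  name                = "pipelines-github"
  resource_group_name = "rg-pipelines"
  parent_id           = azurerm_user_assigned_identity.pipelines.id
  issuer              = "https://token.actions.githubusercontent.com"
  audience            = ["api://AzureADTokenExchange"]
  subject             = "repo:acme/infrastructure-live:ref:refs/heads/main"
}
```

Entra ID exchanges a GitHub JWT whose `sub` claim matches this `subject` for an Azure access token, so no client secret is ever stored.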
+
+
+
+
## Other providers
-If you are managing configurations for additional services using Infrastructure as Code (IaC) tools like Terragrunt, you may need to configure a provider for those services in Pipelines. In such cases, you must supply the necessary credentials for authenticating with the provider. Whenever possible, follow the same principles applied to AWS: use ephemeral credentials, grant only the minimum permissions required, and avoid storing long-lived credentials on disk.
+If you are managing configurations for additional services using Infrastructure as Code (IaC) tools like Terragrunt, you may need to configure a provider for those services in Pipelines. In such cases, you must supply the necessary credentials for authenticating with the provider. Whenever possible, follow the same principles: use ephemeral credentials, grant only the minimum permissions required, and avoid storing long-lived credentials on disk.
### Configuring providers in Terragrunt
For example, consider configuring the [Cloudflare Terraform provider](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs). This provider supports multiple authentication methods to enable secure API calls to Cloudflare services. To authenticate with Cloudflare and manage the associated credentials securely, you need to configure your `terragrunt.hcl` file appropriately.
-First, examine the default AWS authentication provider setup in the root `terragrunt.hcl` file:
+First, examine the default cloud provider authentication setup in the root `root.hcl` file from Gruntwork-provided Boilerplate templates:
+
+
-```hcl
+```hcl title="root.hcl"
generate "provider" {
path = "provider.tf"
if_exists = "overwrite_terragrunt"
@@ -93,9 +152,29 @@ EOF
}
```
-This provider block is dynamically generated during the execution of any `terragrunt` command and supplies the AWS provider with the required configuration to discover credentials made available by the pipelines.
+This provider block (the value of `contents`) is dynamically generated as the file `provider.tf` during the execution of any `terragrunt` command and supplies the OpenTofu/Terraform AWS provider with the required configuration to discover credentials made available by Pipelines.
-With this approach, no secrets are written to disk. Instead, the AWS provider dynamically retrieves secrets at runtime.
+
+
+
+```hcl
+generate "provider" {
+ path = "provider.tf"
+ if_exists = "overwrite_terragrunt"
+  contents = <<EOF
+provider "azurerm" {
+  features {}
+  use_oidc = true
+}
+EOF
+}
+```
+
+
+With this approach, no secrets are written to disk. Instead, the cloud provider dynamically retrieves secrets at runtime.
According to the Cloudflare documentation, the Cloudflare provider supports several authentication methods. One option involves using the [api_token](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs#api_key) field in the `provider` block, as illustrated in the documentation:
@@ -126,25 +205,42 @@ In this context, `fetch-cloudflare-api-token.sh` is a script designed to retriev
You are free to use any method to fetch the secret, provided it outputs the value to stdout.
-Here are two straightforward examples of how you might fetch the secret:
+Here are straightforward examples of how you might fetch the secret based on your cloud provider:
+
+
+
+
+Using AWS Secrets Manager:
+
+```bash
+aws secretsmanager get-secret-value --secret-id cloudflare-api-token --query SecretString --output text
+```
+
+Using AWS SSM Parameter Store:
+
+```bash
+aws ssm get-parameter --name cloudflare-api-token --query Parameter.Value --output text --with-decryption
+```
+
+Given that Pipelines is already authenticated with AWS for interacting with state, this setup provides a convenient method for retrieving secrets.
-1. Using `aws secretsmanager`:
+
+
- ```bash
- aws secretsmanager get-secret-value --secret-id cloudflare-api-token --query SecretString --output text
- ```
+Using Azure Key Vault:
-2. Using `aws ssm`:
+```bash
+az keyvault secret show --vault-name <vault-name> --name cloudflare-api-token --query value --output tsv
+```
- ```bash
- aws ssm get-parameter --name cloudflare-api-token --query Parameter.Value --output text --with-decryption
- ```
+Given that Pipelines is already authenticated with Azure for interacting with state, this setup provides a convenient method for retrieving secrets.
-Given that Pipelines is already authenticated with AWS for interacting with state, this setup provides a convenient method for retrieving the Cloudflare API token.
+
+
:::
-Alternatively, note that the `api_token` field is optional. Similar to the AWS provider, you can use the `CLOUDFLARE_API_TOKEN` environment variable to supply the API token to the provider at runtime.
+Alternatively, note that the `api_token` field is optional. Similar to cloud provider authentication, you can use the `CLOUDFLARE_API_TOKEN` environment variable to supply the API token to the provider at runtime.
To achieve this, you can update the `provider` block as follows:
@@ -172,6 +268,7 @@ terraform {
}
}
```
+
### Managing secrets
When configuring providers and Pipelines, it's important to store secrets in a secure and accessible location. Several options are available for managing secrets, each with its advantages and trade-offs.
@@ -211,33 +308,62 @@ GitLab CI/CD Variables provide a native way to store secrets for your pipelines.
-#### AWS Secrets Manager
+#### Cloud Provider Secret Stores
+
+Cloud providers offer dedicated secret management services with advanced features and security controls.
+
+
+
+
+**AWS Secrets Manager**
AWS Secrets Manager offers a sophisticated solution for managing secrets. It allows for provisioning secrets in AWS and configuring fine-grained access controls through AWS IAM. It also supports advanced features like secret rotation and access auditing.
**Advantages**:
-- Granular access permissions, ensuring secrets are only accessible when required.
-- Support for automated secret rotation and detailed access auditing.
+- Granular access permissions, ensuring secrets are only accessible when required
+- Support for automated secret rotation and detailed access auditing
**Trade-offs**:
-- Increased complexity in setup and management.
-- Potentially higher costs associated with its use.
+- Increased complexity in setup and management
+- Potentially higher costs associated with its use
Refer to the [AWS Secrets Manager documentation](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) for further details.
-#### AWS SSM Parameter Store
+**AWS SSM Parameter Store**
AWS SSM Parameter Store is a simpler and more cost-effective alternative to Secrets Manager. It supports secret storage and access control through AWS IAM, providing a basic solution for managing sensitive data.
**Advantages**:
-- Lower cost compared to Secrets Manager.
-- Granular access control similar to Secrets Manager.
+- Lower cost compared to Secrets Manager
+- Granular access control similar to Secrets Manager
**Trade-offs**:
-- Limited functionality compared to Secrets Manager, such as less robust secret rotation capabilities.
+- Limited functionality compared to Secrets Manager, such as less robust secret rotation capabilities
Refer to the [AWS SSM Parameter Store documentation](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) for additional information.
+
+
+
+**Azure Key Vault**
+
+Azure Key Vault provides a comprehensive solution for managing secrets, keys, and certificates. It offers fine-grained access controls through Azure RBAC and supports advanced features like secret versioning and access auditing.
+
+**Advantages**:
+- Granular access permissions with Azure RBAC and access policies
+- Support for secret versioning, soft-delete, and purge protection
+- Integration with Azure Monitor for detailed audit logs
+- Hardware Security Module (HSM) backed options for enhanced security
+
+**Trade-offs**:
+- Additional setup complexity for RBAC and access policies
+- Costs associated with transactions and HSM-backed vaults
+
+Refer to the [Azure Key Vault documentation](https://learn.microsoft.com/en-us/azure/key-vault/general/overview) for further details.
+
+
+
+
#### Deciding on a secret store
When selecting a secret store, consider the following key factors:
diff --git a/docs/2.0/docs/pipelines/installation/addingexistingrepo.mdx b/docs/2.0/docs/pipelines/installation/addingexistingrepo.mdx
index 88491ff685..5078e7e77c 100644
--- a/docs/2.0/docs/pipelines/installation/addingexistingrepo.mdx
+++ b/docs/2.0/docs/pipelines/installation/addingexistingrepo.mdx
@@ -597,6 +597,12 @@ bootstrap = {
You can use those values to set the values for `plan_client_id` and `apply_client_id` in the `.gruntwork/environment-<env>.hcl` file.
+ :::tip
+
+ We're using the `-force-copy` flag here to prevent OpenTofu from waiting on an interactive prompt to confirm copying local state to the new backend.
+
+ :::
+
:::note Progress Checklist
diff --git a/sidebars/docs.js b/sidebars/docs.js
index 311f531900..328411b870 100644
--- a/sidebars/docs.js
+++ b/sidebars/docs.js
@@ -383,11 +383,6 @@ const sidebar = [
type: "doc",
id: "2.0/docs/pipelines/guides/terragrunt-env-vars",
},
- {
- label: "Setup a Delegated Repository",
- type: "doc",
- id: "2.0/docs/pipelines/guides/setup-delegated-repo",
- },
{
label: "Handling Broken IaC",
type: "doc",
@@ -533,6 +528,11 @@ const sidebar = [
type: "doc",
id: "2.0/docs/accountfactory/guides/delegated-repositories",
},
+ {
+ label: "Setup a Delegated Repository",
+ type: "doc",
+ id: "2.0/docs/accountfactory/guides/setup-delegated-repo",
+ },
{
label: "Adding Collaborators to Delegated Repositories",
type: "doc",
diff --git a/src/redirects.js b/src/redirects.js
index e6f6441866..698ebefc22 100644
--- a/src/redirects.js
+++ b/src/redirects.js
@@ -372,5 +372,9 @@ export const redirects = [
{
from: '/2.0/docs/pipelines/installation/prerequisites/awslandingzone',
to: '/2.0/docs/accountfactory/prerequisites/awslandingzone'
+ },
+ {
+ from: '/2.0/docs/pipelines/guides/setup-delegated-repo',
+ to: '/2.0/docs/accountfactory/guides/setup-delegated-repo'
}
]